Jan 27 08:53:34 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 27 08:53:34 crc restorecon[4701]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 27 08:53:34 crc restorecon[4701]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 27 08:53:34 crc restorecon[4701]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:34 crc 
restorecon[4701]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 08:53:34 crc restorecon[4701]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 08:53:34 crc restorecon[4701]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 08:53:34 crc restorecon[4701]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 08:53:34 crc 
restorecon[4701]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 
08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 27 08:53:34 crc restorecon[4701]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 08:53:34 crc 
restorecon[4701]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 08:53:34 crc restorecon[4701]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 08:53:34 crc restorecon[4701]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 08:53:34 crc restorecon[4701]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 08:53:34 crc 
restorecon[4701]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 08:53:34 crc restorecon[4701]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:34 crc restorecon[4701]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:34
crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:34 crc restorecon[4701]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:34 crc restorecon[4701]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:34 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 
08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 08:53:35 crc 
restorecon[4701]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc 
restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc 
restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 
27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 
crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc 
restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc 
restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 08:53:35 crc restorecon[4701]:
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 08:53:35 crc 
restorecon[4701]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 08:53:35 crc restorecon[4701]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 08:53:35 crc restorecon[4701]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 08:53:35 crc restorecon[4701]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 27 08:53:36 crc kubenswrapper[4985]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 27 08:53:36 crc kubenswrapper[4985]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 27 08:53:36 crc kubenswrapper[4985]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 27 08:53:36 crc kubenswrapper[4985]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 27 08:53:36 crc kubenswrapper[4985]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 27 08:53:36 crc kubenswrapper[4985]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.172575 4985 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.178807 4985 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.178839 4985 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.178848 4985 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.178856 4985 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.178862 4985 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.178870 4985 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.178876 4985 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.178884 4985 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.178892 4985 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.178902 4985 
feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.178911 4985 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.178919 4985 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.178932 4985 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.178939 4985 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.178946 4985 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.178953 4985 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.178960 4985 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.178970 4985 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.178979 4985 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.178985 4985 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.178991 4985 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.178997 4985 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.179003 4985 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.179010 4985 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.179015 4985 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.179021 4985 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.179027 4985 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.179033 4985 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.179042 4985 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.179051 4985 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.179059 4985 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.179066 4985 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.179073 4985 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.179080 4985 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.179086 4985 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.179092 4985 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.179100 4985 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.179106 4985 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.179116 4985 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.179125 4985 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.179132 4985 feature_gate.go:330] unrecognized feature gate: Example
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.179140 4985 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.179147 4985 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.179155 4985 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.179162 4985 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.179169 4985 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.179176 4985 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.179182 4985 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.179190 4985 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.179198 4985 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.179204 4985 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.179210 4985 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.179216 4985 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.179222 4985 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.179228 4985 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.179234 4985 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.179240 4985 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.179246 4985 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.179252 4985 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.179263 4985 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.179272 4985 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.179280 4985 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.179288 4985 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.179294 4985 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.179302 4985 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.179309 4985 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.179315 4985 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.179321 4985 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.179328 4985 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.179334 4985 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.179341 4985 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.179541 4985 flags.go:64] FLAG: --address="0.0.0.0"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.179560 4985 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.179576 4985 flags.go:64] FLAG: --anonymous-auth="true"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.179587 4985 flags.go:64] FLAG: --application-metrics-count-limit="100"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.179597 4985 flags.go:64] FLAG: --authentication-token-webhook="false"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.179605 4985 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.179617 4985 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.179626 4985 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.179634 4985 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.179642 4985 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.179652 4985 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.179661 4985 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.179669 4985 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.179678 4985 flags.go:64] FLAG: --cgroup-root=""
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.179686 4985 flags.go:64] FLAG: --cgroups-per-qos="true"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.179693 4985 flags.go:64] FLAG: --client-ca-file=""
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.179701 4985 flags.go:64] FLAG: --cloud-config=""
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.179708 4985 flags.go:64] FLAG: --cloud-provider=""
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.179716 4985 flags.go:64] FLAG: --cluster-dns="[]"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.179726 4985 flags.go:64] FLAG: --cluster-domain=""
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.179733 4985 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.179740 4985 flags.go:64] FLAG: --config-dir=""
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.179747 4985 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.179755 4985 flags.go:64] FLAG: --container-log-max-files="5"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.179765 4985 flags.go:64] FLAG: --container-log-max-size="10Mi"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.179773 4985 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.179781 4985 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.179790 4985 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.179798 4985 flags.go:64] FLAG: --contention-profiling="false"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.179805 4985 flags.go:64] FLAG: --cpu-cfs-quota="true"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.179813 4985 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.179822 4985 flags.go:64] FLAG: --cpu-manager-policy="none"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.179829 4985 flags.go:64] FLAG: --cpu-manager-policy-options=""
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.179839 4985 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.179846 4985 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.179855 4985 flags.go:64] FLAG: --enable-debugging-handlers="true"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.179863 4985 flags.go:64] FLAG: --enable-load-reader="false"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.179871 4985 flags.go:64] FLAG: --enable-server="true"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.179878 4985 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.179889 4985 flags.go:64] FLAG: --event-burst="100"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.179896 4985 flags.go:64] FLAG: --event-qps="50"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.179904 4985 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.179912 4985 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.179919 4985 flags.go:64] FLAG: --eviction-hard=""
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.179929 4985 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.179936 4985 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.179943 4985 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.179951 4985 flags.go:64] FLAG: --eviction-soft=""
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.179958 4985 flags.go:64] FLAG: --eviction-soft-grace-period=""
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.179968 4985 flags.go:64] FLAG: --exit-on-lock-contention="false"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.179975 4985 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.179983 4985 flags.go:64] FLAG: --experimental-mounter-path=""
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.179990 4985 flags.go:64] FLAG: --fail-cgroupv1="false"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.179997 4985 flags.go:64] FLAG: --fail-swap-on="true"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180004 4985 flags.go:64] FLAG: --feature-gates=""
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180013 4985 flags.go:64] FLAG: --file-check-frequency="20s"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180021 4985 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180028 4985 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180036 4985 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180044 4985 flags.go:64] FLAG: --healthz-port="10248"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180051 4985 flags.go:64] FLAG: --help="false"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180058 4985 flags.go:64] FLAG: --hostname-override=""
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180065 4985 flags.go:64] FLAG: --housekeeping-interval="10s"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180073 4985 flags.go:64] FLAG: --http-check-frequency="20s"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180081 4985 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180088 4985 flags.go:64] FLAG: --image-credential-provider-config=""
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180095 4985 flags.go:64] FLAG: --image-gc-high-threshold="85"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180105 4985 flags.go:64] FLAG: --image-gc-low-threshold="80"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180112 4985 flags.go:64] FLAG: --image-service-endpoint=""
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180120 4985 flags.go:64] FLAG: --kernel-memcg-notification="false"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180127 4985 flags.go:64] FLAG: --kube-api-burst="100"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180134 4985 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180142 4985 flags.go:64] FLAG: --kube-api-qps="50"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180149 4985 flags.go:64] FLAG: --kube-reserved=""
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180156 4985 flags.go:64] FLAG: --kube-reserved-cgroup=""
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180163 4985 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180170 4985 flags.go:64] FLAG: --kubelet-cgroups=""
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180177 4985 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180185 4985 flags.go:64] FLAG: --lock-file=""
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180191 4985 flags.go:64] FLAG: --log-cadvisor-usage="false"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180198 4985 flags.go:64] FLAG: --log-flush-frequency="5s"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180206 4985 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180218 4985 flags.go:64] FLAG: --log-json-split-stream="false"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180225 4985 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180232 4985 flags.go:64] FLAG: --log-text-split-stream="false"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180240 4985 flags.go:64] FLAG: --logging-format="text"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180246 4985 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180254 4985 flags.go:64] FLAG: --make-iptables-util-chains="true"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180261 4985 flags.go:64] FLAG: --manifest-url=""
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180267 4985 flags.go:64] FLAG: --manifest-url-header=""
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180277 4985 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180285 4985 flags.go:64] FLAG: --max-open-files="1000000"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180294 4985 flags.go:64] FLAG: --max-pods="110"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180302 4985 flags.go:64] FLAG: --maximum-dead-containers="-1"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180309 4985 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180319 4985 flags.go:64] FLAG: --memory-manager-policy="None"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180326 4985 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180334 4985 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180341 4985 flags.go:64] FLAG: --node-ip="192.168.126.11"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180361 4985 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180383 4985 flags.go:64] FLAG: --node-status-max-images="50"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180390 4985 flags.go:64] FLAG: --node-status-update-frequency="10s"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180397 4985 flags.go:64] FLAG: --oom-score-adj="-999"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180405 4985 flags.go:64] FLAG: --pod-cidr=""
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180414 4985 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180425 4985 flags.go:64] FLAG: --pod-manifest-path=""
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180433 4985 flags.go:64] FLAG: --pod-max-pids="-1"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180441 4985 flags.go:64] FLAG: --pods-per-core="0"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180450 4985 flags.go:64] FLAG: --port="10250"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180458 4985 flags.go:64] FLAG: --protect-kernel-defaults="false"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180465 4985 flags.go:64] FLAG: --provider-id=""
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180472 4985 flags.go:64] FLAG: --qos-reserved=""
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180479 4985 flags.go:64] FLAG: --read-only-port="10255"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180487 4985 flags.go:64] FLAG: --register-node="true"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180494 4985 flags.go:64] FLAG: --register-schedulable="true"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180501 4985 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180544 4985 flags.go:64] FLAG: --registry-burst="10"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180549 4985 flags.go:64] FLAG: --registry-qps="5"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180555 4985 flags.go:64] FLAG: --reserved-cpus=""
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180561 4985 flags.go:64] FLAG: --reserved-memory=""
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180569 4985 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180576 4985 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180582 4985 flags.go:64] FLAG: --rotate-certificates="false"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180588 4985 flags.go:64] FLAG: --rotate-server-certificates="false"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180593 4985 flags.go:64] FLAG: --runonce="false"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180599 4985 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180605 4985 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180612 4985 flags.go:64] FLAG: --seccomp-default="false"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180617 4985 flags.go:64] FLAG: --serialize-image-pulls="true"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180623 4985 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180629 4985 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180635 4985 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180644 4985 flags.go:64] FLAG: --storage-driver-password="root"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180650 4985 flags.go:64] FLAG: --storage-driver-secure="false"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180656 4985 flags.go:64] FLAG: --storage-driver-table="stats"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180662 4985 flags.go:64] FLAG: --storage-driver-user="root"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180668 4985 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180674 4985 flags.go:64] FLAG: --sync-frequency="1m0s"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180679 4985 flags.go:64] FLAG: --system-cgroups=""
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180685 4985 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180694 4985 flags.go:64] FLAG: --system-reserved-cgroup=""
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180700 4985 flags.go:64] FLAG: --tls-cert-file=""
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180706 4985 flags.go:64] FLAG: --tls-cipher-suites="[]"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180713 4985 flags.go:64] FLAG: --tls-min-version=""
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180719 4985 flags.go:64] FLAG: --tls-private-key-file=""
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180725 4985 flags.go:64] FLAG: --topology-manager-policy="none"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180731 4985 flags.go:64] FLAG: --topology-manager-policy-options=""
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180736 4985 flags.go:64] FLAG: --topology-manager-scope="container"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180742 4985 flags.go:64] FLAG: --v="2"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180750 4985 flags.go:64] FLAG: --version="false"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180758 4985 flags.go:64] FLAG: --vmodule=""
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180765 4985 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.180771 4985 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.180912 4985 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.180918 4985 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.180924 4985 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.180930 4985 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.180936 4985 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.180941 4985 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.180947 4985 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.180952 4985 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.180957 4985 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.180962 4985 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.180967 4985 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.180977 4985 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.180983 4985 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.180989 4985 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.180994 4985 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.180999 4985 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.181004 4985 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.181009 4985 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.181015 4985 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.181020 4985 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.181024 4985 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.181030 4985 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.181036 4985 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.181043 4985 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.181050 4985 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.181055 4985 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.181060 4985 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.181065 4985 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.181072 4985 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.181079 4985 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.181085 4985 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.181091 4985 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.181096 4985 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.181101 4985 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.181106 4985 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.181111 4985 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.181116 4985 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.181121 4985 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.181126 4985 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.181131 4985 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.181143 4985 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.181148 4985 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.181153 4985 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.181161 4985 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.181166 4985 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.181171 4985 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.181176 4985 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.181181 4985 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.181186 4985 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.181190 4985 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.181195 4985 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.181202 4985 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.181207 4985 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.181213 4985 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.181221 4985 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.181228 4985 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.181235 4985 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.181242 4985 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.181249 4985 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.181255 4985 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.181262 4985 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.181268 4985 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.181274 4985 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.181280 4985 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.181286 4985 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.181292 4985 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.181299 4985 feature_gate.go:330] unrecognized feature gate: Example
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.181305 4985 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.181311 4985 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.181317 4985 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.181323 4985 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.182278 4985 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.198553 4985 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.198641 4985 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.198810 4985 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.198843 4985 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.198854 4985 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.198864 4985 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.198874 4985 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.198884 4985 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.198893 4985 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.198902 4985 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.198910 4985 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.198919 4985 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.198927 4985 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.198938 4985 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.198948 4985 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.198959 4985 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.198969 4985 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.198979
4985 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.198990 4985 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.199000 4985 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.199010 4985 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.199024 4985 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.199038 4985 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.199049 4985 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.199059 4985 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.199068 4985 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.199077 4985 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.199085 4985 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.199093 4985 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.199103 4985 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.199110 4985 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.199118 4985 feature_gate.go:330] unrecognized 
feature gate: CSIDriverSharedResource Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.199126 4985 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.199136 4985 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.199144 4985 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.199153 4985 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.199161 4985 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.199169 4985 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.199177 4985 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.199186 4985 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.199194 4985 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.199202 4985 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.199210 4985 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.199219 4985 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.199230 4985 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.199242 4985 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.199253 4985 
feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.199263 4985 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.199276 4985 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.199323 4985 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.199335 4985 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.199343 4985 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.199351 4985 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.199360 4985 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.199369 4985 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.199378 4985 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.199386 4985 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.199396 4985 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.199404 4985 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.199412 4985 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.199420 4985 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 27 
08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.199428 4985 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.199439 4985 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.199448 4985 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.199456 4985 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.199542 4985 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.199551 4985 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.199560 4985 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.199569 4985 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.199577 4985 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.199585 4985 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.199593 4985 feature_gate.go:330] unrecognized feature gate: Example Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.199601 4985 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.199615 4985 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false 
EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.199896 4985 feature_gate.go:330] unrecognized feature gate: Example Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.199925 4985 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.199937 4985 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.199949 4985 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.199960 4985 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.199969 4985 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.199978 4985 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.199986 4985 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.199994 4985 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.200002 4985 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.200009 4985 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.200017 4985 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 27 
08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.200025 4985 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.200033 4985 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.200040 4985 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.200049 4985 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.200058 4985 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.200066 4985 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.200074 4985 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.200083 4985 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.200090 4985 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.200098 4985 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.200106 4985 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.200114 4985 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.200122 4985 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.200130 4985 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.200137 4985 feature_gate.go:330] 
unrecognized feature gate: HardwareSpeed Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.200145 4985 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.200155 4985 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.200168 4985 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.200178 4985 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.200188 4985 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.200197 4985 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.200205 4985 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.200213 4985 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.200222 4985 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.200232 4985 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.200244 4985 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.200258 4985 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.200271 4985 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.200281 4985 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.200290 4985 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.200299 4985 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.200307 4985 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.200315 4985 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.200323 4985 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.200331 4985 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.200339 4985 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.200348 4985 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.200355 4985 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.200363 4985 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.200372 4985 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.200380 4985 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 27 08:53:36 crc kubenswrapper[4985]: 
W0127 08:53:36.200390 4985 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.200398 4985 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.200406 4985 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.200414 4985 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.200423 4985 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.200430 4985 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.200439 4985 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.200446 4985 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.200454 4985 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.200462 4985 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.200469 4985 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.200477 4985 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.200485 4985 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.200493 4985 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.200503 4985 feature_gate.go:353] Setting GA feature gate 
DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.200543 4985 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.200552 4985 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.200564 4985 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.200577 4985 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.200901 4985 server.go:940] "Client rotation is on, will bootstrap in background" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.207853 4985 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.208012 4985 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.220759 4985 server.go:997] "Starting client certificate rotation" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.220843 4985 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.222570 4985 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-03 20:24:55.350416624 +0000 UTC Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.222716 4985 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.250710 4985 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 27 08:53:36 crc kubenswrapper[4985]: E0127 08:53:36.253903 4985 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.192:6443: connect: connection refused" logger="UnhandledError" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.254285 4985 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.269812 4985 log.go:25] "Validated CRI v1 runtime API" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.310747 4985 log.go:25] "Validated CRI v1 image API" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.312934 4985 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.319126 4985 
fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-27-08-48-25-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.319229 4985 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.357470 4985 manager.go:217] Machine: {Timestamp:2026-01-27 08:53:36.352722371 +0000 UTC m=+0.643817232 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:66a0621c-9cbd-4c42-8f6a-941d6ebd53fb BootID:095ded87-0bbb-47a5-b76f-f5bb300a00ab Filesystems:[{Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 
Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:4e:70:4c Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:4e:70:4c Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:17:40:19 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:7d:16:a4 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:d0:6a:d5 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:d0:e2:78 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:66:5a:e4:26:9b:df Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:b2:a0:bb:0c:c6:3d Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 
Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.357740 4985 manager_no_libpfm.go:29] cAdvisor is build without cgo 
and/or libpfm support. Perf event counters are not available. Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.357948 4985 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.359149 4985 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.359348 4985 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.359396 4985 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"10
0Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.359644 4985 topology_manager.go:138] "Creating topology manager with none policy" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.359655 4985 container_manager_linux.go:303] "Creating device plugin manager" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.360250 4985 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.360286 4985 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.361307 4985 state_mem.go:36] "Initialized new in-memory state store" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.361500 4985 server.go:1245] "Using root directory" path="/var/lib/kubelet" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.365980 4985 kubelet.go:418] "Attempting to sync node with API server" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.366011 4985 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.366041 4985 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.366060 4985 kubelet.go:324] "Adding apiserver pod source" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.366077 4985 apiserver.go:42] "Waiting for node sync before watching apiserver pods" 
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.372545 4985 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.373709 4985 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.192:6443: connect: connection refused Jan 27 08:53:36 crc kubenswrapper[4985]: E0127 08:53:36.373792 4985 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.192:6443: connect: connection refused" logger="UnhandledError" Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.373948 4985 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.192:6443: connect: connection refused Jan 27 08:53:36 crc kubenswrapper[4985]: E0127 08:53:36.373986 4985 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.192:6443: connect: connection refused" logger="UnhandledError" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.375056 4985 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.378237 4985 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.380140 4985 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.380186 4985 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.380203 4985 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.380219 4985 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.380243 4985 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.380258 4985 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.380274 4985 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.380297 4985 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.380314 4985 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.380328 4985 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.380347 4985 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.380363 4985 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.381388 4985 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/csi" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.382265 4985 server.go:1280] "Started kubelet" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.383104 4985 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.383131 4985 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.384300 4985 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.192:6443: connect: connection refused Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.384419 4985 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 27 08:53:36 crc systemd[1]: Started Kubernetes Kubelet. Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.384978 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.385024 4985 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.385264 4985 volume_manager.go:287] "The desired_state_of_world populator starts" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.385278 4985 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 27 08:53:36 crc kubenswrapper[4985]: E0127 08:53:36.385368 4985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.385431 4985 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.385492 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration 
is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 13:54:07.618954694 +0000 UTC Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.386124 4985 factory.go:55] Registering systemd factory Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.386182 4985 factory.go:221] Registration of the systemd container factory successfully Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.386107 4985 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.192:6443: connect: connection refused Jan 27 08:53:36 crc kubenswrapper[4985]: E0127 08:53:36.386255 4985 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.192:6443: connect: connection refused" logger="UnhandledError" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.386737 4985 factory.go:153] Registering CRI-O factory Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.386779 4985 factory.go:221] Registration of the crio container factory successfully Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.386876 4985 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.386916 4985 factory.go:103] Registering Raw factory Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.386940 4985 manager.go:1196] Started watching for new ooms in manager Jan 27 08:53:36 crc kubenswrapper[4985]: E0127 08:53:36.390668 4985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.192:6443: connect: connection refused" interval="200ms" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.391547 4985 server.go:460] "Adding debug handlers to kubelet server" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.393009 4985 manager.go:319] Starting recovery of all containers Jan 27 08:53:36 crc kubenswrapper[4985]: E0127 08:53:36.392324 4985 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.192:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188e8a871516b64e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-27 08:53:36.382211662 +0000 UTC m=+0.673306533,LastTimestamp:2026-01-27 08:53:36.382211662 +0000 UTC m=+0.673306533,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.401303 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.401392 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 
08:53:36.401416 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.401437 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.401459 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.401481 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.401502 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.401566 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.401590 4985 reconstruct.go:130] "Volume 
is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.401608 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.401627 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.401650 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.401673 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.401696 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.401719 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.401741 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.401763 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.401784 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.401805 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.401825 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.401858 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.401881 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.401904 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.401926 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.401945 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.401964 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.401989 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.402012 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.402033 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.402053 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.402130 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.402156 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.402177 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" 
seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.402199 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.402284 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.402310 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.402334 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.402353 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.402372 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 
08:53:36.402397 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.402420 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.402493 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.402541 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.402564 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.402584 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.402607 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.402632 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.402657 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.402681 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.402701 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.406365 4985 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.406420 4985 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.406445 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.406474 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.406498 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.406549 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.406618 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.406639 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.406659 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.406679 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.406698 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.406717 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.406740 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.406761 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" 
volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.406781 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.406801 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.406825 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.406846 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.406865 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.406899 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.406921 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.406942 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.406962 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.406986 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.407007 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.407026 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" 
volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.407062 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.407084 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.407104 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.407122 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.407140 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.407162 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" 
volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.407182 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.407202 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.407221 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.407241 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.407260 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.407280 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.407300 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.407319 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.407340 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.407360 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.407381 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.407403 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.407428 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.407450 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.407474 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.407495 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.407539 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.407561 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.407580 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.407600 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.407620 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.407641 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.407660 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.407688 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.407712 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.407774 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.407795 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.407818 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.407839 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.407861 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" 
seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.407882 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.407908 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.407926 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.407948 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.407967 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.407987 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.408007 4985 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.408030 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.408049 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.408070 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.408089 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.408111 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.408132 4985 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.408154 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.408178 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.408198 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.408220 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.408241 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.408260 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" 
volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.408282 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.408299 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.408318 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.408337 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.408356 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.408375 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" 
seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.408409 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.408429 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.408461 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.408480 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.408501 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.408546 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.408568 4985 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.408592 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.408611 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.408632 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.408654 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.408674 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.408695 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.408715 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.408736 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.408753 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.408773 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.408792 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.408821 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" 
volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.408841 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.408861 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.408884 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.408904 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.408925 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.408943 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" 
volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.408964 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.408986 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.409006 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.409028 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.409046 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.409065 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" 
volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.409083 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.409103 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.409123 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.409143 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.409165 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.409188 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" 
seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.409210 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.409231 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.409250 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.409271 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.409290 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.409312 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.409333 
4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.409352 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.409374 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.409395 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.409414 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.409436 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.409457 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.409479 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.409502 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.409566 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.409589 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.409609 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.409630 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.409649 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.409669 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.409690 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.409710 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.409731 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.409753 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" 
volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.409774 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.409794 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.409823 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.409842 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.409861 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.409881 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" 
volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.409900 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.409920 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.409939 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.409957 4985 reconstruct.go:97] "Volume reconstruction finished" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.409971 4985 reconciler.go:26] "Reconciler: start to sync state" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.414072 4985 manager.go:324] Recovery completed Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.424415 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.426690 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.426753 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.426772 4985 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.427914 4985 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.427937 4985 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.427959 4985 state_mem.go:36] "Initialized new in-memory state store" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.441556 4985 policy_none.go:49] "None policy: Start" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.442918 4985 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.442954 4985 state_mem.go:35] "Initializing new in-memory state store" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.449285 4985 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.450634 4985 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.450679 4985 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.450711 4985 kubelet.go:2335] "Starting kubelet main sync loop" Jan 27 08:53:36 crc kubenswrapper[4985]: E0127 08:53:36.450855 4985 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 27 08:53:36 crc kubenswrapper[4985]: W0127 08:53:36.451498 4985 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.192:6443: connect: connection refused Jan 27 08:53:36 crc kubenswrapper[4985]: E0127 08:53:36.451576 4985 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.192:6443: connect: connection refused" logger="UnhandledError" Jan 27 08:53:36 crc kubenswrapper[4985]: E0127 08:53:36.486498 4985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.519106 4985 manager.go:334] "Starting Device Plugin manager" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.519173 4985 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.519187 4985 server.go:79] "Starting device plugin registration server" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.520225 4985 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 27 08:53:36 crc 
kubenswrapper[4985]: I0127 08:53:36.520247 4985 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.520426 4985 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.520698 4985 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.520721 4985 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 27 08:53:36 crc kubenswrapper[4985]: E0127 08:53:36.527471 4985 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.551759 4985 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.551957 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.553424 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.553474 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.553492 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.553704 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" 
Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.554750 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.554771 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.554782 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:53:36 crc kubenswrapper[4985]: E0127 08:53:36.592331 4985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.192:6443: connect: connection refused" interval="400ms" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.620392 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.622024 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.622071 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.622081 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.622120 4985 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 08:53:36 crc kubenswrapper[4985]: E0127 08:53:36.622743 4985 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.192:6443: connect: connection refused" node="crc" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.800076 4985 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.800584 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.800129 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.800168 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.800865 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.802137 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.802166 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.802174 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.805364 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.805391 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.805403 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.805573 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.805587 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.805597 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.805729 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.805920 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.805968 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.806650 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.806680 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.806690 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.806820 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.807228 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.807264 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.807740 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.807779 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.807790 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.808377 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.808408 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.808418 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.808612 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.808638 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.809732 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.809763 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.809773 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.809947 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.809963 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.809974 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.816193 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.816239 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.816292 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.816317 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.816340 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.816359 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.816379 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.816399 4985 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.816422 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.816446 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.816469 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.823345 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.825242 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.825276 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 
08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.825289 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.825326 4985 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 08:53:36 crc kubenswrapper[4985]: E0127 08:53:36.825747 4985 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.192:6443: connect: connection refused" node="crc" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.917979 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.918058 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.918130 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.918186 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: 
\"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.918239 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.918285 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.918292 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.918327 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.918290 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.918421 
4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.918329 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.918481 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.918552 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.918674 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.918712 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") 
pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.918741 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.918777 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.918811 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.918836 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.918859 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.918812 4985 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.918917 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.918927 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.918868 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.918973 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 08:53:36 crc kubenswrapper[4985]: I0127 08:53:36.919106 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 08:53:36 crc 
kubenswrapper[4985]: E0127 08:53:36.993502 4985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.192:6443: connect: connection refused" interval="800ms" Jan 27 08:53:37 crc kubenswrapper[4985]: I0127 08:53:37.020601 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 08:53:37 crc kubenswrapper[4985]: I0127 08:53:37.020671 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 08:53:37 crc kubenswrapper[4985]: I0127 08:53:37.020712 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 08:53:37 crc kubenswrapper[4985]: I0127 08:53:37.020750 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 08:53:37 crc kubenswrapper[4985]: I0127 08:53:37.020863 4985 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 08:53:37 crc kubenswrapper[4985]: I0127 08:53:37.020951 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 08:53:37 crc kubenswrapper[4985]: I0127 08:53:37.020909 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 08:53:37 crc kubenswrapper[4985]: I0127 08:53:37.020996 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 08:53:37 crc kubenswrapper[4985]: I0127 08:53:37.126967 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 08:53:37 crc kubenswrapper[4985]: I0127 08:53:37.134028 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 27 08:53:37 crc kubenswrapper[4985]: I0127 08:53:37.157575 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 08:53:37 crc kubenswrapper[4985]: W0127 08:53:37.174643 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-a9b1e68b499653442a02afb6a91eae19b55199ad420f8c2427abf8f5e979869c WatchSource:0}: Error finding container a9b1e68b499653442a02afb6a91eae19b55199ad420f8c2427abf8f5e979869c: Status 404 returned error can't find the container with id a9b1e68b499653442a02afb6a91eae19b55199ad420f8c2427abf8f5e979869c Jan 27 08:53:37 crc kubenswrapper[4985]: I0127 08:53:37.175222 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 08:53:37 crc kubenswrapper[4985]: W0127 08:53:37.175391 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-52ad9ee17dea9b16bc96c13343a7892a51c59da3a0954598eb5e1901a12f8d40 WatchSource:0}: Error finding container 52ad9ee17dea9b16bc96c13343a7892a51c59da3a0954598eb5e1901a12f8d40: Status 404 returned error can't find the container with id 52ad9ee17dea9b16bc96c13343a7892a51c59da3a0954598eb5e1901a12f8d40 Jan 27 08:53:37 crc kubenswrapper[4985]: W0127 08:53:37.180856 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-f5a803eaff30acb41ecb905d1b1ed3ce9cc832a4da45b12923faee7a0fbf6346 WatchSource:0}: Error finding container f5a803eaff30acb41ecb905d1b1ed3ce9cc832a4da45b12923faee7a0fbf6346: Status 404 returned error can't find the container with id f5a803eaff30acb41ecb905d1b1ed3ce9cc832a4da45b12923faee7a0fbf6346 Jan 27 08:53:37 crc kubenswrapper[4985]: I0127 08:53:37.183161 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 08:53:37 crc kubenswrapper[4985]: W0127 08:53:37.204275 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-c3bea8bea0cc40d232760c74ac00190b03164f8fb9ad7b0568d7488d913aba85 WatchSource:0}: Error finding container c3bea8bea0cc40d232760c74ac00190b03164f8fb9ad7b0568d7488d913aba85: Status 404 returned error can't find the container with id c3bea8bea0cc40d232760c74ac00190b03164f8fb9ad7b0568d7488d913aba85 Jan 27 08:53:37 crc kubenswrapper[4985]: W0127 08:53:37.209753 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-f4b3607d1cf2f859c790c9d2f91fd48adb10847700ce9313ca5ff23b5ac17013 WatchSource:0}: Error finding container f4b3607d1cf2f859c790c9d2f91fd48adb10847700ce9313ca5ff23b5ac17013: Status 404 returned error can't find the container with id f4b3607d1cf2f859c790c9d2f91fd48adb10847700ce9313ca5ff23b5ac17013 Jan 27 08:53:37 crc kubenswrapper[4985]: I0127 08:53:37.226881 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 08:53:37 crc kubenswrapper[4985]: I0127 08:53:37.230588 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:53:37 crc kubenswrapper[4985]: I0127 08:53:37.230654 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:53:37 crc kubenswrapper[4985]: I0127 08:53:37.230666 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:53:37 crc kubenswrapper[4985]: I0127 08:53:37.230704 4985 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 08:53:37 crc 
kubenswrapper[4985]: E0127 08:53:37.231325 4985 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.192:6443: connect: connection refused" node="crc" Jan 27 08:53:37 crc kubenswrapper[4985]: W0127 08:53:37.349962 4985 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.192:6443: connect: connection refused Jan 27 08:53:37 crc kubenswrapper[4985]: E0127 08:53:37.350070 4985 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.192:6443: connect: connection refused" logger="UnhandledError" Jan 27 08:53:37 crc kubenswrapper[4985]: I0127 08:53:37.385331 4985 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.192:6443: connect: connection refused Jan 27 08:53:37 crc kubenswrapper[4985]: I0127 08:53:37.386534 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 20:03:53.693850558 +0000 UTC Jan 27 08:53:37 crc kubenswrapper[4985]: I0127 08:53:37.456067 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a9b1e68b499653442a02afb6a91eae19b55199ad420f8c2427abf8f5e979869c"} Jan 27 08:53:37 crc kubenswrapper[4985]: I0127 08:53:37.457398 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"52ad9ee17dea9b16bc96c13343a7892a51c59da3a0954598eb5e1901a12f8d40"} Jan 27 08:53:37 crc kubenswrapper[4985]: I0127 08:53:37.459002 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f4b3607d1cf2f859c790c9d2f91fd48adb10847700ce9313ca5ff23b5ac17013"} Jan 27 08:53:37 crc kubenswrapper[4985]: I0127 08:53:37.460921 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"c3bea8bea0cc40d232760c74ac00190b03164f8fb9ad7b0568d7488d913aba85"} Jan 27 08:53:37 crc kubenswrapper[4985]: I0127 08:53:37.462657 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f5a803eaff30acb41ecb905d1b1ed3ce9cc832a4da45b12923faee7a0fbf6346"} Jan 27 08:53:37 crc kubenswrapper[4985]: W0127 08:53:37.536595 4985 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.192:6443: connect: connection refused Jan 27 08:53:37 crc kubenswrapper[4985]: E0127 08:53:37.536686 4985 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.192:6443: connect: connection refused" logger="UnhandledError" Jan 27 08:53:37 crc kubenswrapper[4985]: E0127 08:53:37.598334 4985 event.go:368] "Unable to write event (may retry after sleeping)" err="Post 
\"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.192:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188e8a871516b64e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-27 08:53:36.382211662 +0000 UTC m=+0.673306533,LastTimestamp:2026-01-27 08:53:36.382211662 +0000 UTC m=+0.673306533,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 27 08:53:37 crc kubenswrapper[4985]: W0127 08:53:37.611934 4985 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.192:6443: connect: connection refused Jan 27 08:53:37 crc kubenswrapper[4985]: E0127 08:53:37.612034 4985 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.192:6443: connect: connection refused" logger="UnhandledError" Jan 27 08:53:37 crc kubenswrapper[4985]: W0127 08:53:37.668594 4985 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.192:6443: connect: connection refused Jan 27 08:53:37 crc kubenswrapper[4985]: E0127 08:53:37.668682 4985 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list 
*v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.192:6443: connect: connection refused" logger="UnhandledError" Jan 27 08:53:37 crc kubenswrapper[4985]: E0127 08:53:37.794132 4985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.192:6443: connect: connection refused" interval="1.6s" Jan 27 08:53:38 crc kubenswrapper[4985]: I0127 08:53:38.031951 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 08:53:38 crc kubenswrapper[4985]: I0127 08:53:38.034607 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:53:38 crc kubenswrapper[4985]: I0127 08:53:38.034668 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:53:38 crc kubenswrapper[4985]: I0127 08:53:38.034688 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:53:38 crc kubenswrapper[4985]: I0127 08:53:38.034726 4985 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 08:53:38 crc kubenswrapper[4985]: E0127 08:53:38.035481 4985 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.192:6443: connect: connection refused" node="crc" Jan 27 08:53:38 crc kubenswrapper[4985]: I0127 08:53:38.386252 4985 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.192:6443: connect: connection refused Jan 27 08:53:38 crc kubenswrapper[4985]: I0127 08:53:38.387260 4985 
certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 00:46:08.718628511 +0000 UTC Jan 27 08:53:38 crc kubenswrapper[4985]: I0127 08:53:38.444808 4985 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 27 08:53:38 crc kubenswrapper[4985]: E0127 08:53:38.446404 4985 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.192:6443: connect: connection refused" logger="UnhandledError" Jan 27 08:53:38 crc kubenswrapper[4985]: I0127 08:53:38.468336 4985 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="b51dcd937393136def091c9c0ce2415ad0b80eb6ff2dc9abe7bd7b55c089eb0f" exitCode=0 Jan 27 08:53:38 crc kubenswrapper[4985]: I0127 08:53:38.468417 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"b51dcd937393136def091c9c0ce2415ad0b80eb6ff2dc9abe7bd7b55c089eb0f"} Jan 27 08:53:38 crc kubenswrapper[4985]: I0127 08:53:38.468460 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 08:53:38 crc kubenswrapper[4985]: I0127 08:53:38.470456 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:53:38 crc kubenswrapper[4985]: I0127 08:53:38.470502 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:53:38 crc kubenswrapper[4985]: I0127 08:53:38.470532 4985 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:53:38 crc kubenswrapper[4985]: I0127 08:53:38.472886 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d59e5d6d903bfaaedef392fdd94df2da1e4d965ddeb6b25d64b225725b39dd0a"} Jan 27 08:53:38 crc kubenswrapper[4985]: I0127 08:53:38.472948 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c207bc405194647411332c492e34b0c95d3321cf626a57a4a750fbcea34f54ec"} Jan 27 08:53:38 crc kubenswrapper[4985]: I0127 08:53:38.475402 4985 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="c3ad4e68bbb4b8338f534dc026ca9f1fe9fb161b29fee8945f1789a66965dea2" exitCode=0 Jan 27 08:53:38 crc kubenswrapper[4985]: I0127 08:53:38.475464 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"c3ad4e68bbb4b8338f534dc026ca9f1fe9fb161b29fee8945f1789a66965dea2"} Jan 27 08:53:38 crc kubenswrapper[4985]: I0127 08:53:38.475806 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 08:53:38 crc kubenswrapper[4985]: I0127 08:53:38.477001 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:53:38 crc kubenswrapper[4985]: I0127 08:53:38.477031 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:53:38 crc kubenswrapper[4985]: I0127 08:53:38.477040 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:53:38 
crc kubenswrapper[4985]: I0127 08:53:38.478464 4985 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739" exitCode=0 Jan 27 08:53:38 crc kubenswrapper[4985]: I0127 08:53:38.478540 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739"} Jan 27 08:53:38 crc kubenswrapper[4985]: I0127 08:53:38.478633 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 08:53:38 crc kubenswrapper[4985]: I0127 08:53:38.479568 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:53:38 crc kubenswrapper[4985]: I0127 08:53:38.479600 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:53:38 crc kubenswrapper[4985]: I0127 08:53:38.479610 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:53:38 crc kubenswrapper[4985]: I0127 08:53:38.481627 4985 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="d423f866a41d6c8a926fbabe4f0a69519e10a9448502c6363b35cb550ba904d3" exitCode=0 Jan 27 08:53:38 crc kubenswrapper[4985]: I0127 08:53:38.481683 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"d423f866a41d6c8a926fbabe4f0a69519e10a9448502c6363b35cb550ba904d3"} Jan 27 08:53:38 crc kubenswrapper[4985]: I0127 08:53:38.481846 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 08:53:38 crc kubenswrapper[4985]: I0127 
08:53:38.482417 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 08:53:38 crc kubenswrapper[4985]: I0127 08:53:38.482854 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:53:38 crc kubenswrapper[4985]: I0127 08:53:38.482884 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:53:38 crc kubenswrapper[4985]: I0127 08:53:38.482898 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:53:38 crc kubenswrapper[4985]: I0127 08:53:38.483488 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:53:38 crc kubenswrapper[4985]: I0127 08:53:38.483543 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:53:38 crc kubenswrapper[4985]: I0127 08:53:38.483555 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:53:39 crc kubenswrapper[4985]: W0127 08:53:39.048987 4985 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.192:6443: connect: connection refused Jan 27 08:53:39 crc kubenswrapper[4985]: E0127 08:53:39.049114 4985 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.192:6443: connect: connection refused" logger="UnhandledError" Jan 27 08:53:39 crc kubenswrapper[4985]: I0127 08:53:39.386193 4985 csi_plugin.go:884] Failed to contact API server when waiting 
for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.192:6443: connect: connection refused Jan 27 08:53:39 crc kubenswrapper[4985]: I0127 08:53:39.388281 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 17:45:16.686499998 +0000 UTC Jan 27 08:53:39 crc kubenswrapper[4985]: E0127 08:53:39.394952 4985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.192:6443: connect: connection refused" interval="3.2s" Jan 27 08:53:39 crc kubenswrapper[4985]: I0127 08:53:39.487630 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"44571b0625a52928ae16d3c4a6c38c5514df6b028f7d838d429be1480d5b0e4e"} Jan 27 08:53:39 crc kubenswrapper[4985]: I0127 08:53:39.487679 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 08:53:39 crc kubenswrapper[4985]: I0127 08:53:39.488867 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:53:39 crc kubenswrapper[4985]: I0127 08:53:39.488906 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:53:39 crc kubenswrapper[4985]: I0127 08:53:39.488921 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:53:39 crc kubenswrapper[4985]: I0127 08:53:39.490932 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b8ce06c7792357df5655ff0be13f13c8b06bb1a2eb27c4bbbff11b94a62f29e7"} Jan 27 08:53:39 crc kubenswrapper[4985]: I0127 08:53:39.490976 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 08:53:39 crc kubenswrapper[4985]: I0127 08:53:39.490987 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"7554f25de6e4649eafd52356c5c255d0a2dc73b0dd6c9a9bcea3ee383bef17db"} Jan 27 08:53:39 crc kubenswrapper[4985]: I0127 08:53:39.491809 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:53:39 crc kubenswrapper[4985]: I0127 08:53:39.491847 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:53:39 crc kubenswrapper[4985]: I0127 08:53:39.491860 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:53:39 crc kubenswrapper[4985]: I0127 08:53:39.493695 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 08:53:39 crc kubenswrapper[4985]: I0127 08:53:39.493686 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"87667099919fbe74e54396b3e8b538627769f2401d318326eb9a1d6a88bda640"} Jan 27 08:53:39 crc kubenswrapper[4985]: I0127 08:53:39.493781 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"88c257e53b27d6dda5999a3053f9c62b54331bf034225c118dddfed685549827"} Jan 27 08:53:39 crc 
kubenswrapper[4985]: I0127 08:53:39.493820 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"02a5ce9f15fd3505c744967be012b7eed6d909724e9b71ba07d7e9d68eb40cf8"} Jan 27 08:53:39 crc kubenswrapper[4985]: I0127 08:53:39.494489 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:53:39 crc kubenswrapper[4985]: I0127 08:53:39.494526 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:53:39 crc kubenswrapper[4985]: I0127 08:53:39.494540 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:53:39 crc kubenswrapper[4985]: I0127 08:53:39.498000 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ba179c4dde0de76f4a8dbfb8ceb98d2e0dbead5e955bd71171914fd913894412"} Jan 27 08:53:39 crc kubenswrapper[4985]: I0127 08:53:39.498030 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0dd58dd369dc5654aa8ab2d34c77a69607b51e97ae36947b88caf6a2de7c5fbe"} Jan 27 08:53:39 crc kubenswrapper[4985]: I0127 08:53:39.498045 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8f776820a00f7ca6b6867a188ee633debccb2bc22aaff8c659e452c2667933e6"} Jan 27 08:53:39 crc kubenswrapper[4985]: I0127 08:53:39.499948 4985 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" 
containerID="917f905b67f494d8d44f5a00e53ae72cb9c2b307f7e7da17af7fd20db4ed8704" exitCode=0 Jan 27 08:53:39 crc kubenswrapper[4985]: I0127 08:53:39.500009 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 08:53:39 crc kubenswrapper[4985]: I0127 08:53:39.500004 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"917f905b67f494d8d44f5a00e53ae72cb9c2b307f7e7da17af7fd20db4ed8704"} Jan 27 08:53:39 crc kubenswrapper[4985]: I0127 08:53:39.500744 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:53:39 crc kubenswrapper[4985]: I0127 08:53:39.500782 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:53:39 crc kubenswrapper[4985]: I0127 08:53:39.500795 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:53:39 crc kubenswrapper[4985]: I0127 08:53:39.636077 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 08:53:39 crc kubenswrapper[4985]: I0127 08:53:39.637428 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:53:39 crc kubenswrapper[4985]: I0127 08:53:39.637478 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:53:39 crc kubenswrapper[4985]: I0127 08:53:39.637493 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:53:39 crc kubenswrapper[4985]: I0127 08:53:39.637541 4985 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 08:53:39 crc kubenswrapper[4985]: E0127 08:53:39.638033 4985 kubelet_node_status.go:99] "Unable to register 
node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.192:6443: connect: connection refused" node="crc" Jan 27 08:53:40 crc kubenswrapper[4985]: W0127 08:53:40.072479 4985 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.192:6443: connect: connection refused Jan 27 08:53:40 crc kubenswrapper[4985]: E0127 08:53:40.072606 4985 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.192:6443: connect: connection refused" logger="UnhandledError" Jan 27 08:53:40 crc kubenswrapper[4985]: I0127 08:53:40.388697 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 06:24:08.780842313 +0000 UTC Jan 27 08:53:40 crc kubenswrapper[4985]: I0127 08:53:40.507753 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b491f71d942363b5d2bab13a5219b0eb9b7f7617c0978bfe012946aa241a13c1"} Jan 27 08:53:40 crc kubenswrapper[4985]: I0127 08:53:40.507841 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b09da7bae5a0f15ab2c5463373193603d1412170828b99490007cee7b067df63"} Jan 27 08:53:40 crc kubenswrapper[4985]: I0127 08:53:40.507900 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 08:53:40 crc kubenswrapper[4985]: I0127 08:53:40.509130 4985 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:53:40 crc kubenswrapper[4985]: I0127 08:53:40.509188 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:53:40 crc kubenswrapper[4985]: I0127 08:53:40.509201 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:53:40 crc kubenswrapper[4985]: I0127 08:53:40.511699 4985 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="8692ba1b6e917a57164d549dba0fc26ad49b14b9bc8a39772daddfcd5f70a008" exitCode=0 Jan 27 08:53:40 crc kubenswrapper[4985]: I0127 08:53:40.511810 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 08:53:40 crc kubenswrapper[4985]: I0127 08:53:40.511845 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 08:53:40 crc kubenswrapper[4985]: I0127 08:53:40.511830 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"8692ba1b6e917a57164d549dba0fc26ad49b14b9bc8a39772daddfcd5f70a008"} Jan 27 08:53:40 crc kubenswrapper[4985]: I0127 08:53:40.511949 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 08:53:40 crc kubenswrapper[4985]: I0127 08:53:40.511979 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 08:53:40 crc kubenswrapper[4985]: I0127 08:53:40.512002 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 08:53:40 crc kubenswrapper[4985]: I0127 08:53:40.513661 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 08:53:40 crc kubenswrapper[4985]: I0127 08:53:40.513699 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:53:40 crc kubenswrapper[4985]: I0127 08:53:40.513710 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:53:40 crc kubenswrapper[4985]: I0127 08:53:40.513703 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:53:40 crc kubenswrapper[4985]: I0127 08:53:40.513825 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:53:40 crc kubenswrapper[4985]: I0127 08:53:40.513843 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:53:40 crc kubenswrapper[4985]: I0127 08:53:40.513933 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:53:40 crc kubenswrapper[4985]: I0127 08:53:40.513968 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:53:40 crc kubenswrapper[4985]: I0127 08:53:40.513984 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:53:40 crc kubenswrapper[4985]: I0127 08:53:40.514919 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:53:40 crc kubenswrapper[4985]: I0127 08:53:40.514948 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:53:40 crc kubenswrapper[4985]: I0127 08:53:40.514957 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:53:40 crc kubenswrapper[4985]: I0127 08:53:40.670416 4985 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 08:53:41 crc kubenswrapper[4985]: I0127 08:53:41.389063 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 11:10:32.758695644 +0000 UTC Jan 27 08:53:41 crc kubenswrapper[4985]: I0127 08:53:41.518799 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b0450f271c01ac528419de0165dd9b88b8d5913062a7cd5affce7050842f91a8"} Jan 27 08:53:41 crc kubenswrapper[4985]: I0127 08:53:41.518875 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d7f7c69cc8bc134cfbf3552f10eb7abd8b5df115fef6ce86a2658259a9635bf9"} Jan 27 08:53:41 crc kubenswrapper[4985]: I0127 08:53:41.518898 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"c1dab42ffc04840ad2205b50c568ece39850188ccd9f97d8186c7f0b86e06805"} Jan 27 08:53:41 crc kubenswrapper[4985]: I0127 08:53:41.518903 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 08:53:41 crc kubenswrapper[4985]: I0127 08:53:41.518935 4985 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 08:53:41 crc kubenswrapper[4985]: I0127 08:53:41.518972 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 08:53:41 crc kubenswrapper[4985]: I0127 08:53:41.518912 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"c502ce0c2374fb1b79cd8dc7e7c9eba4a67f998fee012827524d775eccbe4de4"} Jan 27 08:53:41 crc kubenswrapper[4985]: I0127 08:53:41.520047 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:53:41 crc kubenswrapper[4985]: I0127 08:53:41.520095 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:53:41 crc kubenswrapper[4985]: I0127 08:53:41.520108 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:53:41 crc kubenswrapper[4985]: I0127 08:53:41.520048 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:53:41 crc kubenswrapper[4985]: I0127 08:53:41.520164 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:53:41 crc kubenswrapper[4985]: I0127 08:53:41.520180 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:53:41 crc kubenswrapper[4985]: I0127 08:53:41.750332 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 08:53:42 crc kubenswrapper[4985]: I0127 08:53:42.196613 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 08:53:42 crc kubenswrapper[4985]: I0127 08:53:42.196903 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 08:53:42 crc kubenswrapper[4985]: I0127 08:53:42.199482 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:53:42 crc kubenswrapper[4985]: I0127 08:53:42.200060 4985 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:53:42 crc kubenswrapper[4985]: I0127 08:53:42.200089 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:53:42 crc kubenswrapper[4985]: I0127 08:53:42.207331 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 08:53:42 crc kubenswrapper[4985]: I0127 08:53:42.389910 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 05:54:29.972816451 +0000 UTC Jan 27 08:53:42 crc kubenswrapper[4985]: I0127 08:53:42.532626 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b33bda4f575f8f14a3439f00a39eda2ee61f7b38500f5372c17d014df1098535"} Jan 27 08:53:42 crc kubenswrapper[4985]: I0127 08:53:42.532713 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 08:53:42 crc kubenswrapper[4985]: I0127 08:53:42.532713 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 08:53:42 crc kubenswrapper[4985]: I0127 08:53:42.532855 4985 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 08:53:42 crc kubenswrapper[4985]: I0127 08:53:42.532965 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 08:53:42 crc kubenswrapper[4985]: I0127 08:53:42.533778 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:53:42 crc kubenswrapper[4985]: I0127 08:53:42.533807 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 08:53:42 crc kubenswrapper[4985]: I0127 08:53:42.533816 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:53:42 crc kubenswrapper[4985]: I0127 08:53:42.533956 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:53:42 crc kubenswrapper[4985]: I0127 08:53:42.534013 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:53:42 crc kubenswrapper[4985]: I0127 08:53:42.534026 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:53:42 crc kubenswrapper[4985]: I0127 08:53:42.534596 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:53:42 crc kubenswrapper[4985]: I0127 08:53:42.534620 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:53:42 crc kubenswrapper[4985]: I0127 08:53:42.534631 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:53:42 crc kubenswrapper[4985]: I0127 08:53:42.823831 4985 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 27 08:53:42 crc kubenswrapper[4985]: I0127 08:53:42.838651 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 08:53:42 crc kubenswrapper[4985]: I0127 08:53:42.840442 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:53:42 crc kubenswrapper[4985]: I0127 08:53:42.840484 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:53:42 crc kubenswrapper[4985]: I0127 08:53:42.840496 4985 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:53:42 crc kubenswrapper[4985]: I0127 08:53:42.840545 4985 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 08:53:43 crc kubenswrapper[4985]: I0127 08:53:43.390814 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 04:23:59.572029146 +0000 UTC Jan 27 08:53:43 crc kubenswrapper[4985]: I0127 08:53:43.536329 4985 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 08:53:43 crc kubenswrapper[4985]: I0127 08:53:43.536399 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 08:53:43 crc kubenswrapper[4985]: I0127 08:53:43.536572 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 08:53:43 crc kubenswrapper[4985]: I0127 08:53:43.537852 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:53:43 crc kubenswrapper[4985]: I0127 08:53:43.537905 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:53:43 crc kubenswrapper[4985]: I0127 08:53:43.537919 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:53:43 crc kubenswrapper[4985]: I0127 08:53:43.538027 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:53:43 crc kubenswrapper[4985]: I0127 08:53:43.538077 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:53:43 crc kubenswrapper[4985]: I0127 08:53:43.538094 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 
08:53:44 crc kubenswrapper[4985]: I0127 08:53:44.391907 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 15:45:11.53201336 +0000 UTC Jan 27 08:53:44 crc kubenswrapper[4985]: I0127 08:53:44.415372 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 08:53:44 crc kubenswrapper[4985]: I0127 08:53:44.415681 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 08:53:44 crc kubenswrapper[4985]: I0127 08:53:44.417164 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:53:44 crc kubenswrapper[4985]: I0127 08:53:44.417273 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:53:44 crc kubenswrapper[4985]: I0127 08:53:44.417301 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:53:44 crc kubenswrapper[4985]: I0127 08:53:44.497466 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 27 08:53:44 crc kubenswrapper[4985]: I0127 08:53:44.539630 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 08:53:44 crc kubenswrapper[4985]: I0127 08:53:44.541185 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:53:44 crc kubenswrapper[4985]: I0127 08:53:44.541263 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:53:44 crc kubenswrapper[4985]: I0127 08:53:44.541284 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:53:45 crc 
kubenswrapper[4985]: I0127 08:53:45.392829 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 02:49:33.091617235 +0000 UTC Jan 27 08:53:45 crc kubenswrapper[4985]: I0127 08:53:45.567640 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 08:53:45 crc kubenswrapper[4985]: I0127 08:53:45.567924 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 08:53:45 crc kubenswrapper[4985]: I0127 08:53:45.569430 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:53:45 crc kubenswrapper[4985]: I0127 08:53:45.569486 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:53:45 crc kubenswrapper[4985]: I0127 08:53:45.569506 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:53:46 crc kubenswrapper[4985]: I0127 08:53:46.393974 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 22:32:13.225809891 +0000 UTC Jan 27 08:53:46 crc kubenswrapper[4985]: E0127 08:53:46.527743 4985 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 27 08:53:47 crc kubenswrapper[4985]: I0127 08:53:47.072458 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 08:53:47 crc kubenswrapper[4985]: I0127 08:53:47.072818 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 08:53:47 crc kubenswrapper[4985]: I0127 08:53:47.074669 4985 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:53:47 crc kubenswrapper[4985]: I0127 08:53:47.074740 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:53:47 crc kubenswrapper[4985]: I0127 08:53:47.074764 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:53:47 crc kubenswrapper[4985]: I0127 08:53:47.394422 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 06:17:38.253904954 +0000 UTC Jan 27 08:53:47 crc kubenswrapper[4985]: I0127 08:53:47.935627 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 27 08:53:47 crc kubenswrapper[4985]: I0127 08:53:47.936014 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 08:53:47 crc kubenswrapper[4985]: I0127 08:53:47.937690 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:53:47 crc kubenswrapper[4985]: I0127 08:53:47.937752 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:53:47 crc kubenswrapper[4985]: I0127 08:53:47.937779 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:53:48 crc kubenswrapper[4985]: I0127 08:53:48.395255 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 02:54:21.905560823 +0000 UTC Jan 27 08:53:48 crc kubenswrapper[4985]: I0127 08:53:48.567912 4985 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe 
status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 08:53:48 crc kubenswrapper[4985]: I0127 08:53:48.568027 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 08:53:49 crc kubenswrapper[4985]: I0127 08:53:49.014912 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 08:53:49 crc kubenswrapper[4985]: I0127 08:53:49.015401 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 08:53:49 crc kubenswrapper[4985]: I0127 08:53:49.016961 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:53:49 crc kubenswrapper[4985]: I0127 08:53:49.017102 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:53:49 crc kubenswrapper[4985]: I0127 08:53:49.017207 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:53:49 crc kubenswrapper[4985]: I0127 08:53:49.020827 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 08:53:49 crc kubenswrapper[4985]: I0127 08:53:49.395891 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 16:54:05.038998275 +0000 UTC Jan 27 08:53:49 crc kubenswrapper[4985]: I0127 08:53:49.585002 4985 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 08:53:49 crc kubenswrapper[4985]: I0127 08:53:49.586106 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:53:49 crc kubenswrapper[4985]: I0127 08:53:49.586143 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:53:49 crc kubenswrapper[4985]: I0127 08:53:49.586154 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:53:50 crc kubenswrapper[4985]: I0127 08:53:50.386358 4985 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 27 08:53:50 crc kubenswrapper[4985]: I0127 08:53:50.397531 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 10:01:20.270171399 +0000 UTC Jan 27 08:53:50 crc kubenswrapper[4985]: W0127 08:53:50.677844 4985 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 27 08:53:50 crc kubenswrapper[4985]: I0127 08:53:50.678007 4985 trace.go:236] Trace[1914905175]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Jan-2026 08:53:40.676) (total time: 10001ms): Jan 27 08:53:50 crc kubenswrapper[4985]: Trace[1914905175]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (08:53:50.677) Jan 27 08:53:50 crc kubenswrapper[4985]: Trace[1914905175]: [10.001480394s] [10.001480394s] END 
Jan 27 08:53:50 crc kubenswrapper[4985]: E0127 08:53:50.678044 4985 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 27 08:53:50 crc kubenswrapper[4985]: W0127 08:53:50.699583 4985 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 27 08:53:50 crc kubenswrapper[4985]: I0127 08:53:50.699721 4985 trace.go:236] Trace[836007999]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Jan-2026 08:53:40.692) (total time: 10007ms): Jan 27 08:53:50 crc kubenswrapper[4985]: Trace[836007999]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10007ms (08:53:50.699) Jan 27 08:53:50 crc kubenswrapper[4985]: Trace[836007999]: [10.007186511s] [10.007186511s] END Jan 27 08:53:50 crc kubenswrapper[4985]: E0127 08:53:50.699758 4985 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 27 08:53:50 crc kubenswrapper[4985]: I0127 08:53:50.842627 4985 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" 
start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 27 08:53:50 crc kubenswrapper[4985]: I0127 08:53:50.842713 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 27 08:53:50 crc kubenswrapper[4985]: I0127 08:53:50.848617 4985 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 27 08:53:50 crc kubenswrapper[4985]: I0127 08:53:50.848716 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 27 08:53:51 crc kubenswrapper[4985]: I0127 08:53:51.398274 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 06:11:54.290225929 +0000 UTC Jan 27 08:53:51 crc kubenswrapper[4985]: I0127 08:53:51.759259 4985 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 27 08:53:51 crc kubenswrapper[4985]: [+]log ok Jan 27 08:53:51 crc kubenswrapper[4985]: [+]etcd ok Jan 27 08:53:51 crc kubenswrapper[4985]: 
[+]poststarthook/openshift.io-oauth-apiserver-reachable ok Jan 27 08:53:51 crc kubenswrapper[4985]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 27 08:53:51 crc kubenswrapper[4985]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 27 08:53:51 crc kubenswrapper[4985]: [+]poststarthook/openshift.io-api-request-count-filter ok Jan 27 08:53:51 crc kubenswrapper[4985]: [+]poststarthook/openshift.io-startkubeinformers ok Jan 27 08:53:51 crc kubenswrapper[4985]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Jan 27 08:53:51 crc kubenswrapper[4985]: [+]poststarthook/generic-apiserver-start-informers ok Jan 27 08:53:51 crc kubenswrapper[4985]: [+]poststarthook/priority-and-fairness-config-consumer ok Jan 27 08:53:51 crc kubenswrapper[4985]: [+]poststarthook/priority-and-fairness-filter ok Jan 27 08:53:51 crc kubenswrapper[4985]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 27 08:53:51 crc kubenswrapper[4985]: [+]poststarthook/start-apiextensions-informers ok Jan 27 08:53:51 crc kubenswrapper[4985]: [+]poststarthook/start-apiextensions-controllers ok Jan 27 08:53:51 crc kubenswrapper[4985]: [+]poststarthook/crd-informer-synced ok Jan 27 08:53:51 crc kubenswrapper[4985]: [+]poststarthook/start-system-namespaces-controller ok Jan 27 08:53:51 crc kubenswrapper[4985]: [+]poststarthook/start-cluster-authentication-info-controller ok Jan 27 08:53:51 crc kubenswrapper[4985]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Jan 27 08:53:51 crc kubenswrapper[4985]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Jan 27 08:53:51 crc kubenswrapper[4985]: [+]poststarthook/start-legacy-token-tracking-controller ok Jan 27 08:53:51 crc kubenswrapper[4985]: [+]poststarthook/start-service-ip-repair-controllers ok Jan 27 08:53:51 crc kubenswrapper[4985]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Jan 27 08:53:51 crc kubenswrapper[4985]: 
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Jan 27 08:53:51 crc kubenswrapper[4985]: [+]poststarthook/priority-and-fairness-config-producer ok Jan 27 08:53:51 crc kubenswrapper[4985]: [+]poststarthook/bootstrap-controller ok Jan 27 08:53:51 crc kubenswrapper[4985]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Jan 27 08:53:51 crc kubenswrapper[4985]: [+]poststarthook/start-kube-aggregator-informers ok Jan 27 08:53:51 crc kubenswrapper[4985]: [+]poststarthook/apiservice-status-local-available-controller ok Jan 27 08:53:51 crc kubenswrapper[4985]: [+]poststarthook/apiservice-status-remote-available-controller ok Jan 27 08:53:51 crc kubenswrapper[4985]: [+]poststarthook/apiservice-registration-controller ok Jan 27 08:53:51 crc kubenswrapper[4985]: [+]poststarthook/apiservice-wait-for-first-sync ok Jan 27 08:53:51 crc kubenswrapper[4985]: [+]poststarthook/apiservice-discovery-controller ok Jan 27 08:53:51 crc kubenswrapper[4985]: [+]poststarthook/kube-apiserver-autoregistration ok Jan 27 08:53:51 crc kubenswrapper[4985]: [+]autoregister-completion ok Jan 27 08:53:51 crc kubenswrapper[4985]: [+]poststarthook/apiservice-openapi-controller ok Jan 27 08:53:51 crc kubenswrapper[4985]: [+]poststarthook/apiservice-openapiv3-controller ok Jan 27 08:53:51 crc kubenswrapper[4985]: livez check failed Jan 27 08:53:51 crc kubenswrapper[4985]: I0127 08:53:51.759375 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 08:53:52 crc kubenswrapper[4985]: I0127 08:53:52.398976 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 16:56:54.427680099 +0000 UTC Jan 27 08:53:53 crc kubenswrapper[4985]: I0127 08:53:53.399503 4985 
certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 02:51:09.051747699 +0000 UTC Jan 27 08:53:54 crc kubenswrapper[4985]: I0127 08:53:54.400380 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 15:06:26.677807377 +0000 UTC Jan 27 08:53:55 crc kubenswrapper[4985]: I0127 08:53:55.401552 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 08:32:59.596647875 +0000 UTC Jan 27 08:53:55 crc kubenswrapper[4985]: I0127 08:53:55.557205 4985 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 27 08:53:55 crc kubenswrapper[4985]: I0127 08:53:55.793533 4985 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 27 08:53:55 crc kubenswrapper[4985]: E0127 08:53:55.832645 4985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Jan 27 08:53:55 crc kubenswrapper[4985]: I0127 08:53:55.834858 4985 trace.go:236] Trace[1583161942]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Jan-2026 08:53:42.638) (total time: 13196ms): Jan 27 08:53:55 crc kubenswrapper[4985]: Trace[1583161942]: ---"Objects listed" error: 13196ms (08:53:55.834) Jan 27 08:53:55 crc kubenswrapper[4985]: Trace[1583161942]: [13.196356692s] [13.196356692s] END Jan 27 08:53:55 crc kubenswrapper[4985]: I0127 08:53:55.834904 4985 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 27 08:53:55 crc kubenswrapper[4985]: I0127 08:53:55.835138 4985 trace.go:236] 
Trace[1123946275]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Jan-2026 08:53:45.441) (total time: 10393ms): Jan 27 08:53:55 crc kubenswrapper[4985]: Trace[1123946275]: ---"Objects listed" error: 10393ms (08:53:55.835) Jan 27 08:53:55 crc kubenswrapper[4985]: Trace[1123946275]: [10.39392152s] [10.39392152s] END Jan 27 08:53:55 crc kubenswrapper[4985]: I0127 08:53:55.835154 4985 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 27 08:53:55 crc kubenswrapper[4985]: I0127 08:53:55.837415 4985 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 27 08:53:55 crc kubenswrapper[4985]: E0127 08:53:55.837498 4985 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 27 08:53:55 crc kubenswrapper[4985]: I0127 08:53:55.847889 4985 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 27 08:53:55 crc kubenswrapper[4985]: I0127 08:53:55.889120 4985 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": EOF" start-of-body= Jan 27 08:53:55 crc kubenswrapper[4985]: I0127 08:53:55.889205 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": EOF" Jan 27 08:53:55 crc kubenswrapper[4985]: I0127 08:53:55.905229 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 08:53:55 crc kubenswrapper[4985]: I0127 
08:53:55.909880 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.376402 4985 apiserver.go:52] "Watching apiserver" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.388157 4985 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.388581 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.389235 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.389374 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.389431 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.389474 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 08:53:56 crc kubenswrapper[4985]: E0127 08:53:56.389597 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 08:53:56 crc kubenswrapper[4985]: E0127 08:53:56.389645 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.390000 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.390008 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 08:53:56 crc kubenswrapper[4985]: E0127 08:53:56.390172 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.392097 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.392314 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.393054 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.393962 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.394045 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.394102 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.394188 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.394558 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.394740 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.401989 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline 
is 2025-11-08 03:30:50.362065968 +0000 UTC Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.425202 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.452006 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.481809 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.486216 4985 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.498525 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.509363 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.523691 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46f88416-c54a-4ab5-a4d3-07f71bae9c33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d59e5d6d903bfaaedef392fdd94df2da1e4d965ddeb6b25d64b225725b39dd0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c207bc405194647411332c492e34b0c95d3321cf626a57a4a750fbcea34f54ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8ce06c7792357df5655ff0be13f13c8b06bb1a2eb27c4bbbff11b94a62f29e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7554f25de6e4649eafd52356c5c255d0a2dc73b0dd6c9a9bcea3ee383bef17db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.542481 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.542796 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.542910 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.543004 
4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.543045 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.543157 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.543186 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.543212 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.543240 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.543266 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.543289 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.543275 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.543310 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.543308 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.543337 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.543366 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.543393 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 
08:53:56.543422 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.544617 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.544901 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.544928 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.545358 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.545074 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.545606 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.545663 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.545919 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 08:53:56 crc 
kubenswrapper[4985]: I0127 08:53:56.546078 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.546153 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.546300 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.546420 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.546475 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.546579 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.546683 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.546638 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.546594 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.546792 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.546818 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.546904 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.547001 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.547051 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.547058 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.547100 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.547261 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.547257 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.547265 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.547246 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.547277 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.547467 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.547563 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.547625 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 27 08:53:56 crc 
kubenswrapper[4985]: I0127 08:53:56.547661 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.547758 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.547812 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.547859 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.547902 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.547904 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.547935 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.547976 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.547980 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.548012 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.548045 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.548129 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.548362 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.548171 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.548523 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.548575 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.548620 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.548662 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.548697 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.548702 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.548756 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.548784 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.548820 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.548849 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: 
\"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.548872 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.548906 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.548964 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.548992 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.549019 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 27 08:53:56 crc kubenswrapper[4985]: 
I0127 08:53:56.549044 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.549069 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.549095 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.549117 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.549143 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.549192 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.549223 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.549356 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.549385 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.549415 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.549573 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 
08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.549620 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.550124 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.550230 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.550274 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.550410 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.550434 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.550457 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.550596 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.550628 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.550651 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.550840 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 27 08:53:56 crc kubenswrapper[4985]: 
I0127 08:53:56.550870 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.550897 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.549052 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.549963 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.550110 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.551863 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.552053 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.552118 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.550621 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.552444 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.552944 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.552950 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.550763 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.551047 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.551194 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.551505 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.551733 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.551837 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.553077 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.550658 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.553402 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.552226 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.554103 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.554163 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.554207 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.554267 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.554298 4985 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.554623 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.554737 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.554830 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.554980 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.555100 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.555165 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.555223 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.555274 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.555332 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.555338 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.555369 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.555387 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.555400 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.555672 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.556105 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.558919 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.559739 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.560194 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.560196 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.560293 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.560551 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.561119 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.561137 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.561298 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.561548 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.561644 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.561836 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.561840 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.562111 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.562579 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.562748 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.563372 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.563424 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.563976 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.564261 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.565220 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.565755 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.565875 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.566644 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.566861 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.568227 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.568071 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.569088 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.569128 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.569978 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.569971 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.570639 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.570966 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.570874 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.571255 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.571262 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.571561 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.571618 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.571707 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.571736 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.571758 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.571786 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.571817 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: 
\"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.571842 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.571933 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.571963 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.571989 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.572019 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 27 08:53:56 crc kubenswrapper[4985]: 
I0127 08:53:56.572041 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.572068 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.572093 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.572115 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.572137 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.572161 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod 
\"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.572185 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.572208 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.572217 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.572252 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.572277 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.572300 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.572322 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.572346 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.572368 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" 
(UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.572421 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.572445 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.572497 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.572578 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.572647 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 27 08:53:56 crc 
kubenswrapper[4985]: I0127 08:53:56.572681 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.572759 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.572840 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.572915 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.572993 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.573071 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: 
\"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.573129 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.573162 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.573202 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.573235 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.573268 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 
27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.573304 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.573337 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.573371 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.573409 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.573446 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.573553 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: 
\"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.573637 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.573687 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.573765 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.573842 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.573917 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 27 08:53:56 crc 
kubenswrapper[4985]: I0127 08:53:56.574001 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.574088 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.574128 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.574206 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.574306 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.574385 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.574428 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.574503 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.574755 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.574844 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.574922 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: 
\"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.574964 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.575040 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.575114 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.575153 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.575225 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.575301 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.575343 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.575469 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.575583 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.575661 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.575701 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.575770 4985 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.575827 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.575861 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.575929 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.575966 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.576024 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.576051 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.576105 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.576135 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.576189 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.576218 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: 
\"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.576267 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.576302 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.576352 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.576377 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.576403 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.576456 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.576481 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.576565 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.580313 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.580402 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.580434 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.580467 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.580496 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.580611 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.580666 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.580704 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.580749 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.580781 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.580820 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.580853 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.581281 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: 
\"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.581350 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.581470 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.581503 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.581748 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.581784 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.581816 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.576751 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.581843 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.582085 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). 
InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.582329 4985 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.582648 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.582655 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.577778 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.555955 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.578231 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.578249 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.578314 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.578568 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.578946 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.578981 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.579589 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.579641 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.579938 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.579794 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.580019 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.582769 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.580870 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.580942 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.582811 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.582900 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.580955 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.581041 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.581065 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.581285 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.581284 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.581540 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.581684 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: E0127 08:53:56.581824 4985 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 08:53:56 crc kubenswrapper[4985]: E0127 08:53:56.583135 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 08:53:57.083110435 +0000 UTC m=+21.374205276 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.583109 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.584603 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: E0127 08:53:56.584705 4985 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.585075 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.585179 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.585455 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.585537 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.585563 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.585579 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.577658 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.585870 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.585868 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.586079 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.586102 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.586177 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.586708 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.587051 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.587307 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.587862 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.587860 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.588080 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.588215 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.588348 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.588447 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.588845 4985 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.588859 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.588901 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: E0127 08:53:56.589146 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 08:53:57.089012505 +0000 UTC m=+21.380107526 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.589563 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.589612 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.589928 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.589970 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.590028 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.590126 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.590257 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.590711 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.590843 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.591127 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.589147 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.589437 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.591547 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.591572 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.591984 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.592014 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.592241 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.592329 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.592383 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.592397 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.592484 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.592501 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.592541 4985 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.592558 4985 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.592575 4985 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.592588 4985 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.592601 4985 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.592614 4985 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.592627 4985 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.592639 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.592652 4985 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.592663 4985 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.592676 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.592688 4985 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.592701 4985 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.592714 4985 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.592728 4985 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.592756 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.592770 4985 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.592782 4985 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.592794 4985 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.592806 4985 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.592818 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.592829 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.592841 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.592854 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.592867 4985 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.592879 4985 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.592892 4985 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.592905 4985 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.592916 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.592928 4985 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.592940 4985 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.592952 4985 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.592964 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath 
\"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.592977 4985 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.592990 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.593007 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.593022 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.593034 4985 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.593045 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.593058 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 27 
08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.593071 4985 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.593082 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.593094 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.593107 4985 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.593119 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.593131 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.593143 4985 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.593155 4985 reconciler_common.go:293] "Volume detached for 
volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.593168 4985 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.593181 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.593193 4985 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.593206 4985 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.593218 4985 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.593231 4985 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.593243 4985 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.593255 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.593267 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.593280 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.593292 4985 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.593305 4985 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.593321 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.593337 4985 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.593352 4985 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.593371 4985 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.593388 4985 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.593402 4985 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.593414 4985 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.593429 4985 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.593445 4985 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath 
\"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.593461 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.593476 4985 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.593492 4985 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.593506 4985 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.593611 4985 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.593629 4985 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.593641 4985 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.593666 4985 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" 
(UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.593679 4985 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.593926 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: E0127 08:53:56.595683 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 08:53:57.095659414 +0000 UTC m=+21.386754265 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.596501 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.596667 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.597640 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.597709 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.597766 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.598159 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.598780 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.598806 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.598864 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: E0127 08:53:56.600891 4985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 08:53:56 crc kubenswrapper[4985]: E0127 08:53:56.600929 4985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 08:53:56 crc kubenswrapper[4985]: E0127 08:53:56.600951 4985 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 08:53:56 crc kubenswrapper[4985]: E0127 08:53:56.601035 4985 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 08:53:57.101007374 +0000 UTC m=+21.392102415 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 08:53:56 crc kubenswrapper[4985]: E0127 08:53:56.601154 4985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 08:53:56 crc kubenswrapper[4985]: E0127 08:53:56.601184 4985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 08:53:56 crc kubenswrapper[4985]: E0127 08:53:56.601202 4985 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 08:53:56 crc kubenswrapper[4985]: E0127 08:53:56.601265 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 08:53:57.101240046 +0000 UTC m=+21.392334887 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.602133 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.602803 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.605347 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.607733 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.608154 4985 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.610147 4985 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b491f71d942363b5d2bab13a5219b0eb9b7f7617c0978bfe012946aa241a13c1" exitCode=255 Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.610985 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"b491f71d942363b5d2bab13a5219b0eb9b7f7617c0978bfe012946aa241a13c1"} Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.611213 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.612671 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.613383 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.614656 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.615566 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: E0127 08:53:56.617818 4985 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.623749 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.623801 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.624236 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.624258 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.624553 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.624639 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.624636 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.624691 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.624806 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.625615 4985 scope.go:117] "RemoveContainer" containerID="b491f71d942363b5d2bab13a5219b0eb9b7f7617c0978bfe012946aa241a13c1" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.625762 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.625823 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.625862 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.626076 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.626096 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.626328 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.626338 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.626347 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.626364 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.626948 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.627039 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.628763 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.630205 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.630216 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.630275 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.630567 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.630884 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.631414 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.631648 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.631715 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.633029 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.639283 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.643931 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.644681 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.652971 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.657478 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.665944 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.678999 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46f88416-c54a-4ab5-a4d3-07f71bae9c33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d59e5d6d903bfaaedef392fdd94df2da1e4d965ddeb6b25d64b225725b39dd0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c207bc405194647411332c492e34b0c95d3321cf626a57a4a750fbcea34f54ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8ce06c7792357df5655ff0be13f13c8b06bb1a2eb27c4bbbff11b94a62f29e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7554f25de6e4649eafd52356c5c255d0a2dc73b0dd6c9a9bcea3ee383bef17db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.691501 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.694139 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.694202 4985 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.694254 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.694266 4985 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.694284 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.694295 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.694305 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.694316 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.694326 4985 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.694362 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.694376 4985 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.694387 4985 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.694401 4985 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.694415 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.694428 4985 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.694441 4985 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.694455 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.694467 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.694467 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.694482 4985 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.694559 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.695203 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc 
kubenswrapper[4985]: I0127 08:53:56.695285 4985 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.695300 4985 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.695313 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.695324 4985 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.695336 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.695347 4985 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.695358 4985 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.695368 4985 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" 
(UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.695379 4985 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.695389 4985 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.695401 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.695420 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.695430 4985 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.695440 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.695452 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" 
DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.695461 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.695472 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.695484 4985 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.695494 4985 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.695504 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.695540 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.695551 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.695561 4985 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.695571 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.695581 4985 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.695592 4985 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.695605 4985 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.695619 4985 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.695639 4985 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.695655 4985 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.695672 4985 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.695685 4985 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.695699 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.695713 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.695727 4985 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.695738 4985 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.695749 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: 
\"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.695759 4985 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.695770 4985 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.695784 4985 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.695797 4985 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.695810 4985 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.695826 4985 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.695839 4985 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node 
\"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.695851 4985 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.695862 4985 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.695871 4985 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.695883 4985 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.695893 4985 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.695905 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.695916 4985 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.695925 4985 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.695935 4985 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.695949 4985 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.695958 4985 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.695968 4985 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.695980 4985 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.695993 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.696002 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.696014 4985 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.696025 4985 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.696035 4985 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.696045 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.696057 4985 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.696067 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.696076 4985 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" 
DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.696085 4985 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.696095 4985 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.696105 4985 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.696116 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.696129 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.696138 4985 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.696147 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc 
kubenswrapper[4985]: I0127 08:53:56.696157 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.696168 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.696178 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.696190 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.696200 4985 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.696210 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.696220 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.696230 4985 reconciler_common.go:293] "Volume detached 
for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.696239 4985 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.696250 4985 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.696261 4985 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.696271 4985 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.696281 4985 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.696291 4985 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.696301 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" 
(UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.696312 4985 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.696321 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.696332 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.696341 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.702040 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.706088 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.713134 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.713339 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.721041 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 08:53:56 crc kubenswrapper[4985]: W0127 08:53:56.725092 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-d35089ece2c9718a265859c11fc74c96175028d066237eebe8d10c2e132d9d27 WatchSource:0}: Error finding container d35089ece2c9718a265859c11fc74c96175028d066237eebe8d10c2e132d9d27: Status 404 returned error can't find the container with id d35089ece2c9718a265859c11fc74c96175028d066237eebe8d10c2e132d9d27 Jan 27 08:53:56 crc kubenswrapper[4985]: W0127 08:53:56.743156 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-6be2c5680376de2ec7e903179a08487e6d5ec477fdef64d00c766411a3350284 WatchSource:0}: Error finding container 6be2c5680376de2ec7e903179a08487e6d5ec477fdef64d00c766411a3350284: Status 
404 returned error can't find the container with id 6be2c5680376de2ec7e903179a08487e6d5ec477fdef64d00c766411a3350284 Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.760631 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.773427 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46f88416-c54a-4ab5-a4d3-07f71bae9c33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d59e5d6d903bfaaedef392fdd94df2da1e4d965ddeb6b25d64b225725b39dd0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:3
8Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c207bc405194647411332c492e34b0c95d3321cf626a57a4a750fbcea34f54ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8ce06c7792357df5655ff0be13f13c8b06bb1a2eb27c4bbbff11b94a62f29e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7554f25de6e4649eafd52356c5c255d0a2dc73b0dd6c9a9bcea3ee383bef17db\\
\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.789032 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.805831 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.819860 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.833316 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.843893 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.858682 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9155a9e-2bdf-4f6f-baa7-0d7c6baac37d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f776820a00f7ca6b6867a188ee633debccb2bc22aaff8c659e452c2667933e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\
\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba179c4dde0de76f4a8dbfb8ceb98d2e0dbead5e955bd71171914fd913894412\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dd58dd369dc5654aa8ab2d34c77a69607b51e97ae36947b88caf6a2de7c5fbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b491f71d942363b5d2bab13a5219b0eb9b7f7617c0978bfe012946aa241a13c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserve
r-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b491f71d942363b5d2bab13a5219b0eb9b7f7617c0978bfe012946aa241a13c1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"message\\\":\\\" 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 08:53:55.866201 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 08:53:55.866247 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 08:53:55.866252 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 08:53:55.866257 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0127 08:53:55.866955 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-231041040/tls.crt::/tmp/serving-cert-231041040/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769504019\\\\\\\\\\\\\\\" (2026-01-27 08:53:39 +0000 UTC to 2026-02-26 08:53:40 +0000 UTC (now=2026-01-27 08:53:55.866923826 +0000 UTC))\\\\\\\"\\\\nI0127 08:53:55.867136 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769504030\\\\\\\\\\\\\\\" [serving] 
validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769504030\\\\\\\\\\\\\\\" (2026-01-27 07:53:50 +0000 UTC to 2027-01-27 07:53:50 +0000 UTC (now=2026-01-27 08:53:55.867111136 +0000 UTC))\\\\\\\"\\\\nI0127 08:53:55.867161 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 08:53:55.867183 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0127 08:53:55.867215 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-231041040/tls.crt::/tmp/serving-cert-231041040/tls.key\\\\\\\"\\\\nI0127 08:53:55.867343 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0127 08:53:55.868835 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b09da7bae5a0f15ab2c5463373193603d1412170828b99490007cee7b067df63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 08:53:56 crc kubenswrapper[4985]: I0127 08:53:56.870489 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 08:53:57 crc kubenswrapper[4985]: I0127 08:53:57.099045 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 08:53:57 crc 
kubenswrapper[4985]: I0127 08:53:57.099168 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 08:53:57 crc kubenswrapper[4985]: I0127 08:53:57.099236 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 08:53:57 crc kubenswrapper[4985]: E0127 08:53:57.099369 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 08:53:58.099326615 +0000 UTC m=+22.390421496 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 08:53:57 crc kubenswrapper[4985]: E0127 08:53:57.099391 4985 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 08:53:57 crc kubenswrapper[4985]: E0127 08:53:57.099386 4985 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 08:53:57 crc kubenswrapper[4985]: E0127 08:53:57.099452 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 08:53:58.099439551 +0000 UTC m=+22.390534402 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 08:53:57 crc kubenswrapper[4985]: E0127 08:53:57.099546 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-27 08:53:58.099500684 +0000 UTC m=+22.390595525 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 08:53:57 crc kubenswrapper[4985]: I0127 08:53:57.200077 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 08:53:57 crc kubenswrapper[4985]: I0127 08:53:57.200127 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 08:53:57 crc kubenswrapper[4985]: E0127 08:53:57.200273 4985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 08:53:57 crc kubenswrapper[4985]: E0127 08:53:57.200290 4985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 08:53:57 crc kubenswrapper[4985]: E0127 08:53:57.200303 4985 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 08:53:57 crc kubenswrapper[4985]: E0127 08:53:57.200367 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 08:53:58.200351187 +0000 UTC m=+22.491446028 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 08:53:57 crc kubenswrapper[4985]: E0127 08:53:57.200463 4985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 08:53:57 crc kubenswrapper[4985]: E0127 08:53:57.200550 4985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 08:53:57 crc kubenswrapper[4985]: E0127 08:53:57.200576 4985 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 08:53:57 crc kubenswrapper[4985]: E0127 08:53:57.200681 4985 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 08:53:58.200648503 +0000 UTC m=+22.491743384 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 08:53:57 crc kubenswrapper[4985]: I0127 08:53:57.403148 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 04:10:42.202577018 +0000 UTC Jan 27 08:53:57 crc kubenswrapper[4985]: I0127 08:53:57.615274 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 27 08:53:57 crc kubenswrapper[4985]: I0127 08:53:57.617046 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"dd67389a909eaadfb147d17b98a2147bb96aea606ba994568389cb5be9ee6f93"} Jan 27 08:53:57 crc kubenswrapper[4985]: I0127 08:53:57.617372 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 08:53:57 crc kubenswrapper[4985]: I0127 08:53:57.619313 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"b46c56d522e6b6c9c18b7bd9fc104d6657644d0b41fdcad15ac9fb802ca5fd0e"} Jan 
27 08:53:57 crc kubenswrapper[4985]: I0127 08:53:57.619340 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"9506e391bd8c293440c55dc71a239b25d6d6b3431ad22c5ca699baa4d09c325b"} Jan 27 08:53:57 crc kubenswrapper[4985]: I0127 08:53:57.619350 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"6be2c5680376de2ec7e903179a08487e6d5ec477fdef64d00c766411a3350284"} Jan 27 08:53:57 crc kubenswrapper[4985]: I0127 08:53:57.620307 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"b9686594608a2afd128d77e731d24f6bf3bb47be9cc417319e167b4ca014b073"} Jan 27 08:53:57 crc kubenswrapper[4985]: I0127 08:53:57.621903 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"e7a4e626ecb5f5c1f869a0271e66541b76d4ab763322e2d626c4d86af64bccad"} Jan 27 08:53:57 crc kubenswrapper[4985]: I0127 08:53:57.621937 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"d35089ece2c9718a265859c11fc74c96175028d066237eebe8d10c2e132d9d27"} Jan 27 08:53:57 crc kubenswrapper[4985]: I0127 08:53:57.622330 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 08:53:57 crc kubenswrapper[4985]: I0127 08:53:57.637457 4985 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:53:57Z is after 2025-08-24T17:21:41Z" Jan 27 08:53:57 crc kubenswrapper[4985]: I0127 08:53:57.655286 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9155a9e-2bdf-4f6f-baa7-0d7c6baac37d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f776820a00f7ca6b6867a188ee633debccb2bc22aaff8c659e452c2667933e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba179c4dde0de76f4a8dbfb8ceb98d2e0dbead5e955bd71171914fd913894412\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"contain
erID\\\":\\\"cri-o://0dd58dd369dc5654aa8ab2d34c77a69607b51e97ae36947b88caf6a2de7c5fbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd67389a909eaadfb147d17b98a2147bb96aea606ba994568389cb5be9ee6f93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b491f71d942363b5d2bab13a5219b0eb9b7f7617c0978bfe012946aa241a13c1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"message\\\":\\\" 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 08:53:55.866201 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 08:53:55.866247 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 08:53:55.866252 1 
shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 08:53:55.866257 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0127 08:53:55.866955 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-231041040/tls.crt::/tmp/serving-cert-231041040/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769504019\\\\\\\\\\\\\\\" (2026-01-27 08:53:39 +0000 UTC to 2026-02-26 08:53:40 +0000 UTC (now=2026-01-27 08:53:55.866923826 +0000 UTC))\\\\\\\"\\\\nI0127 08:53:55.867136 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769504030\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769504030\\\\\\\\\\\\\\\" (2026-01-27 07:53:50 +0000 UTC to 2027-01-27 07:53:50 +0000 UTC (now=2026-01-27 08:53:55.867111136 +0000 UTC))\\\\\\\"\\\\nI0127 08:53:55.867161 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 08:53:55.867183 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0127 08:53:55.867215 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-231041040/tls.crt::/tmp/serving-cert-231041040/tls.key\\\\\\\"\\\\nI0127 08:53:55.867343 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0127 08:53:55.868835 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b09da7bae5a0f15ab2c5463373193603d1412170828b99490007cee7b067df63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:53:57Z is after 2025-08-24T17:21:41Z" Jan 27 08:53:57 crc kubenswrapper[4985]: I0127 08:53:57.670003 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:53:57Z is after 2025-08-24T17:21:41Z" Jan 27 08:53:57 crc kubenswrapper[4985]: I0127 08:53:57.683760 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:53:57Z is after 2025-08-24T17:21:41Z" Jan 27 08:53:57 crc kubenswrapper[4985]: I0127 08:53:57.697697 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:53:57Z is after 2025-08-24T17:21:41Z" Jan 27 08:53:57 crc kubenswrapper[4985]: I0127 08:53:57.723286 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready 
status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:53:57Z is after 2025-08-24T17:21:41Z" Jan 27 08:53:57 crc kubenswrapper[4985]: I0127 08:53:57.749949 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46f88416-c54a-4ab5-a4d3-07f71bae9c33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d59e5d6d903bfaaedef392fdd94df2da1e4d965ddeb6b25d64b225725b39dd0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c207bc405194647411332c492e34b0c95d3321cf626a57a4a750fbcea34f54ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8ce06c7792357df5655ff0be13f13c8b06bb1a2eb27c4bbbff11b94a62f29e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7554f25de6e4649eafd52356c5c255d0a2dc73b0dd6c9a9bcea3ee383bef17db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:53:57Z is after 2025-08-24T17:21:41Z" Jan 27 08:53:57 crc kubenswrapper[4985]: I0127 08:53:57.772982 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:53:57Z is after 2025-08-24T17:21:41Z" Jan 27 08:53:57 crc kubenswrapper[4985]: I0127 08:53:57.789590 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:53:57Z is after 2025-08-24T17:21:41Z" Jan 27 08:53:57 crc kubenswrapper[4985]: I0127 08:53:57.802973 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready 
status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:53:57Z is after 2025-08-24T17:21:41Z" Jan 27 08:53:57 crc kubenswrapper[4985]: I0127 08:53:57.819376 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46f88416-c54a-4ab5-a4d3-07f71bae9c33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d59e5d6d903bfaaedef392fdd94df2da1e4d965ddeb6b25d64b225725b39dd0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c207bc405194647411332c492e34b0c95d3321cf626a57a4a750fbcea34f54ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8ce06c7792357df5655ff0be13f13c8b06bb1a2eb27c4bbbff11b94a62f29e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7554f25de6e4649eafd52356c5c255d0a2dc73b0dd6c9a9bcea3ee383bef17db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:53:57Z is after 2025-08-24T17:21:41Z" Jan 27 08:53:57 crc kubenswrapper[4985]: I0127 08:53:57.832372 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a4e626ecb5f5c1f869a0271e66541b76d4ab763322e2d626c4d86af64bccad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T08:53:57Z is after 2025-08-24T17:21:41Z" Jan 27 08:53:57 crc kubenswrapper[4985]: I0127 08:53:57.846966 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b46c56d522e6b6c9c18b7bd9fc104d6657644d0b41fdcad15ac9fb802ca5fd0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://9506e391bd8c293440c55dc71a239b25d6d6b3431ad22c5ca699baa4d09c325b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:53:57Z is after 2025-08-24T17:21:41Z" Jan 27 08:53:57 crc kubenswrapper[4985]: I0127 08:53:57.866705 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:53:57Z is after 2025-08-24T17:21:41Z" Jan 27 08:53:57 crc kubenswrapper[4985]: I0127 08:53:57.885830 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9155a9e-2bdf-4f6f-baa7-0d7c6baac37d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f776820a00f7ca6b6867a188ee633debccb2bc22aaff8c659e452c2667933e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba179c4dde0de76f4a8dbfb8ceb98d2e0dbead5e955bd71171914fd913894412\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0dd58dd369dc5654aa8ab2d34c77a69607b51e97ae36947b88caf6a2de7c5fbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd67389a909eaadfb147d17b98a2147bb96aea606ba994568389cb5be9ee6f93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b491f71d942363b5d2bab13a5219b0eb9b7f7617c0978bfe012946aa241a13c1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"message\\\":\\\" 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 08:53:55.866201 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 08:53:55.866247 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 08:53:55.866252 1 shared_informer.go:313] Waiting for 
caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 08:53:55.866257 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0127 08:53:55.866955 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-231041040/tls.crt::/tmp/serving-cert-231041040/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769504019\\\\\\\\\\\\\\\" (2026-01-27 08:53:39 +0000 UTC to 2026-02-26 08:53:40 +0000 UTC (now=2026-01-27 08:53:55.866923826 +0000 UTC))\\\\\\\"\\\\nI0127 08:53:55.867136 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769504030\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769504030\\\\\\\\\\\\\\\" (2026-01-27 07:53:50 +0000 UTC to 2027-01-27 07:53:50 +0000 UTC (now=2026-01-27 08:53:55.867111136 +0000 UTC))\\\\\\\"\\\\nI0127 08:53:55.867161 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 08:53:55.867183 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0127 08:53:55.867215 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-231041040/tls.crt::/tmp/serving-cert-231041040/tls.key\\\\\\\"\\\\nI0127 08:53:55.867343 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0127 08:53:55.868835 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b09da7bae5a0f15ab2c5463373193603d1412170828b99490007cee7b067df63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:53:57Z is after 2025-08-24T17:21:41Z" Jan 27 08:53:57 crc kubenswrapper[4985]: I0127 08:53:57.901937 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:53:57Z is after 2025-08-24T17:21:41Z" Jan 27 08:53:57 crc kubenswrapper[4985]: I0127 08:53:57.962187 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 27 08:53:57 crc kubenswrapper[4985]: I0127 08:53:57.978675 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 27 08:53:57 crc kubenswrapper[4985]: I0127 08:53:57.980234 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 27 08:53:57 crc kubenswrapper[4985]: I0127 08:53:57.984441 4985 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9155a9e-2bdf-4f6f-baa7-0d7c6baac37d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f776820a00f7ca6b6867a188ee633debccb2bc22aaff8c659e452c2667933e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mo
untPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba179c4dde0de76f4a8dbfb8ceb98d2e0dbead5e955bd71171914fd913894412\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dd58dd369dc5654aa8ab2d34c77a69607b51e97ae36947b88caf6a2de7c5fbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd67389a909eaadfb147d17b98a2147bb96aea606ba994568389cb5be9ee6f93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f89
45c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b491f71d942363b5d2bab13a5219b0eb9b7f7617c0978bfe012946aa241a13c1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"message\\\":\\\" 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 08:53:55.866201 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 08:53:55.866247 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 08:53:55.866252 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 08:53:55.866257 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0127 08:53:55.866955 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-231041040/tls.crt::/tmp/serving-cert-231041040/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769504019\\\\\\\\\\\\\\\" (2026-01-27 08:53:39 +0000 UTC to 2026-02-26 08:53:40 +0000 UTC (now=2026-01-27 08:53:55.866923826 +0000 UTC))\\\\\\\"\\\\nI0127 08:53:55.867136 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769504030\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] 
issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769504030\\\\\\\\\\\\\\\" (2026-01-27 07:53:50 +0000 UTC to 2027-01-27 07:53:50 +0000 UTC (now=2026-01-27 08:53:55.867111136 +0000 UTC))\\\\\\\"\\\\nI0127 08:53:55.867161 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 08:53:55.867183 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0127 08:53:55.867215 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-231041040/tls.crt::/tmp/serving-cert-231041040/tls.key\\\\\\\"\\\\nI0127 08:53:55.867343 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0127 08:53:55.868835 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b09da7bae5a0f15ab2c5463373193603d1412170828b99490007cee7b067df63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.
168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:53:57Z is after 2025-08-24T17:21:41Z" Jan 27 08:53:57 crc kubenswrapper[4985]: I0127 08:53:57.999446 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:53:57Z is after 2025-08-24T17:21:41Z" Jan 27 08:53:58 crc kubenswrapper[4985]: I0127 08:53:58.012490 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:53:58Z is after 2025-08-24T17:21:41Z" Jan 27 08:53:58 crc kubenswrapper[4985]: I0127 08:53:58.045208 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46f88416-c54a-4ab5-a4d3-07f71bae9c33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d59e5d6d903bfaaedef392fdd94df2da1e4d965ddeb6b25d64b225725b39dd0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c207bc405194647411332c492e34b0c95d3321cf626a57a4a750fbcea34f54ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8ce06c7792357df5655ff0be13f13c8b06bb1a2eb27c4bbbff11b94a62f29e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7554f25de6e4649eafd52356c5c255d0a2dc73b0dd6c9a9bcea3ee383bef17db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:53:58Z is after 2025-08-24T17:21:41Z" Jan 27 08:53:58 crc kubenswrapper[4985]: I0127 08:53:58.080175 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a4e626ecb5f5c1f869a0271e66541b76d4ab763322e2d626c4d86af64bccad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T08:53:58Z is after 2025-08-24T17:21:41Z" Jan 27 08:53:58 crc kubenswrapper[4985]: I0127 08:53:58.109483 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 08:53:58 crc kubenswrapper[4985]: E0127 08:53:58.109693 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 08:54:00.109653937 +0000 UTC m=+24.400748778 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 08:53:58 crc kubenswrapper[4985]: I0127 08:53:58.109896 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 08:53:58 crc kubenswrapper[4985]: I0127 08:53:58.109929 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 08:53:58 crc kubenswrapper[4985]: E0127 08:53:58.110012 4985 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 08:53:58 crc kubenswrapper[4985]: E0127 08:53:58.110075 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 08:54:00.11005963 +0000 UTC m=+24.401154471 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 08:53:58 crc kubenswrapper[4985]: E0127 08:53:58.110125 4985 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 08:53:58 crc kubenswrapper[4985]: E0127 08:53:58.110245 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 08:54:00.110221288 +0000 UTC m=+24.401316139 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 08:53:58 crc kubenswrapper[4985]: I0127 08:53:58.110890 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b46c56d522e6b6c9c18b7bd9fc104d6657644d0b41fdcad15ac9fb802ca5fd0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"nam
e\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9506e391bd8c293440c55dc71a239b25d6d6b3431ad22c5ca699baa4d09c325b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:53:58Z is after 2025-08-24T17:21:41Z" Jan 27 08:53:58 crc kubenswrapper[4985]: I0127 08:53:58.128904 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:53:58Z is after 2025-08-24T17:21:41Z" Jan 27 08:53:58 crc kubenswrapper[4985]: I0127 08:53:58.144074 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:53:58Z is after 2025-08-24T17:21:41Z" Jan 27 08:53:58 crc kubenswrapper[4985]: I0127 08:53:58.161355 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:53:58Z is after 2025-08-24T17:21:41Z" Jan 27 08:53:58 crc kubenswrapper[4985]: I0127 08:53:58.190817 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7e08bcf-6937-4593-9736-170df821dd88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1dab42ffc04840ad2205b50c568ece39850188ccd9f97d8186c7f0b86e06805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7f7c69cc8bc134cfbf3552f10eb7abd8b5df115fef6ce86a2658259a9635bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0450f271c01ac528419de0165dd9b88b8d5913062a7cd5affce7050842f91a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b33bda4f575f8f14a3439f00a39eda2ee61f7b38500f5372c17d014df1098535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c502ce0c2374fb1b79cd8dc7e7c9eba4a67f998fee012827524d775eccbe4de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d423f866a41d6c8a926fbabe4f0a69519e10a9448502c6363b35cb550ba904d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d423f866a41d6c8a926fbabe4f0a69519e10a9448502c6363b35cb550ba904d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T08:53:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://917f905b67f494d8d44f5a00e53ae72cb9c2b307f7e7da17af7fd20db4ed8704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://917f905b67f494d8d44f5a00e53ae72cb9c2b307f7e7da17af7fd20db4ed8704\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8692ba1b6e917a57164d549dba0fc26ad49b14b9bc8a39772daddfcd5f70a008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8692ba1b6e917a57164d549dba0fc26ad49b14b9bc8a39772daddfcd5f70a008\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:53:58Z is after 2025-08-24T17:21:41Z" Jan 27 08:53:58 crc kubenswrapper[4985]: I0127 08:53:58.204840 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9155a9e-2bdf-4f6f-baa7-0d7c6baac37d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f776820a00f7ca6b6867a188ee633debccb2bc22aaff8c659e452c2667933e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba179c4dde0de76f4a8dbfb8ceb98d2e0dbead5e955bd71171914fd913894412\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0dd58dd369dc5654aa8ab2d34c77a69607b51e97ae36947b88caf6a2de7c5fbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd67389a909eaadfb147d17b98a2147bb96aea606ba994568389cb5be9ee6f93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b491f71d942363b5d2bab13a5219b0eb9b7f7617c0978bfe012946aa241a13c1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"message\\\":\\\" 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 08:53:55.866201 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 08:53:55.866247 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 08:53:55.866252 1 shared_informer.go:313] Waiting for 
caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 08:53:55.866257 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0127 08:53:55.866955 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-231041040/tls.crt::/tmp/serving-cert-231041040/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769504019\\\\\\\\\\\\\\\" (2026-01-27 08:53:39 +0000 UTC to 2026-02-26 08:53:40 +0000 UTC (now=2026-01-27 08:53:55.866923826 +0000 UTC))\\\\\\\"\\\\nI0127 08:53:55.867136 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769504030\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769504030\\\\\\\\\\\\\\\" (2026-01-27 07:53:50 +0000 UTC to 2027-01-27 07:53:50 +0000 UTC (now=2026-01-27 08:53:55.867111136 +0000 UTC))\\\\\\\"\\\\nI0127 08:53:55.867161 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 08:53:55.867183 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0127 08:53:55.867215 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-231041040/tls.crt::/tmp/serving-cert-231041040/tls.key\\\\\\\"\\\\nI0127 08:53:55.867343 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0127 08:53:55.868835 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b09da7bae5a0f15ab2c5463373193603d1412170828b99490007cee7b067df63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:53:58Z is after 2025-08-24T17:21:41Z" Jan 27 08:53:58 crc kubenswrapper[4985]: I0127 08:53:58.210183 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 08:53:58 crc kubenswrapper[4985]: I0127 08:53:58.210213 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 08:53:58 crc kubenswrapper[4985]: E0127 08:53:58.210345 4985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 08:53:58 crc kubenswrapper[4985]: E0127 08:53:58.210361 4985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 08:53:58 crc kubenswrapper[4985]: E0127 08:53:58.210374 4985 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 08:53:58 crc kubenswrapper[4985]: E0127 08:53:58.210414 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 08:54:00.210402224 +0000 UTC m=+24.501497065 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 08:53:58 crc kubenswrapper[4985]: E0127 08:53:58.210422 4985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 08:53:58 crc kubenswrapper[4985]: E0127 08:53:58.210456 4985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 08:53:58 crc kubenswrapper[4985]: E0127 08:53:58.210469 4985 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 08:53:58 crc kubenswrapper[4985]: E0127 08:53:58.210542 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 08:54:00.21052162 +0000 UTC m=+24.501616461 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 08:53:58 crc kubenswrapper[4985]: I0127 08:53:58.222986 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b46c56d522e6b6c9c18b7bd9fc104d6657644d0b41fdcad15ac9fb802ca5fd0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9506e391bd8c293440c55dc71a239b25d6d6b3431ad22c5ca699baa4d09c325b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:53:58Z is after 2025-08-24T17:21:41Z" Jan 27 08:53:58 crc kubenswrapper[4985]: I0127 08:53:58.236600 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:53:58Z is after 2025-08-24T17:21:41Z" Jan 27 08:53:58 crc kubenswrapper[4985]: I0127 08:53:58.248974 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:53:58Z is after 2025-08-24T17:21:41Z" Jan 27 08:53:58 crc kubenswrapper[4985]: I0127 08:53:58.262629 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready 
status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:53:58Z is after 2025-08-24T17:21:41Z" Jan 27 08:53:58 crc kubenswrapper[4985]: I0127 08:53:58.276530 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46f88416-c54a-4ab5-a4d3-07f71bae9c33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d59e5d6d903bfaaedef392fdd94df2da1e4d965ddeb6b25d64b225725b39dd0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c207bc405194647411332c492e34b0c95d3321cf626a57a4a750fbcea34f54ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8ce06c7792357df5655ff0be13f13c8b06bb1a2eb27c4bbbff11b94a62f29e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7554f25de6e4649eafd52356c5c255d0a2dc73b0dd6c9a9bcea3ee383bef17db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:53:58Z is after 2025-08-24T17:21:41Z" Jan 27 08:53:58 crc kubenswrapper[4985]: I0127 08:53:58.290977 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a4e626ecb5f5c1f869a0271e66541b76d4ab763322e2d626c4d86af64bccad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T08:53:58Z is after 2025-08-24T17:21:41Z" Jan 27 08:53:58 crc kubenswrapper[4985]: I0127 08:53:58.404035 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 18:13:25.685228362 +0000 UTC Jan 27 08:53:58 crc kubenswrapper[4985]: I0127 08:53:58.451817 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 08:53:58 crc kubenswrapper[4985]: I0127 08:53:58.451867 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 08:53:58 crc kubenswrapper[4985]: I0127 08:53:58.451874 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 08:53:58 crc kubenswrapper[4985]: E0127 08:53:58.451981 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 08:53:58 crc kubenswrapper[4985]: E0127 08:53:58.452126 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 08:53:58 crc kubenswrapper[4985]: E0127 08:53:58.452385 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 08:53:58 crc kubenswrapper[4985]: I0127 08:53:58.459139 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 27 08:53:58 crc kubenswrapper[4985]: I0127 08:53:58.459899 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 27 08:53:58 crc kubenswrapper[4985]: I0127 08:53:58.461497 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 27 08:53:58 crc kubenswrapper[4985]: I0127 08:53:58.462328 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 27 08:53:58 crc kubenswrapper[4985]: I0127 08:53:58.463758 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 27 08:53:58 crc kubenswrapper[4985]: I0127 08:53:58.464560 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 27 08:53:58 crc kubenswrapper[4985]: I0127 08:53:58.465298 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 27 08:53:58 crc kubenswrapper[4985]: I0127 08:53:58.466534 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 27 08:53:58 crc kubenswrapper[4985]: I0127 08:53:58.467381 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 27 08:53:58 crc kubenswrapper[4985]: I0127 08:53:58.468743 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 27 08:53:58 crc kubenswrapper[4985]: I0127 08:53:58.469436 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 27 08:53:58 crc kubenswrapper[4985]: I0127 08:53:58.471083 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 27 08:53:58 crc kubenswrapper[4985]: I0127 08:53:58.471778 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 27 08:53:58 crc kubenswrapper[4985]: I0127 08:53:58.472437 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 27 08:53:58 crc kubenswrapper[4985]: I0127 08:53:58.473676 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 27 08:53:58 crc kubenswrapper[4985]: I0127 08:53:58.474377 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 27 08:53:58 crc kubenswrapper[4985]: I0127 08:53:58.475793 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 27 08:53:58 crc kubenswrapper[4985]: I0127 08:53:58.476280 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 27 08:53:58 crc kubenswrapper[4985]: I0127 08:53:58.477037 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 27 08:53:58 crc kubenswrapper[4985]: I0127 08:53:58.478418 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 27 08:53:58 crc kubenswrapper[4985]: I0127 08:53:58.479159 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 27 08:53:58 crc kubenswrapper[4985]: I0127 08:53:58.480487 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 27 08:53:58 crc kubenswrapper[4985]: I0127 08:53:58.481105 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 27 08:53:58 crc kubenswrapper[4985]: I0127 08:53:58.482582 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 27 08:53:58 crc kubenswrapper[4985]: I0127 08:53:58.483197 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 27 08:53:58 crc kubenswrapper[4985]: I0127 08:53:58.484046 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 27 08:53:58 crc kubenswrapper[4985]: I0127 08:53:58.485531 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 27 08:53:58 crc kubenswrapper[4985]: I0127 08:53:58.486176 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 27 08:53:58 crc kubenswrapper[4985]: I0127 08:53:58.487490 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 27 08:53:58 crc kubenswrapper[4985]: I0127 08:53:58.488198 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 27 08:53:58 crc kubenswrapper[4985]: I0127 08:53:58.489500 4985 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 27 08:53:58 crc kubenswrapper[4985]: I0127 08:53:58.489798 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 27 08:53:58 crc kubenswrapper[4985]: I0127 08:53:58.491942 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 27 08:53:58 crc kubenswrapper[4985]: I0127 08:53:58.493153 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 27 08:53:58 crc kubenswrapper[4985]: I0127 08:53:58.493972 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 27 08:53:58 crc kubenswrapper[4985]: I0127 08:53:58.496110 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 27 08:53:58 crc kubenswrapper[4985]: I0127 08:53:58.496989 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 27 08:53:58 crc kubenswrapper[4985]: I0127 08:53:58.498178 4985 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 27 08:53:58 crc kubenswrapper[4985]: I0127 08:53:58.499769 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 27 08:53:58 crc kubenswrapper[4985]: I0127 08:53:58.501200 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 27 08:53:58 crc kubenswrapper[4985]: I0127 08:53:58.501947 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 27 08:53:58 crc kubenswrapper[4985]: I0127 08:53:58.503229 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 27 08:53:58 crc kubenswrapper[4985]: I0127 08:53:58.504142 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 27 08:53:58 crc kubenswrapper[4985]: I0127 08:53:58.505437 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 27 08:53:58 crc kubenswrapper[4985]: I0127 08:53:58.506150 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 27 08:53:58 crc kubenswrapper[4985]: I0127 08:53:58.507342 4985 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 27 08:53:58 crc kubenswrapper[4985]: I0127 08:53:58.508051 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 27 08:53:58 crc kubenswrapper[4985]: I0127 08:53:58.509501 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 27 08:53:58 crc kubenswrapper[4985]: I0127 08:53:58.510179 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 27 08:53:58 crc kubenswrapper[4985]: I0127 08:53:58.511275 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 27 08:53:58 crc kubenswrapper[4985]: I0127 08:53:58.511949 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 27 08:53:58 crc kubenswrapper[4985]: I0127 08:53:58.513254 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 27 08:53:58 crc kubenswrapper[4985]: I0127 08:53:58.514136 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 27 08:53:58 crc kubenswrapper[4985]: I0127 08:53:58.514783 4985 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 27 08:53:59 crc kubenswrapper[4985]: I0127 08:53:59.404911 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 05:15:59.897414137 +0000 UTC Jan 27 08:53:59 crc kubenswrapper[4985]: I0127 08:53:59.637092 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"ed8d2575835b3ab3d3197bbfbc027e2de0e2aefe10c971838bf4b72f804c4582"} Jan 27 08:53:59 crc kubenswrapper[4985]: I0127 08:53:59.664559 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7e08bcf-6937-4593-9736-170df821dd88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1dab42ffc04840ad2205b50c568ece39850188ccd9f97d8186c7f0b86e06805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7
c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7f7c69cc8bc134cfbf3552f10eb7abd8b5df115fef6ce86a2658259a9635bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0450f271c01ac528419de0165dd9b88b8d5913062a7cd5affce7050842f91a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e7
79036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b33bda4f575f8f14a3439f00a39eda2ee61f7b38500f5372c17d014df1098535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c502ce0c2374fb1b79cd8dc7e7c9eba4a67f998fee012827524d775eccbe4de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\
"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d423f866a41d6c8a926fbabe4f0a69519e10a9448502c6363b35cb550ba904d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d423f866a41d6c8a926fbabe4f0a69519e10a9448502c6363b35cb550ba904d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://917f905b67f494d8d44f5a00e53ae72cb9c2b307f7e7da17af7fd20db4ed8704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://917f905b67f494d8d44f5a00e53ae72cb9c2b307f7e7da17af7fd20db4ed8704\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8692ba1b6e917a57164d54
9dba0fc26ad49b14b9bc8a39772daddfcd5f70a008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8692ba1b6e917a57164d549dba0fc26ad49b14b9bc8a39772daddfcd5f70a008\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:53:59Z is after 2025-08-24T17:21:41Z" Jan 27 08:53:59 crc kubenswrapper[4985]: I0127 08:53:59.685358 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9155a9e-2bdf-4f6f-baa7-0d7c6baac37d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f776820a00f7ca6b6867a188ee633debccb2bc22aaff8c659e452c2667933e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba179c4dde0de76f4a8dbfb8ceb98d2e0dbead5e955bd71171914fd913894412\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dd58dd369dc5654aa8ab2d34c77a69607b51e97ae36947b88caf6a2de7c5fbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd67389a909eaadfb147d17b98a2147bb96aea606ba994568389cb5be9ee6f93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b491f71d942363b5d2bab13a5219b0eb9b7f7617c0978bfe012946aa241a13c1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"message\\\":\\\" 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 08:53:55.866201 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 08:53:55.866247 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 08:53:55.866252 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 08:53:55.866257 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0127 08:53:55.866955 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-231041040/tls.crt::/tmp/serving-cert-231041040/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769504019\\\\\\\\\\\\\\\" (2026-01-27 08:53:39 +0000 UTC to 2026-02-26 08:53:40 +0000 UTC (now=2026-01-27 08:53:55.866923826 +0000 UTC))\\\\\\\"\\\\nI0127 08:53:55.867136 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769504030\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769504030\\\\\\\\\\\\\\\" (2026-01-27 07:53:50 +0000 UTC to 2027-01-27 07:53:50 +0000 UTC 
(now=2026-01-27 08:53:55.867111136 +0000 UTC))\\\\\\\"\\\\nI0127 08:53:55.867161 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 08:53:55.867183 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0127 08:53:55.867215 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-231041040/tls.crt::/tmp/serving-cert-231041040/tls.key\\\\\\\"\\\\nI0127 08:53:55.867343 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0127 08:53:55.868835 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b09da7bae5a0f15ab2c5463373193603d1412170828b99490007cee7b067df63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae8837
39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:53:59Z is after 2025-08-24T17:21:41Z" Jan 27 08:53:59 crc kubenswrapper[4985]: I0127 08:53:59.705202 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:53:59Z is after 2025-08-24T17:21:41Z" Jan 27 08:53:59 crc kubenswrapper[4985]: I0127 08:53:59.722744 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46f88416-c54a-4ab5-a4d3-07f71bae9c33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d59e5d6d903bfaaedef392fdd94df2da1e4d965ddeb6b25d64b225725b39dd0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c207bc405194647411332c492e34b0c95d3321cf626a57a4a750fbcea34f54ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8ce06c7792357df5655ff0be13f13c8b06bb1a2eb27c4bbbff11b94a62f29e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7554f25de6e4649eafd52356c5c255d0a2dc73b0dd6c9a9bcea3ee383bef17db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:53:59Z is after 2025-08-24T17:21:41Z" Jan 27 08:53:59 crc kubenswrapper[4985]: I0127 08:53:59.740993 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a4e626ecb5f5c1f869a0271e66541b76d4ab763322e2d626c4d86af64bccad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T08:53:59Z is after 2025-08-24T17:21:41Z" Jan 27 08:53:59 crc kubenswrapper[4985]: I0127 08:53:59.757350 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b46c56d522e6b6c9c18b7bd9fc104d6657644d0b41fdcad15ac9fb802ca5fd0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://9506e391bd8c293440c55dc71a239b25d6d6b3431ad22c5ca699baa4d09c325b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:53:59Z is after 2025-08-24T17:21:41Z" Jan 27 08:53:59 crc kubenswrapper[4985]: I0127 08:53:59.774170 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:53:59Z is after 2025-08-24T17:21:41Z" Jan 27 08:53:59 crc kubenswrapper[4985]: I0127 08:53:59.818896 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:53:59Z is after 2025-08-24T17:21:41Z" Jan 27 08:53:59 crc kubenswrapper[4985]: I0127 08:53:59.845905 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed8d2575835b3ab3d3197bbfbc027e2de0e2aefe10c971838bf4b72f804c4582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T08:53:59Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:00 crc kubenswrapper[4985]: I0127 08:54:00.130162 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 08:54:00 crc kubenswrapper[4985]: I0127 08:54:00.130293 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 08:54:00 crc kubenswrapper[4985]: I0127 08:54:00.130330 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 08:54:00 crc kubenswrapper[4985]: E0127 08:54:00.130425 4985 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 08:54:00 crc kubenswrapper[4985]: E0127 08:54:00.130432 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-27 08:54:04.130394597 +0000 UTC m=+28.421489438 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 08:54:00 crc kubenswrapper[4985]: E0127 08:54:00.130503 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 08:54:04.130482772 +0000 UTC m=+28.421577673 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 08:54:00 crc kubenswrapper[4985]: E0127 08:54:00.130633 4985 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 08:54:00 crc kubenswrapper[4985]: E0127 08:54:00.130827 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 08:54:04.130796949 +0000 UTC m=+28.421891850 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 08:54:00 crc kubenswrapper[4985]: I0127 08:54:00.231356 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 08:54:00 crc kubenswrapper[4985]: I0127 08:54:00.231400 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 08:54:00 crc kubenswrapper[4985]: E0127 08:54:00.231581 4985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 08:54:00 crc kubenswrapper[4985]: E0127 08:54:00.231600 4985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 08:54:00 crc kubenswrapper[4985]: E0127 08:54:00.231611 4985 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, 
object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 08:54:00 crc kubenswrapper[4985]: E0127 08:54:00.231645 4985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 08:54:00 crc kubenswrapper[4985]: E0127 08:54:00.231706 4985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 08:54:00 crc kubenswrapper[4985]: E0127 08:54:00.231723 4985 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 08:54:00 crc kubenswrapper[4985]: E0127 08:54:00.231670 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 08:54:04.231652672 +0000 UTC m=+28.522747513 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 08:54:00 crc kubenswrapper[4985]: E0127 08:54:00.231826 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 08:54:04.231790599 +0000 UTC m=+28.522885470 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 08:54:00 crc kubenswrapper[4985]: I0127 08:54:00.405955 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 00:22:02.748124044 +0000 UTC Jan 27 08:54:00 crc kubenswrapper[4985]: I0127 08:54:00.451909 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 08:54:00 crc kubenswrapper[4985]: I0127 08:54:00.452009 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 08:54:00 crc kubenswrapper[4985]: E0127 08:54:00.452117 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 08:54:00 crc kubenswrapper[4985]: I0127 08:54:00.452247 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 08:54:00 crc kubenswrapper[4985]: E0127 08:54:00.452452 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 08:54:00 crc kubenswrapper[4985]: E0127 08:54:00.452683 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 08:54:01 crc kubenswrapper[4985]: I0127 08:54:01.406959 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 20:41:37.250567661 +0000 UTC Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.237747 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.240121 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.240169 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.240183 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.240264 4985 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.251847 4985 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.252231 4985 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.253752 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.253808 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.253829 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:02 crc 
kubenswrapper[4985]: I0127 08:54:02.253853 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.253868 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:02Z","lastTransitionTime":"2026-01-27T08:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:02 crc kubenswrapper[4985]: E0127 08:54:02.279305 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"message\\\":\\\"kubelet has 
sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff54
7002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\
"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cd
fc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"nam
es\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"095ded87-0bbb-47a5-b76f-f5bb300a00ab\\\",\\\"systemUUID\\\":\\\"66a0621c-9cbd-4c42-8f6a-941d6ebd53fb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:02Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.287965 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.288009 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.288022 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.288042 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.288054 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:02Z","lastTransitionTime":"2026-01-27T08:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:02 crc kubenswrapper[4985]: E0127 08:54:02.301629 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"095ded87-0bbb-47a5-b76f-f5bb300a00ab\\\",\\\"systemUUID\\\":\\\"66a0621c-9cbd-4c42-8f6a-941d6ebd53fb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:02Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.310234 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.310305 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.310323 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.310350 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.310367 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:02Z","lastTransitionTime":"2026-01-27T08:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:02 crc kubenswrapper[4985]: E0127 08:54:02.341081 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"095ded87-0bbb-47a5-b76f-f5bb300a00ab\\\",\\\"systemUUID\\\":\\\"66a0621c-9cbd-4c42-8f6a-941d6ebd53fb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:02Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.346289 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.346336 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.346349 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.346369 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.346380 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:02Z","lastTransitionTime":"2026-01-27T08:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:02 crc kubenswrapper[4985]: E0127 08:54:02.382702 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"095ded87-0bbb-47a5-b76f-f5bb300a00ab\\\",\\\"systemUUID\\\":\\\"66a0621c-9cbd-4c42-8f6a-941d6ebd53fb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:02Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.383121 4985 csr.go:261] certificate signing request csr-dzst5 is approved, waiting to be issued Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.390385 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.390423 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.390435 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.390454 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.390468 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:02Z","lastTransitionTime":"2026-01-27T08:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.403056 4985 csr.go:257] certificate signing request csr-dzst5 is issued Jan 27 08:54:02 crc kubenswrapper[4985]: E0127 08:54:02.406673 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"095ded87-0bbb-47a5-b76f-f5bb300a00ab\\\",\\\"systemUUID\\\":\\\"66a0621c-9cbd-4c42-8f6a-941d6ebd53fb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:02Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:02 crc kubenswrapper[4985]: E0127 08:54:02.406864 4985 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.408444 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 06:04:01.198837733 +0000 UTC Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.408957 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.408976 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.408984 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.409002 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.409012 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:02Z","lastTransitionTime":"2026-01-27T08:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.452198 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 08:54:02 crc kubenswrapper[4985]: E0127 08:54:02.452333 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.452556 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.452564 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 08:54:02 crc kubenswrapper[4985]: E0127 08:54:02.452729 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 08:54:02 crc kubenswrapper[4985]: E0127 08:54:02.452746 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.455766 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-5z8px"] Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.456127 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-5z8px" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.459697 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.464089 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.466439 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.486709 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7e08bcf-6937-4593-9736-170df821dd88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1dab42ffc04840ad2205b50c568ece39850188ccd9f97d8186c7f0b86e06805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7f7c69cc8bc134cfbf3552f10eb7abd8b5df115fef6ce86a2658259a9635bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0450f271c01ac528419de0165dd9b88b8d5913062a7cd5affce7050842f91a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b33bda4f575f8f14a3439f00a39eda2ee61f7b38500f5372c17d014df1098535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c502ce0c2374fb1b79cd8dc7e7c9eba4a67f998fee012827524d775eccbe4de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d423f866a41d6c8a926fbabe4f0a69519e10a9448502c6363b35cb550ba904d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d423f866a41d6c8a926fbabe4f0a69519e10a9448502c6363b35cb550ba904d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T08:53:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://917f905b67f494d8d44f5a00e53ae72cb9c2b307f7e7da17af7fd20db4ed8704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://917f905b67f494d8d44f5a00e53ae72cb9c2b307f7e7da17af7fd20db4ed8704\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8692ba1b6e917a57164d549dba0fc26ad49b14b9bc8a39772daddfcd5f70a008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8692ba1b6e917a57164d549dba0fc26ad49b14b9bc8a39772daddfcd5f70a008\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:02Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.503526 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9155a9e-2bdf-4f6f-baa7-0d7c6baac37d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f776820a00f7ca6b6867a188ee633debccb2bc22aaff8c659e452c2667933e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba179c4dde0de76f4a8dbfb8ceb98d2e0dbead5e955bd71171914fd913894412\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0dd58dd369dc5654aa8ab2d34c77a69607b51e97ae36947b88caf6a2de7c5fbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd67389a909eaadfb147d17b98a2147bb96aea606ba994568389cb5be9ee6f93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b491f71d942363b5d2bab13a5219b0eb9b7f7617c0978bfe012946aa241a13c1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"message\\\":\\\" 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 08:53:55.866201 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 08:53:55.866247 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 08:53:55.866252 1 shared_informer.go:313] Waiting for 
caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 08:53:55.866257 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0127 08:53:55.866955 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-231041040/tls.crt::/tmp/serving-cert-231041040/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769504019\\\\\\\\\\\\\\\" (2026-01-27 08:53:39 +0000 UTC to 2026-02-26 08:53:40 +0000 UTC (now=2026-01-27 08:53:55.866923826 +0000 UTC))\\\\\\\"\\\\nI0127 08:53:55.867136 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769504030\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769504030\\\\\\\\\\\\\\\" (2026-01-27 07:53:50 +0000 UTC to 2027-01-27 07:53:50 +0000 UTC (now=2026-01-27 08:53:55.867111136 +0000 UTC))\\\\\\\"\\\\nI0127 08:53:55.867161 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 08:53:55.867183 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0127 08:53:55.867215 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-231041040/tls.crt::/tmp/serving-cert-231041040/tls.key\\\\\\\"\\\\nI0127 08:53:55.867343 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0127 08:53:55.868835 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b09da7bae5a0f15ab2c5463373193603d1412170828b99490007cee7b067df63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:02Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.514915 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.514962 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.514975 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.515009 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.515023 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:02Z","lastTransitionTime":"2026-01-27T08:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.521367 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:02Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.531923 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5z8px" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7997cb84-9997-4cf4-8794-2eb145a5c324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkwj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5z8px\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:02Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.548943 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46f88416-c54a-4ab5-a4d3-07f71bae9c33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d59e5d6d903bfaaedef392fdd94df2da1e4d965ddeb6b25d64b225725b39dd0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c207bc405194647411332c492e34b0c95d3321cf626a57a4a750fbcea34f54ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8ce06c7792357df5655ff0be13f13c8b06bb1a2eb27c4bbbff11b94a62f29e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7554f25de6e4649eafd52356c5c255d0a2dc73b0dd6c9a9bcea3ee383bef17db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:02Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.552567 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/7997cb84-9997-4cf4-8794-2eb145a5c324-hosts-file\") pod \"node-resolver-5z8px\" (UID: \"7997cb84-9997-4cf4-8794-2eb145a5c324\") " pod="openshift-dns/node-resolver-5z8px" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.552671 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkwj4\" (UniqueName: \"kubernetes.io/projected/7997cb84-9997-4cf4-8794-2eb145a5c324-kube-api-access-pkwj4\") pod \"node-resolver-5z8px\" (UID: \"7997cb84-9997-4cf4-8794-2eb145a5c324\") " pod="openshift-dns/node-resolver-5z8px" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.569552 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a4e626ecb5f5c1f869a0271e66541b76d4ab763322e2d626c4d86af64bccad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:02Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.585786 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b46c56d522e6b6c9c18b7bd9fc104d6657644d0b41fdcad15ac9fb802ca5fd0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://9506e391bd8c293440c55dc71a239b25d6d6b3431ad22c5ca699baa4d09c325b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:02Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.602118 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:02Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.615831 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:02Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.617370 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.617404 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.617414 4985 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.617435 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.617448 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:02Z","lastTransitionTime":"2026-01-27T08:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.629619 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed8d2575835b3ab3d3197bbfbc027e2de0e2aefe10c971838bf4b72f804c4582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f
799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:02Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.654051 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkwj4\" (UniqueName: \"kubernetes.io/projected/7997cb84-9997-4cf4-8794-2eb145a5c324-kube-api-access-pkwj4\") pod \"node-resolver-5z8px\" (UID: \"7997cb84-9997-4cf4-8794-2eb145a5c324\") " pod="openshift-dns/node-resolver-5z8px" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.654114 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/7997cb84-9997-4cf4-8794-2eb145a5c324-hosts-file\") pod \"node-resolver-5z8px\" (UID: \"7997cb84-9997-4cf4-8794-2eb145a5c324\") " pod="openshift-dns/node-resolver-5z8px" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.654238 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: 
\"kubernetes.io/host-path/7997cb84-9997-4cf4-8794-2eb145a5c324-hosts-file\") pod \"node-resolver-5z8px\" (UID: \"7997cb84-9997-4cf4-8794-2eb145a5c324\") " pod="openshift-dns/node-resolver-5z8px" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.692197 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkwj4\" (UniqueName: \"kubernetes.io/projected/7997cb84-9997-4cf4-8794-2eb145a5c324-kube-api-access-pkwj4\") pod \"node-resolver-5z8px\" (UID: \"7997cb84-9997-4cf4-8794-2eb145a5c324\") " pod="openshift-dns/node-resolver-5z8px" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.719545 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.719591 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.719603 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.719622 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.719636 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:02Z","lastTransitionTime":"2026-01-27T08:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.769645 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-5z8px" Jan 27 08:54:02 crc kubenswrapper[4985]: W0127 08:54:02.785795 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7997cb84_9997_4cf4_8794_2eb145a5c324.slice/crio-d0d1549aa93b15a09c267a5cec28b37c30d210848c7c357aaea00bf169911fca WatchSource:0}: Error finding container d0d1549aa93b15a09c267a5cec28b37c30d210848c7c357aaea00bf169911fca: Status 404 returned error can't find the container with id d0d1549aa93b15a09c267a5cec28b37c30d210848c7c357aaea00bf169911fca Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.822340 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.822392 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.822406 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.822425 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.822442 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:02Z","lastTransitionTime":"2026-01-27T08:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.894218 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-lp9n5"] Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.894684 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.896523 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-cqdrf"] Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.899427 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-cqdrf" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.903617 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-rfnvj"] Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.904729 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-rfnvj" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.921253 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.921301 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.921584 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.921726 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.921838 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.926133 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.926334 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.926535 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.926621 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.926646 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.926843 4985 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.926892 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.927683 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.927725 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.927739 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.927760 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.927773 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:02Z","lastTransitionTime":"2026-01-27T08:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.956894 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ea1cd1b8-a185-461d-9302-aa03be205225-cni-binary-copy\") pod \"multus-additional-cni-plugins-rfnvj\" (UID: \"ea1cd1b8-a185-461d-9302-aa03be205225\") " pod="openshift-multus/multus-additional-cni-plugins-rfnvj" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.956936 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c066dd2f-48d4-4f4f-935d-0e772678e610-mcd-auth-proxy-config\") pod \"machine-config-daemon-lp9n5\" (UID: \"c066dd2f-48d4-4f4f-935d-0e772678e610\") " pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.956960 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1ddda14a-730e-4c1f-afea-07c95221ba04-os-release\") pod \"multus-cqdrf\" (UID: \"1ddda14a-730e-4c1f-afea-07c95221ba04\") " pod="openshift-multus/multus-cqdrf" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.956986 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/1ddda14a-730e-4c1f-afea-07c95221ba04-host-run-k8s-cni-cncf-io\") pod \"multus-cqdrf\" (UID: \"1ddda14a-730e-4c1f-afea-07c95221ba04\") " pod="openshift-multus/multus-cqdrf" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.957012 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/1ddda14a-730e-4c1f-afea-07c95221ba04-multus-socket-dir-parent\") pod 
\"multus-cqdrf\" (UID: \"1ddda14a-730e-4c1f-afea-07c95221ba04\") " pod="openshift-multus/multus-cqdrf" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.957029 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/1ddda14a-730e-4c1f-afea-07c95221ba04-host-var-lib-cni-multus\") pod \"multus-cqdrf\" (UID: \"1ddda14a-730e-4c1f-afea-07c95221ba04\") " pod="openshift-multus/multus-cqdrf" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.957048 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ea1cd1b8-a185-461d-9302-aa03be205225-cnibin\") pod \"multus-additional-cni-plugins-rfnvj\" (UID: \"ea1cd1b8-a185-461d-9302-aa03be205225\") " pod="openshift-multus/multus-additional-cni-plugins-rfnvj" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.957264 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1ddda14a-730e-4c1f-afea-07c95221ba04-multus-conf-dir\") pod \"multus-cqdrf\" (UID: \"1ddda14a-730e-4c1f-afea-07c95221ba04\") " pod="openshift-multus/multus-cqdrf" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.957302 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1ddda14a-730e-4c1f-afea-07c95221ba04-multus-cni-dir\") pod \"multus-cqdrf\" (UID: \"1ddda14a-730e-4c1f-afea-07c95221ba04\") " pod="openshift-multus/multus-cqdrf" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.957323 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/1ddda14a-730e-4c1f-afea-07c95221ba04-hostroot\") pod \"multus-cqdrf\" (UID: 
\"1ddda14a-730e-4c1f-afea-07c95221ba04\") " pod="openshift-multus/multus-cqdrf" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.957357 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1ddda14a-730e-4c1f-afea-07c95221ba04-system-cni-dir\") pod \"multus-cqdrf\" (UID: \"1ddda14a-730e-4c1f-afea-07c95221ba04\") " pod="openshift-multus/multus-cqdrf" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.957376 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1ddda14a-730e-4c1f-afea-07c95221ba04-cnibin\") pod \"multus-cqdrf\" (UID: \"1ddda14a-730e-4c1f-afea-07c95221ba04\") " pod="openshift-multus/multus-cqdrf" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.957404 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ea1cd1b8-a185-461d-9302-aa03be205225-os-release\") pod \"multus-additional-cni-plugins-rfnvj\" (UID: \"ea1cd1b8-a185-461d-9302-aa03be205225\") " pod="openshift-multus/multus-additional-cni-plugins-rfnvj" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.957427 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ea1cd1b8-a185-461d-9302-aa03be205225-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rfnvj\" (UID: \"ea1cd1b8-a185-461d-9302-aa03be205225\") " pod="openshift-multus/multus-additional-cni-plugins-rfnvj" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.957446 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/1ddda14a-730e-4c1f-afea-07c95221ba04-multus-daemon-config\") pod \"multus-cqdrf\" (UID: 
\"1ddda14a-730e-4c1f-afea-07c95221ba04\") " pod="openshift-multus/multus-cqdrf" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.957466 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c066dd2f-48d4-4f4f-935d-0e772678e610-proxy-tls\") pod \"machine-config-daemon-lp9n5\" (UID: \"c066dd2f-48d4-4f4f-935d-0e772678e610\") " pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.957506 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vtb6\" (UniqueName: \"kubernetes.io/projected/c066dd2f-48d4-4f4f-935d-0e772678e610-kube-api-access-7vtb6\") pod \"machine-config-daemon-lp9n5\" (UID: \"c066dd2f-48d4-4f4f-935d-0e772678e610\") " pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.957538 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfgfj\" (UniqueName: \"kubernetes.io/projected/ea1cd1b8-a185-461d-9302-aa03be205225-kube-api-access-gfgfj\") pod \"multus-additional-cni-plugins-rfnvj\" (UID: \"ea1cd1b8-a185-461d-9302-aa03be205225\") " pod="openshift-multus/multus-additional-cni-plugins-rfnvj" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.957554 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1ddda14a-730e-4c1f-afea-07c95221ba04-cni-binary-copy\") pod \"multus-cqdrf\" (UID: \"1ddda14a-730e-4c1f-afea-07c95221ba04\") " pod="openshift-multus/multus-cqdrf" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.957576 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/1ddda14a-730e-4c1f-afea-07c95221ba04-etc-kubernetes\") pod \"multus-cqdrf\" (UID: \"1ddda14a-730e-4c1f-afea-07c95221ba04\") " pod="openshift-multus/multus-cqdrf" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.957591 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-258cj\" (UniqueName: \"kubernetes.io/projected/1ddda14a-730e-4c1f-afea-07c95221ba04-kube-api-access-258cj\") pod \"multus-cqdrf\" (UID: \"1ddda14a-730e-4c1f-afea-07c95221ba04\") " pod="openshift-multus/multus-cqdrf" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.957605 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/c066dd2f-48d4-4f4f-935d-0e772678e610-rootfs\") pod \"machine-config-daemon-lp9n5\" (UID: \"c066dd2f-48d4-4f4f-935d-0e772678e610\") " pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.957621 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ea1cd1b8-a185-461d-9302-aa03be205225-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rfnvj\" (UID: \"ea1cd1b8-a185-461d-9302-aa03be205225\") " pod="openshift-multus/multus-additional-cni-plugins-rfnvj" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.957641 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1ddda14a-730e-4c1f-afea-07c95221ba04-host-var-lib-cni-bin\") pod \"multus-cqdrf\" (UID: \"1ddda14a-730e-4c1f-afea-07c95221ba04\") " pod="openshift-multus/multus-cqdrf" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.957655 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1ddda14a-730e-4c1f-afea-07c95221ba04-host-var-lib-kubelet\") pod \"multus-cqdrf\" (UID: \"1ddda14a-730e-4c1f-afea-07c95221ba04\") " pod="openshift-multus/multus-cqdrf" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.957668 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/1ddda14a-730e-4c1f-afea-07c95221ba04-host-run-multus-certs\") pod \"multus-cqdrf\" (UID: \"1ddda14a-730e-4c1f-afea-07c95221ba04\") " pod="openshift-multus/multus-cqdrf" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.957684 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ea1cd1b8-a185-461d-9302-aa03be205225-system-cni-dir\") pod \"multus-additional-cni-plugins-rfnvj\" (UID: \"ea1cd1b8-a185-461d-9302-aa03be205225\") " pod="openshift-multus/multus-additional-cni-plugins-rfnvj" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.957698 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1ddda14a-730e-4c1f-afea-07c95221ba04-host-run-netns\") pod \"multus-cqdrf\" (UID: \"1ddda14a-730e-4c1f-afea-07c95221ba04\") " pod="openshift-multus/multus-cqdrf" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.961781 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c066dd2f-48d4-4f4f-935d-0e772678e610\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vtb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vtb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lp9n5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:02Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:02 crc kubenswrapper[4985]: I0127 08:54:02.994832 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46f88416-c54a-4ab5-a4d3-07f71bae9c33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d59e5d6d903bfaaedef392fdd94df2da1e4d965ddeb6b25d64b225725b39dd0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c207bc405194647411332c492e34b0c95d3321cf626a57a4a750fbcea34f54ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8ce06c7792357df5655ff0be13f13c8b06bb1a2eb27c4bbbff11b94a62f29e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7554f25de6e4649eafd52356c5c255d0a2dc73b0dd6c9a9bcea3ee383bef17db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578
bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:02Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.030385 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.030434 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.030443 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.030467 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.030478 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:03Z","lastTransitionTime":"2026-01-27T08:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.036692 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a4e626ecb5f5c1f869a0271e66541b76d4ab763322e2d626c4d86af64bccad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T
08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:03Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.058910 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ea1cd1b8-a185-461d-9302-aa03be205225-cni-binary-copy\") pod \"multus-additional-cni-plugins-rfnvj\" (UID: \"ea1cd1b8-a185-461d-9302-aa03be205225\") " pod="openshift-multus/multus-additional-cni-plugins-rfnvj" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.058953 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c066dd2f-48d4-4f4f-935d-0e772678e610-mcd-auth-proxy-config\") pod \"machine-config-daemon-lp9n5\" (UID: \"c066dd2f-48d4-4f4f-935d-0e772678e610\") " pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.058986 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1ddda14a-730e-4c1f-afea-07c95221ba04-os-release\") pod \"multus-cqdrf\" (UID: 
\"1ddda14a-730e-4c1f-afea-07c95221ba04\") " pod="openshift-multus/multus-cqdrf" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.059013 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/1ddda14a-730e-4c1f-afea-07c95221ba04-host-run-k8s-cni-cncf-io\") pod \"multus-cqdrf\" (UID: \"1ddda14a-730e-4c1f-afea-07c95221ba04\") " pod="openshift-multus/multus-cqdrf" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.059040 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/1ddda14a-730e-4c1f-afea-07c95221ba04-multus-socket-dir-parent\") pod \"multus-cqdrf\" (UID: \"1ddda14a-730e-4c1f-afea-07c95221ba04\") " pod="openshift-multus/multus-cqdrf" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.059057 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/1ddda14a-730e-4c1f-afea-07c95221ba04-host-var-lib-cni-multus\") pod \"multus-cqdrf\" (UID: \"1ddda14a-730e-4c1f-afea-07c95221ba04\") " pod="openshift-multus/multus-cqdrf" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.059072 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ea1cd1b8-a185-461d-9302-aa03be205225-cnibin\") pod \"multus-additional-cni-plugins-rfnvj\" (UID: \"ea1cd1b8-a185-461d-9302-aa03be205225\") " pod="openshift-multus/multus-additional-cni-plugins-rfnvj" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.059091 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1ddda14a-730e-4c1f-afea-07c95221ba04-multus-conf-dir\") pod \"multus-cqdrf\" (UID: \"1ddda14a-730e-4c1f-afea-07c95221ba04\") " pod="openshift-multus/multus-cqdrf" Jan 27 
08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.059107 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1ddda14a-730e-4c1f-afea-07c95221ba04-multus-cni-dir\") pod \"multus-cqdrf\" (UID: \"1ddda14a-730e-4c1f-afea-07c95221ba04\") " pod="openshift-multus/multus-cqdrf" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.059125 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/1ddda14a-730e-4c1f-afea-07c95221ba04-hostroot\") pod \"multus-cqdrf\" (UID: \"1ddda14a-730e-4c1f-afea-07c95221ba04\") " pod="openshift-multus/multus-cqdrf" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.059150 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1ddda14a-730e-4c1f-afea-07c95221ba04-system-cni-dir\") pod \"multus-cqdrf\" (UID: \"1ddda14a-730e-4c1f-afea-07c95221ba04\") " pod="openshift-multus/multus-cqdrf" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.059169 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1ddda14a-730e-4c1f-afea-07c95221ba04-cnibin\") pod \"multus-cqdrf\" (UID: \"1ddda14a-730e-4c1f-afea-07c95221ba04\") " pod="openshift-multus/multus-cqdrf" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.059185 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ea1cd1b8-a185-461d-9302-aa03be205225-os-release\") pod \"multus-additional-cni-plugins-rfnvj\" (UID: \"ea1cd1b8-a185-461d-9302-aa03be205225\") " pod="openshift-multus/multus-additional-cni-plugins-rfnvj" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.059203 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/ea1cd1b8-a185-461d-9302-aa03be205225-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rfnvj\" (UID: \"ea1cd1b8-a185-461d-9302-aa03be205225\") " pod="openshift-multus/multus-additional-cni-plugins-rfnvj" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.059195 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/1ddda14a-730e-4c1f-afea-07c95221ba04-host-run-k8s-cni-cncf-io\") pod \"multus-cqdrf\" (UID: \"1ddda14a-730e-4c1f-afea-07c95221ba04\") " pod="openshift-multus/multus-cqdrf" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.059246 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/1ddda14a-730e-4c1f-afea-07c95221ba04-hostroot\") pod \"multus-cqdrf\" (UID: \"1ddda14a-730e-4c1f-afea-07c95221ba04\") " pod="openshift-multus/multus-cqdrf" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.059220 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/1ddda14a-730e-4c1f-afea-07c95221ba04-multus-daemon-config\") pod \"multus-cqdrf\" (UID: \"1ddda14a-730e-4c1f-afea-07c95221ba04\") " pod="openshift-multus/multus-cqdrf" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.059356 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c066dd2f-48d4-4f4f-935d-0e772678e610-proxy-tls\") pod \"machine-config-daemon-lp9n5\" (UID: \"c066dd2f-48d4-4f4f-935d-0e772678e610\") " pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.059398 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1ddda14a-730e-4c1f-afea-07c95221ba04-system-cni-dir\") pod \"multus-cqdrf\" 
(UID: \"1ddda14a-730e-4c1f-afea-07c95221ba04\") " pod="openshift-multus/multus-cqdrf" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.059413 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vtb6\" (UniqueName: \"kubernetes.io/projected/c066dd2f-48d4-4f4f-935d-0e772678e610-kube-api-access-7vtb6\") pod \"machine-config-daemon-lp9n5\" (UID: \"c066dd2f-48d4-4f4f-935d-0e772678e610\") " pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.059408 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/1ddda14a-730e-4c1f-afea-07c95221ba04-multus-socket-dir-parent\") pod \"multus-cqdrf\" (UID: \"1ddda14a-730e-4c1f-afea-07c95221ba04\") " pod="openshift-multus/multus-cqdrf" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.059455 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfgfj\" (UniqueName: \"kubernetes.io/projected/ea1cd1b8-a185-461d-9302-aa03be205225-kube-api-access-gfgfj\") pod \"multus-additional-cni-plugins-rfnvj\" (UID: \"ea1cd1b8-a185-461d-9302-aa03be205225\") " pod="openshift-multus/multus-additional-cni-plugins-rfnvj" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.059523 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1ddda14a-730e-4c1f-afea-07c95221ba04-cni-binary-copy\") pod \"multus-cqdrf\" (UID: \"1ddda14a-730e-4c1f-afea-07c95221ba04\") " pod="openshift-multus/multus-cqdrf" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.059544 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1ddda14a-730e-4c1f-afea-07c95221ba04-etc-kubernetes\") pod \"multus-cqdrf\" (UID: 
\"1ddda14a-730e-4c1f-afea-07c95221ba04\") " pod="openshift-multus/multus-cqdrf" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.059572 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-258cj\" (UniqueName: \"kubernetes.io/projected/1ddda14a-730e-4c1f-afea-07c95221ba04-kube-api-access-258cj\") pod \"multus-cqdrf\" (UID: \"1ddda14a-730e-4c1f-afea-07c95221ba04\") " pod="openshift-multus/multus-cqdrf" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.059596 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/c066dd2f-48d4-4f4f-935d-0e772678e610-rootfs\") pod \"machine-config-daemon-lp9n5\" (UID: \"c066dd2f-48d4-4f4f-935d-0e772678e610\") " pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.059618 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ea1cd1b8-a185-461d-9302-aa03be205225-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rfnvj\" (UID: \"ea1cd1b8-a185-461d-9302-aa03be205225\") " pod="openshift-multus/multus-additional-cni-plugins-rfnvj" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.059641 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1ddda14a-730e-4c1f-afea-07c95221ba04-host-var-lib-cni-bin\") pod \"multus-cqdrf\" (UID: \"1ddda14a-730e-4c1f-afea-07c95221ba04\") " pod="openshift-multus/multus-cqdrf" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.059658 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1ddda14a-730e-4c1f-afea-07c95221ba04-host-var-lib-kubelet\") pod \"multus-cqdrf\" (UID: \"1ddda14a-730e-4c1f-afea-07c95221ba04\") " 
pod="openshift-multus/multus-cqdrf" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.059693 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/1ddda14a-730e-4c1f-afea-07c95221ba04-host-run-multus-certs\") pod \"multus-cqdrf\" (UID: \"1ddda14a-730e-4c1f-afea-07c95221ba04\") " pod="openshift-multus/multus-cqdrf" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.059708 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1ddda14a-730e-4c1f-afea-07c95221ba04-cnibin\") pod \"multus-cqdrf\" (UID: \"1ddda14a-730e-4c1f-afea-07c95221ba04\") " pod="openshift-multus/multus-cqdrf" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.059718 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ea1cd1b8-a185-461d-9302-aa03be205225-system-cni-dir\") pod \"multus-additional-cni-plugins-rfnvj\" (UID: \"ea1cd1b8-a185-461d-9302-aa03be205225\") " pod="openshift-multus/multus-additional-cni-plugins-rfnvj" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.059744 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1ddda14a-730e-4c1f-afea-07c95221ba04-host-run-netns\") pod \"multus-cqdrf\" (UID: \"1ddda14a-730e-4c1f-afea-07c95221ba04\") " pod="openshift-multus/multus-cqdrf" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.059817 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ea1cd1b8-a185-461d-9302-aa03be205225-cni-binary-copy\") pod \"multus-additional-cni-plugins-rfnvj\" (UID: \"ea1cd1b8-a185-461d-9302-aa03be205225\") " pod="openshift-multus/multus-additional-cni-plugins-rfnvj" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.059824 4985 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1ddda14a-730e-4c1f-afea-07c95221ba04-host-run-netns\") pod \"multus-cqdrf\" (UID: \"1ddda14a-730e-4c1f-afea-07c95221ba04\") " pod="openshift-multus/multus-cqdrf" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.059817 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c066dd2f-48d4-4f4f-935d-0e772678e610-mcd-auth-proxy-config\") pod \"machine-config-daemon-lp9n5\" (UID: \"c066dd2f-48d4-4f4f-935d-0e772678e610\") " pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.059886 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1ddda14a-730e-4c1f-afea-07c95221ba04-multus-cni-dir\") pod \"multus-cqdrf\" (UID: \"1ddda14a-730e-4c1f-afea-07c95221ba04\") " pod="openshift-multus/multus-cqdrf" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.059867 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ea1cd1b8-a185-461d-9302-aa03be205225-os-release\") pod \"multus-additional-cni-plugins-rfnvj\" (UID: \"ea1cd1b8-a185-461d-9302-aa03be205225\") " pod="openshift-multus/multus-additional-cni-plugins-rfnvj" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.059827 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ea1cd1b8-a185-461d-9302-aa03be205225-cnibin\") pod \"multus-additional-cni-plugins-rfnvj\" (UID: \"ea1cd1b8-a185-461d-9302-aa03be205225\") " pod="openshift-multus/multus-additional-cni-plugins-rfnvj" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.059919 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" 
(UniqueName: \"kubernetes.io/host-path/1ddda14a-730e-4c1f-afea-07c95221ba04-multus-conf-dir\") pod \"multus-cqdrf\" (UID: \"1ddda14a-730e-4c1f-afea-07c95221ba04\") " pod="openshift-multus/multus-cqdrf" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.059201 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1ddda14a-730e-4c1f-afea-07c95221ba04-os-release\") pod \"multus-cqdrf\" (UID: \"1ddda14a-730e-4c1f-afea-07c95221ba04\") " pod="openshift-multus/multus-cqdrf" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.060167 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/1ddda14a-730e-4c1f-afea-07c95221ba04-multus-daemon-config\") pod \"multus-cqdrf\" (UID: \"1ddda14a-730e-4c1f-afea-07c95221ba04\") " pod="openshift-multus/multus-cqdrf" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.060208 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1ddda14a-730e-4c1f-afea-07c95221ba04-host-var-lib-kubelet\") pod \"multus-cqdrf\" (UID: \"1ddda14a-730e-4c1f-afea-07c95221ba04\") " pod="openshift-multus/multus-cqdrf" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.060228 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1ddda14a-730e-4c1f-afea-07c95221ba04-etc-kubernetes\") pod \"multus-cqdrf\" (UID: \"1ddda14a-730e-4c1f-afea-07c95221ba04\") " pod="openshift-multus/multus-cqdrf" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.060251 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/c066dd2f-48d4-4f4f-935d-0e772678e610-rootfs\") pod \"machine-config-daemon-lp9n5\" (UID: \"c066dd2f-48d4-4f4f-935d-0e772678e610\") " 
pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.060280 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/1ddda14a-730e-4c1f-afea-07c95221ba04-host-run-multus-certs\") pod \"multus-cqdrf\" (UID: \"1ddda14a-730e-4c1f-afea-07c95221ba04\") " pod="openshift-multus/multus-cqdrf" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.060299 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1ddda14a-730e-4c1f-afea-07c95221ba04-host-var-lib-cni-bin\") pod \"multus-cqdrf\" (UID: \"1ddda14a-730e-4c1f-afea-07c95221ba04\") " pod="openshift-multus/multus-cqdrf" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.060326 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ea1cd1b8-a185-461d-9302-aa03be205225-system-cni-dir\") pod \"multus-additional-cni-plugins-rfnvj\" (UID: \"ea1cd1b8-a185-461d-9302-aa03be205225\") " pod="openshift-multus/multus-additional-cni-plugins-rfnvj" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.060438 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ea1cd1b8-a185-461d-9302-aa03be205225-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rfnvj\" (UID: \"ea1cd1b8-a185-461d-9302-aa03be205225\") " pod="openshift-multus/multus-additional-cni-plugins-rfnvj" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.060460 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1ddda14a-730e-4c1f-afea-07c95221ba04-cni-binary-copy\") pod \"multus-cqdrf\" (UID: \"1ddda14a-730e-4c1f-afea-07c95221ba04\") " pod="openshift-multus/multus-cqdrf" Jan 27 08:54:03 crc 
kubenswrapper[4985]: I0127 08:54:03.060440 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ea1cd1b8-a185-461d-9302-aa03be205225-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rfnvj\" (UID: \"ea1cd1b8-a185-461d-9302-aa03be205225\") " pod="openshift-multus/multus-additional-cni-plugins-rfnvj" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.060563 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/1ddda14a-730e-4c1f-afea-07c95221ba04-host-var-lib-cni-multus\") pod \"multus-cqdrf\" (UID: \"1ddda14a-730e-4c1f-afea-07c95221ba04\") " pod="openshift-multus/multus-cqdrf" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.065739 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c066dd2f-48d4-4f4f-935d-0e772678e610-proxy-tls\") pod \"machine-config-daemon-lp9n5\" (UID: \"c066dd2f-48d4-4f4f-935d-0e772678e610\") " pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.074377 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b46c56d522e6b6c9c18b7bd9fc104d6657644d0b41fdcad15ac9fb802ca5fd0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9506e391bd8c293440c55dc71a239b25d6d6b3431ad22c5ca699baa4d09c325b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:03Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.087039 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfgfj\" (UniqueName: \"kubernetes.io/projected/ea1cd1b8-a185-461d-9302-aa03be205225-kube-api-access-gfgfj\") pod \"multus-additional-cni-plugins-rfnvj\" (UID: \"ea1cd1b8-a185-461d-9302-aa03be205225\") " pod="openshift-multus/multus-additional-cni-plugins-rfnvj" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.096287 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-258cj\" (UniqueName: \"kubernetes.io/projected/1ddda14a-730e-4c1f-afea-07c95221ba04-kube-api-access-258cj\") pod \"multus-cqdrf\" (UID: \"1ddda14a-730e-4c1f-afea-07c95221ba04\") " pod="openshift-multus/multus-cqdrf" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.096933 4985 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:03Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.103262 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vtb6\" (UniqueName: \"kubernetes.io/projected/c066dd2f-48d4-4f4f-935d-0e772678e610-kube-api-access-7vtb6\") pod \"machine-config-daemon-lp9n5\" (UID: \"c066dd2f-48d4-4f4f-935d-0e772678e610\") " pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.130143 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:03Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.133010 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.133044 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.133054 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.133074 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.133087 4985 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:03Z","lastTransitionTime":"2026-01-27T08:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.151737 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed8d2575835b3ab3d3197bbfbc027e2de0e2aefe10c971838bf4b72f804c4582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:03Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.177404 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7e08bcf-6937-4593-9736-170df821dd88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1dab42ffc04840ad2205b50c568ece39850188ccd9f97d8186c7f0b86e06805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"ima
geID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7f7c69cc8bc134cfbf3552f10eb7abd8b5df115fef6ce86a2658259a9635bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0450f271c01ac528419de0165dd9b88b8d5913062a7cd5affce7050842f91a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"
etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b33bda4f575f8f14a3439f00a39eda2ee61f7b38500f5372c17d014df1098535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c502ce0c2374fb1b79cd8dc7e7c9eba4a67f998fee012827524d775eccbe4de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\
\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d423f866a41d6c8a926fbabe4f0a69519e10a9448502c6363b35cb550ba904d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d423f866a41d6c8a926fbabe4f0a69519e10a9448502c6363b35cb550ba904d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://917f905b67f494d8d44f5a00e53ae72cb9c2b307f7e7da17af7fd20db4ed8704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://917f905b67f494d8d44f5a00e53ae72cb9c2b307f7e7da17af7fd20db4ed8704\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8692ba1b6e917a57164d549dba0fc26ad49b14b9bc8a39772daddfcd5f70a008\\\",\\\"image\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8692ba1b6e917a57164d549dba0fc26ad49b14b9bc8a39772daddfcd5f70a008\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:03Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.201928 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9155a9e-2bdf-4f6f-baa7-0d7c6baac37d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f776820a00f7ca6b6867a188ee633debccb2bc22aaff8c659e452c2667933e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba179c4dde0de76f4a8dbfb8ceb98d2e0dbead5e955bd71171914fd913894412\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dd58dd369dc5654aa8ab2d34c77a69607b51e97ae36947b88caf6a2de7c5fbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd67389a909eaadfb147d17b98a2147bb96aea606ba994568389cb5be9ee6f93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b491f71d942363b5d2bab13a5219b0eb9b7f7617c0978bfe012946aa241a13c1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"message\\\":\\\" 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 08:53:55.866201 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 08:53:55.866247 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 08:53:55.866252 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 08:53:55.866257 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0127 08:53:55.866955 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-231041040/tls.crt::/tmp/serving-cert-231041040/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769504019\\\\\\\\\\\\\\\" (2026-01-27 08:53:39 +0000 UTC to 2026-02-26 08:53:40 +0000 UTC (now=2026-01-27 08:53:55.866923826 +0000 UTC))\\\\\\\"\\\\nI0127 08:53:55.867136 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769504030\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769504030\\\\\\\\\\\\\\\" (2026-01-27 07:53:50 +0000 UTC to 2027-01-27 07:53:50 +0000 UTC 
(now=2026-01-27 08:53:55.867111136 +0000 UTC))\\\\\\\"\\\\nI0127 08:53:55.867161 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 08:53:55.867183 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0127 08:53:55.867215 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-231041040/tls.crt::/tmp/serving-cert-231041040/tls.key\\\\\\\"\\\\nI0127 08:53:55.867343 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0127 08:53:55.868835 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b09da7bae5a0f15ab2c5463373193603d1412170828b99490007cee7b067df63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae8837
39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:03Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.215619 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:03Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.218769 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.227143 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5z8px" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7997cb84-9997-4cf4-8794-2eb145a5c324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkwj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5z8px\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:03Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.230988 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-cqdrf" Jan 27 08:54:03 crc kubenswrapper[4985]: W0127 08:54:03.231201 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc066dd2f_48d4_4f4f_935d_0e772678e610.slice/crio-018beb4b91f28e4337a43672c419a60fc2f785450096832154b7d0c231b0c789 WatchSource:0}: Error finding container 018beb4b91f28e4337a43672c419a60fc2f785450096832154b7d0c231b0c789: Status 404 returned error can't find the container with id 018beb4b91f28e4337a43672c419a60fc2f785450096832154b7d0c231b0c789 Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.234714 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.234751 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.234766 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.234791 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.234805 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:03Z","lastTransitionTime":"2026-01-27T08:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.251341 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46f88416-c54a-4ab5-a4d3-07f71bae9c33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d59e5d6d903bfaaedef392fdd94df2da1e4d965ddeb6b25d64b225725b39dd0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c207bc40519
4647411332c492e34b0c95d3321cf626a57a4a750fbcea34f54ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8ce06c7792357df5655ff0be13f13c8b06bb1a2eb27c4bbbff11b94a62f29e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7554f25de6e4649eafd52356c5c255d0a2dc73b0dd6c9a9bcea3ee383bef17db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:03Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.253548 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-rfnvj" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.271309 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b46c56d522e6b6c9c18b7bd9fc104d6657644d0b41fdcad15ac9fb802ca5fd0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9506e391bd8c293440c55dc71
a239b25d6d6b3431ad22c5ca699baa4d09c325b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:03Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:03 crc kubenswrapper[4985]: W0127 08:54:03.275269 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea1cd1b8_a185_461d_9302_aa03be205225.slice/crio-b7d85b5953520e6218d39552fbdb813f85b3d3b3b75dad221ddaf0ea5d7f5ae3 WatchSource:0}: Error finding container b7d85b5953520e6218d39552fbdb813f85b3d3b3b75dad221ddaf0ea5d7f5ae3: Status 404 returned error can't find the container with id b7d85b5953520e6218d39552fbdb813f85b3d3b3b75dad221ddaf0ea5d7f5ae3 Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.289160 4985 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:03Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.302831 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed8d2575835b3ab3d3197bbfbc027e2de0e2aefe10c971838bf4b72f804c4582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T08:54:03Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.318287 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c066dd2f-48d4-4f4f-935d-0e772678e610\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vtb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vtb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lp9n5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:03Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.322966 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-kqdf4"] Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.323913 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.326536 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.327274 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.327367 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.327705 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.327902 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.328023 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.329492 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.339008 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7e08bcf-6937-4593-9736-170df821dd88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1dab42ffc04840ad2205b50c568ece39850188ccd9f97d8186c7f0b86e06805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7f7c69cc8bc134cfbf3552f10eb7abd8b5df115fef6ce86a2658259a9635bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0450f271c01ac528419de0165dd9b88b8d5913062a7cd5affce7050842f91a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b33bda4f575f8f14a3439f00a39eda2ee61f7b38500f5372c17d014df1098535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"
state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c502ce0c2374fb1b79cd8dc7e7c9eba4a67f998fee012827524d775eccbe4de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d423f866a41d6c8a926fbabe4f0a69519e10a9448502c6363b35cb550ba904d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d423f866a41d6c8a926fbabe4f0a69519e10a9448502c6363b35cb550ba904d3\\\",\\\"exitCode\\\":0,\\
\"finishedAt\\\":\\\"2026-01-27T08:53:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://917f905b67f494d8d44f5a00e53ae72cb9c2b307f7e7da17af7fd20db4ed8704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://917f905b67f494d8d44f5a00e53ae72cb9c2b307f7e7da17af7fd20db4ed8704\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8692ba1b6e917a57164d549dba0fc26ad49b14b9bc8a39772daddfcd5f70a008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8692ba1b6e917a57164d549dba0fc26ad49b14b9bc8a39772daddfcd5f70a008\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",
\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:03Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.341149 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.341200 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.341216 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.341236 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.341250 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:03Z","lastTransitionTime":"2026-01-27T08:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.351822 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5z8px" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7997cb84-9997-4cf4-8794-2eb145a5c324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkwj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5z8px\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:03Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.366195 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-run-systemd\") pod \"ovnkube-node-kqdf4\" (UID: \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\") " pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.366282 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-etc-openvswitch\") pod \"ovnkube-node-kqdf4\" (UID: \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\") " pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.366335 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-node-log\") pod \"ovnkube-node-kqdf4\" (UID: \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\") " pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.366360 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-ovn-node-metrics-cert\") pod \"ovnkube-node-kqdf4\" (UID: \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\") " pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.366405 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfqq2\" (UniqueName: \"kubernetes.io/projected/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-kube-api-access-rfqq2\") pod \"ovnkube-node-kqdf4\" (UID: \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\") " pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.366457 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-host-kubelet\") pod \"ovnkube-node-kqdf4\" (UID: \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\") " pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.366498 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-run-ovn\") pod \"ovnkube-node-kqdf4\" (UID: \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\") " pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.366542 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-host-cni-netd\") pod \"ovnkube-node-kqdf4\" (UID: \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\") " pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.366567 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-systemd-units\") pod \"ovnkube-node-kqdf4\" (UID: \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\") " pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.366587 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-run-openvswitch\") pod \"ovnkube-node-kqdf4\" (UID: \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\") " pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.366648 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-kqdf4\" (UID: \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\") " pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.366665 4985 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-ovnkube-script-lib\") pod \"ovnkube-node-kqdf4\" (UID: \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\") " pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.366702 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-log-socket\") pod \"ovnkube-node-kqdf4\" (UID: \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\") " pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.366729 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-host-run-ovn-kubernetes\") pod \"ovnkube-node-kqdf4\" (UID: \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\") " pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.366745 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-host-cni-bin\") pod \"ovnkube-node-kqdf4\" (UID: \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\") " pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.366766 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-var-lib-openvswitch\") pod \"ovnkube-node-kqdf4\" (UID: \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\") " pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 
08:54:03.366805 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-host-run-netns\") pod \"ovnkube-node-kqdf4\" (UID: \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\") " pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.366822 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-env-overrides\") pod \"ovnkube-node-kqdf4\" (UID: \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\") " pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.366569 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rfnvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea1cd1b8-a185-461d-9302-aa03be205225\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rfnvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:03Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.366903 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-ovnkube-config\") pod \"ovnkube-node-kqdf4\" (UID: \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\") " pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.366957 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-host-slash\") pod \"ovnkube-node-kqdf4\" (UID: \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\") " pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.385869 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a4e626ecb5f5c1f869a0271e66541b76d4ab763322e2d626c4d86af64bccad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:03Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.402691 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:03Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.404803 4985 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-27 08:49:02 +0000 UTC, rotation deadline is 2026-10-12 07:00:27.006499963 +0000 UTC Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.405026 4985 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6190h6m23.601479015s for next certificate rotation Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.408991 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 16:58:02.750350818 +0000 UTC Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.419822 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cqdrf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ddda14a-730e-4c1f-afea-07c95221ba04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-258cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cqdrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:03Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.439843 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9155a9e-2bdf-4f6f-baa7-0d7c6baac37d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f776820a00f7ca6b6867a188ee633debccb2bc22aaff8c659e452c2667933e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba179c4dde0de76f4a8dbfb8ceb98d2e0dbead5e955bd71171914fd913894412\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0dd58dd369dc5654aa8ab2d34c77a69607b51e97ae36947b88caf6a2de7c5fbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd67389a909eaadfb147d17b98a2147bb96aea606ba994568389cb5be9ee6f93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b491f71d942363b5d2bab13a5219b0eb9b7f7617c0978bfe012946aa241a13c1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"message\\\":\\\" 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 08:53:55.866201 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 08:53:55.866247 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 08:53:55.866252 1 shared_informer.go:313] Waiting for 
caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 08:53:55.866257 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0127 08:53:55.866955 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-231041040/tls.crt::/tmp/serving-cert-231041040/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769504019\\\\\\\\\\\\\\\" (2026-01-27 08:53:39 +0000 UTC to 2026-02-26 08:53:40 +0000 UTC (now=2026-01-27 08:53:55.866923826 +0000 UTC))\\\\\\\"\\\\nI0127 08:53:55.867136 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769504030\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769504030\\\\\\\\\\\\\\\" (2026-01-27 07:53:50 +0000 UTC to 2027-01-27 07:53:50 +0000 UTC (now=2026-01-27 08:53:55.867111136 +0000 UTC))\\\\\\\"\\\\nI0127 08:53:55.867161 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 08:53:55.867183 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0127 08:53:55.867215 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-231041040/tls.crt::/tmp/serving-cert-231041040/tls.key\\\\\\\"\\\\nI0127 08:53:55.867343 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0127 08:53:55.868835 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b09da7bae5a0f15ab2c5463373193603d1412170828b99490007cee7b067df63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:03Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.444960 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.445015 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.445026 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.445048 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.445123 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:03Z","lastTransitionTime":"2026-01-27T08:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.463212 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:03Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.468316 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-host-run-netns\") pod \"ovnkube-node-kqdf4\" (UID: \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\") " pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.468358 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-env-overrides\") pod \"ovnkube-node-kqdf4\" (UID: \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\") " pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.468381 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-host-slash\") pod \"ovnkube-node-kqdf4\" (UID: 
\"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\") " pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.468405 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-ovnkube-config\") pod \"ovnkube-node-kqdf4\" (UID: \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\") " pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.468459 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-run-systemd\") pod \"ovnkube-node-kqdf4\" (UID: \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\") " pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.468496 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-ovn-node-metrics-cert\") pod \"ovnkube-node-kqdf4\" (UID: \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\") " pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.468540 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-etc-openvswitch\") pod \"ovnkube-node-kqdf4\" (UID: \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\") " pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.468558 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-node-log\") pod \"ovnkube-node-kqdf4\" (UID: \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.468576 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfqq2\" (UniqueName: \"kubernetes.io/projected/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-kube-api-access-rfqq2\") pod \"ovnkube-node-kqdf4\" (UID: \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\") " pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.468609 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-host-kubelet\") pod \"ovnkube-node-kqdf4\" (UID: \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\") " pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.468627 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-run-ovn\") pod \"ovnkube-node-kqdf4\" (UID: \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\") " pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.468645 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-host-cni-netd\") pod \"ovnkube-node-kqdf4\" (UID: \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\") " pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.468662 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-run-openvswitch\") pod \"ovnkube-node-kqdf4\" (UID: \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\") " pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" Jan 27 08:54:03 crc 
kubenswrapper[4985]: I0127 08:54:03.468683 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-kqdf4\" (UID: \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\") " pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.468707 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-systemd-units\") pod \"ovnkube-node-kqdf4\" (UID: \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\") " pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.468726 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-ovnkube-script-lib\") pod \"ovnkube-node-kqdf4\" (UID: \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\") " pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.468748 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-log-socket\") pod \"ovnkube-node-kqdf4\" (UID: \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\") " pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.468771 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-host-run-ovn-kubernetes\") pod \"ovnkube-node-kqdf4\" (UID: \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\") " pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" Jan 27 08:54:03 crc 
kubenswrapper[4985]: I0127 08:54:03.468790 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-host-cni-bin\") pod \"ovnkube-node-kqdf4\" (UID: \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\") " pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.468817 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-var-lib-openvswitch\") pod \"ovnkube-node-kqdf4\" (UID: \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\") " pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.468889 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-var-lib-openvswitch\") pod \"ovnkube-node-kqdf4\" (UID: \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\") " pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.468944 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-host-slash\") pod \"ovnkube-node-kqdf4\" (UID: \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\") " pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.469218 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-env-overrides\") pod \"ovnkube-node-kqdf4\" (UID: \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\") " pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.469318 4985 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-host-run-netns\") pod \"ovnkube-node-kqdf4\" (UID: \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\") " pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.469381 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-host-cni-netd\") pod \"ovnkube-node-kqdf4\" (UID: \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\") " pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.469428 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-run-systemd\") pod \"ovnkube-node-kqdf4\" (UID: \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\") " pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.469766 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-ovnkube-config\") pod \"ovnkube-node-kqdf4\" (UID: \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\") " pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.469822 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-run-openvswitch\") pod \"ovnkube-node-kqdf4\" (UID: \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\") " pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.469851 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-kqdf4\" (UID: \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\") " pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.469885 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-systemd-units\") pod \"ovnkube-node-kqdf4\" (UID: \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\") " pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.470105 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-host-run-ovn-kubernetes\") pod \"ovnkube-node-kqdf4\" (UID: \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\") " pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.470266 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-log-socket\") pod \"ovnkube-node-kqdf4\" (UID: \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\") " pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.470341 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-ovnkube-script-lib\") pod \"ovnkube-node-kqdf4\" (UID: \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\") " pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.470373 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-host-cni-bin\") 
pod \"ovnkube-node-kqdf4\" (UID: \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\") " pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.470735 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-etc-openvswitch\") pod \"ovnkube-node-kqdf4\" (UID: \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\") " pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.470777 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-node-log\") pod \"ovnkube-node-kqdf4\" (UID: \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\") " pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.470804 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-run-ovn\") pod \"ovnkube-node-kqdf4\" (UID: \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\") " pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.470925 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-host-kubelet\") pod \"ovnkube-node-kqdf4\" (UID: \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\") " pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.474412 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-ovn-node-metrics-cert\") pod \"ovnkube-node-kqdf4\" (UID: \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\") " pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" Jan 27 
08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.479822 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cqdrf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ddda14a-730e-4c1f-afea-07c95221ba04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-258cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cqdrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:03Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.490055 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfqq2\" (UniqueName: \"kubernetes.io/projected/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-kube-api-access-rfqq2\") pod \"ovnkube-node-kqdf4\" (UID: \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\") " pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.497332 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a4e626ecb5f5c1f869a0271e66541b76d4ab763322e2d626c4d86af64bccad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:03Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.510825 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:03Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.525881 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9155a9e-2bdf-4f6f-baa7-0d7c6baac37d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f776820a00f7ca6b6867a188ee633debccb2bc22aaff8c659e452c2667933e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba179c4dde0de76f4a8dbfb8ceb98d2e0dbead5e955bd71171914fd913894412\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dd58dd369dc5654aa8ab2d34c77a69607b51e97ae36947b88caf6a2de7c5fbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd67389a909eaadfb147d17b98a2147bb96aea606ba994568389cb5be9ee6f93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b491f71d942363b5d2bab13a5219b0eb9b7f7617c0978bfe012946aa241a13c1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"message\\\":\\\" 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 08:53:55.866201 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 08:53:55.866247 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 08:53:55.866252 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 08:53:55.866257 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0127 08:53:55.866955 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-231041040/tls.crt::/tmp/serving-cert-231041040/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769504019\\\\\\\\\\\\\\\" (2026-01-27 08:53:39 +0000 UTC to 2026-02-26 08:53:40 +0000 UTC (now=2026-01-27 08:53:55.866923826 +0000 UTC))\\\\\\\"\\\\nI0127 08:53:55.867136 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769504030\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769504030\\\\\\\\\\\\\\\" (2026-01-27 07:53:50 +0000 UTC to 2027-01-27 07:53:50 +0000 UTC (now=2026-01-27 08:53:55.867111136 +0000 UTC))\\\\\\\"\\\\nI0127 08:53:55.867161 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 08:53:55.867183 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0127 08:53:55.867215 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-231041040/tls.crt::/tmp/serving-cert-231041040/tls.key\\\\\\\"\\\\nI0127 08:53:55.867343 1 tlsconfig.go:243] \\\\\\\"Starting 
DynamicServingCertificateController\\\\\\\"\\\\nF0127 08:53:55.868835 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b09da7bae5a0f15ab2c5463373193603d1412170828b99490007cee7b067df63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a750
72abfdba3ae883739\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:03Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.541990 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:03Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.547774 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.548001 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.548109 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:03 crc 
kubenswrapper[4985]: I0127 08:54:03.548216 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.548283 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:03Z","lastTransitionTime":"2026-01-27T08:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.560860 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kqdf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:03Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.573869 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed8d2575835b3ab3d3197bbfbc027e2de0e2aefe10c971838bf4b72f804c4582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:03Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.587728 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c066dd2f-48d4-4f4f-935d-0e772678e610\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vtb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vtb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lp9n5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:03Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.601722 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46f88416-c54a-4ab5-a4d3-07f71bae9c33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d59e5d6d903bfaaedef392fdd94df2da1e4d965ddeb6b25d64b225725b39dd0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controll
er\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c207bc405194647411332c492e34b0c95d3321cf626a57a4a750fbcea34f54ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8ce06c7792357df5655ff0be13f13c8b06bb1a2eb27c4bbbff11b94a62f29e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-
certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7554f25de6e4649eafd52356c5c255d0a2dc73b0dd6c9a9bcea3ee383bef17db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:03Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.615359 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b46c56d522e6b6c9c18b7bd9fc104d6657644d0b41fdcad15ac9fb802ca5fd0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9506e391bd8c293440c55dc71a239b25d6d6b3431ad22c5ca699baa4d09c325b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:03Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.628192 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:03Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.636964 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.643612 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rfnvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea1cd1b8-a185-461d-9302-aa03be205225\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rfnvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:03Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.650019 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.650053 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.650064 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.650082 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.650092 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:03Z","lastTransitionTime":"2026-01-27T08:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.651216 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" event={"ID":"c066dd2f-48d4-4f4f-935d-0e772678e610","Type":"ContainerStarted","Data":"1beb432862ae348b405b1a51f1a187fd469f9df0605fa9e8a39b1c9a34ae8e07"} Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.651266 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" event={"ID":"c066dd2f-48d4-4f4f-935d-0e772678e610","Type":"ContainerStarted","Data":"4d6574971ded4a24364f08396c4f0523ddfd74f76d0bb386072ffb86c812d7da"} Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.651282 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" event={"ID":"c066dd2f-48d4-4f4f-935d-0e772678e610","Type":"ContainerStarted","Data":"018beb4b91f28e4337a43672c419a60fc2f785450096832154b7d0c231b0c789"} Jan 27 08:54:03 crc kubenswrapper[4985]: W0127 08:54:03.652226 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc6239c91_d93d_4db8_ac4b_d44ddbc7c100.slice/crio-4d0ba50e62341f4188f65f227813035f0416e7c9526a0ad88085759e9fa6360a WatchSource:0}: Error finding container 4d0ba50e62341f4188f65f227813035f0416e7c9526a0ad88085759e9fa6360a: Status 404 returned error can't find the container with id 4d0ba50e62341f4188f65f227813035f0416e7c9526a0ad88085759e9fa6360a Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.652428 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-5z8px" event={"ID":"7997cb84-9997-4cf4-8794-2eb145a5c324","Type":"ContainerStarted","Data":"520da40d4496e0fd47e4e091e9eba349e1bd3bf2f97898f6aac53a1b1b8925f6"} Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.652531 4985 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-dns/node-resolver-5z8px" event={"ID":"7997cb84-9997-4cf4-8794-2eb145a5c324","Type":"ContainerStarted","Data":"d0d1549aa93b15a09c267a5cec28b37c30d210848c7c357aaea00bf169911fca"} Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.654223 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rfnvj" event={"ID":"ea1cd1b8-a185-461d-9302-aa03be205225","Type":"ContainerStarted","Data":"3459a9dccba77fc189ecac1d0186520dfbd4ce6ea40d9e5b2fdcf2ddabd7875d"} Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.654312 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rfnvj" event={"ID":"ea1cd1b8-a185-461d-9302-aa03be205225","Type":"ContainerStarted","Data":"b7d85b5953520e6218d39552fbdb813f85b3d3b3b75dad221ddaf0ea5d7f5ae3"} Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.656606 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-cqdrf" event={"ID":"1ddda14a-730e-4c1f-afea-07c95221ba04","Type":"ContainerStarted","Data":"c411afefb82d30542592e4959d2495036c0408459ba2a16aff75223ad0d29287"} Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.656662 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-cqdrf" event={"ID":"1ddda14a-730e-4c1f-afea-07c95221ba04","Type":"ContainerStarted","Data":"d3ab135546415c8d952fe28ffaf1cc3a71cc55d8c251a642db76a824e1caaa2c"} Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.665731 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7e08bcf-6937-4593-9736-170df821dd88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1dab42ffc04840ad2205b50c568ece39850188ccd9f97d8186c7f0b86e06805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7f7c69cc8bc134cfbf3552f10eb7abd8b5df115fef6ce86a2658259a9635bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0450f271c01ac528419de0165dd9b88b8d5913062a7cd5affce7050842f91a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b33bda4f575f8f14a3439f00a39eda2ee61f7b38500f5372c17d014df1098535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c502ce0c2374fb1b79cd8dc7e7c9eba4a67f998fee012827524d775eccbe4de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d423f866a41d6c8a926fbabe4f0a69519e10a9448502c6363b35cb550ba904d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d423f866a41d6c8a926fbabe4f0a69519e10a9448502c6363b35cb550ba904d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T08:53:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://917f905b67f494d8d44f5a00e53ae72cb9c2b307f7e7da17af7fd20db4ed8704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://917f905b67f494d8d44f5a00e53ae72cb9c2b307f7e7da17af7fd20db4ed8704\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8692ba1b6e917a57164d549dba0fc26ad49b14b9bc8a39772daddfcd5f70a008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8692ba1b6e917a57164d549dba0fc26ad49b14b9bc8a39772daddfcd5f70a008\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:03Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.677675 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5z8px" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7997cb84-9997-4cf4-8794-2eb145a5c324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkwj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5z8px\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:03Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.688732 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5z8px" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7997cb84-9997-4cf4-8794-2eb145a5c324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://520da40d4496e0fd47e4e091e9eba349e1bd3bf2f97898f6aac53a1b1b8925f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkwj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5z8px\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:03Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.702939 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rfnvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea1cd1b8-a185-461d-9302-aa03be205225\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3459a9dccba77fc189ecac1d0186520dfbd4ce6ea40d9e5b2fdcf2ddabd7875d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69
b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rfnvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:03Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.722078 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7e08bcf-6937-4593-9736-170df821dd88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1dab42ffc04840ad2205b50c568ece39850188ccd9f97d8186c7f0b86e06805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7f7c69cc8bc134cfbf3552f10eb7abd8b5df115fef6ce86a2658259a9635bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0450f271c01ac528419de0165dd9b88b8d5913062a7cd5affce7050842f91a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b33bda4f575f8f14a3439f00a39eda2ee61f7b38500f5372c17d014df1098535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c502ce0c2374fb1b79cd8dc7e7c9eba4a67f998fee012827524d775eccbe4de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d423f866a41d6c8a926fbabe4f0a69519e10a9448502c6363b35cb550ba904d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d423f866a41d6c8a926fbabe4f0a69519e10a9448502c6363b35cb550ba904d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T08:53:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://917f905b67f494d8d44f5a00e53ae72cb9c2b307f7e7da17af7fd20db4ed8704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://917f905b67f494d8d44f5a00e53ae72cb9c2b307f7e7da17af7fd20db4ed8704\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8692ba1b6e917a57164d549dba0fc26ad49b14b9bc8a39772daddfcd5f70a008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8692ba1b6e917a57164d549dba0fc26ad49b14b9bc8a39772daddfcd5f70a008\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:03Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.737477 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a4e626ecb5f5c1f869a0271e66541b76d4ab763322e2d626c4d86af64bccad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:03Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.751428 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:03Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.752242 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.752429 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.752534 4985 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.752635 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.752702 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:03Z","lastTransitionTime":"2026-01-27T08:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.766735 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cqdrf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ddda14a-730e-4c1f-afea-07c95221ba04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c411afefb82d30542592e4959d2495036c0408459ba
2a16aff75223ad0d29287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernet
es.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-258cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cqdrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:03Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.782401 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:03Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.800614 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kqdf4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:03Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.815131 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9155a9e-2bdf-4f6f-baa7-0d7c6baac37d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f776820a00f7ca6b6867a188ee633debccb2bc22aaff8c659e452c2667933e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba179c4dde0de76f4a8dbfb8ceb98d2e0dbead5e955bd71171914fd913894412\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0dd58dd369dc5654aa8ab2d34c77a69607b51e97ae36947b88caf6a2de7c5fbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd67389a909eaadfb147d17b98a2147bb96aea606ba994568389cb5be9ee6f93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b491f71d942363b5d2bab13a5219b0eb9b7f7617c0978bfe012946aa241a13c1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"message\\\":\\\" 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 08:53:55.866201 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 08:53:55.866247 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 08:53:55.866252 1 shared_informer.go:313] Waiting for 
caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 08:53:55.866257 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0127 08:53:55.866955 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-231041040/tls.crt::/tmp/serving-cert-231041040/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769504019\\\\\\\\\\\\\\\" (2026-01-27 08:53:39 +0000 UTC to 2026-02-26 08:53:40 +0000 UTC (now=2026-01-27 08:53:55.866923826 +0000 UTC))\\\\\\\"\\\\nI0127 08:53:55.867136 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769504030\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769504030\\\\\\\\\\\\\\\" (2026-01-27 07:53:50 +0000 UTC to 2027-01-27 07:53:50 +0000 UTC (now=2026-01-27 08:53:55.867111136 +0000 UTC))\\\\\\\"\\\\nI0127 08:53:55.867161 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 08:53:55.867183 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0127 08:53:55.867215 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-231041040/tls.crt::/tmp/serving-cert-231041040/tls.key\\\\\\\"\\\\nI0127 08:53:55.867343 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0127 08:53:55.868835 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b09da7bae5a0f15ab2c5463373193603d1412170828b99490007cee7b067df63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:03Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.829201 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b46c56d522e6b6c9c18b7bd9fc104d6657644d0b41fdcad15ac9fb802ca5fd0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9506e391bd8c293440c55dc71a239b25d6d6b3431ad22c5ca699baa4d09c325b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:03Z is after 2025-08-24T17:21:41Z" Jan 27 
08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.848456 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:03Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.858551 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.858617 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.858635 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.858661 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.858683 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:03Z","lastTransitionTime":"2026-01-27T08:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.871666 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed8d2575835b3ab3d3197bbfbc027e2de0e2aefe10c971838bf4b72f804c4582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:03Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.905452 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c066dd2f-48d4-4f4f-935d-0e772678e610\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1beb432862ae348b405b1a51f1a187fd469f9df0605fa9e8a39b1c9a34ae8e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",
\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vtb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d6574971ded4a24364f08396c4f0523ddfd74f76d0bb386072ffb86c812d7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vtb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lp9n5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-27T08:54:03Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.939689 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46f88416-c54a-4ab5-a4d3-07f71bae9c33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d59e5d6d903bfaaedef392fdd94df2da1e4d965ddeb6b25d64b225725b39dd0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"
name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c207bc405194647411332c492e34b0c95d3321cf626a57a4a750fbcea34f54ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8ce06c7792357df5655ff0be13f13c8b06bb1a2eb27c4bbbff11b94a62f29e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7554f25de6e4649eafd52356c5c255d0a2dc73b0dd6c9a9bcea3ee383bef17db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io
/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:03Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.961208 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.961247 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.961256 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.961273 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:03 crc kubenswrapper[4985]: I0127 08:54:03.961284 4985 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:03Z","lastTransitionTime":"2026-01-27T08:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:04 crc kubenswrapper[4985]: I0127 08:54:04.064523 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:04 crc kubenswrapper[4985]: I0127 08:54:04.064574 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:04 crc kubenswrapper[4985]: I0127 08:54:04.064587 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:04 crc kubenswrapper[4985]: I0127 08:54:04.064609 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:04 crc kubenswrapper[4985]: I0127 08:54:04.064619 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:04Z","lastTransitionTime":"2026-01-27T08:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:04 crc kubenswrapper[4985]: I0127 08:54:04.166769 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:04 crc kubenswrapper[4985]: I0127 08:54:04.166820 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:04 crc kubenswrapper[4985]: I0127 08:54:04.166828 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:04 crc kubenswrapper[4985]: I0127 08:54:04.166844 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:04 crc kubenswrapper[4985]: I0127 08:54:04.166858 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:04Z","lastTransitionTime":"2026-01-27T08:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:04 crc kubenswrapper[4985]: I0127 08:54:04.176284 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 08:54:04 crc kubenswrapper[4985]: E0127 08:54:04.176533 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-27 08:54:12.176483386 +0000 UTC m=+36.467578227 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 08:54:04 crc kubenswrapper[4985]: I0127 08:54:04.176575 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 08:54:04 crc kubenswrapper[4985]: I0127 08:54:04.176608 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 08:54:04 crc kubenswrapper[4985]: E0127 08:54:04.176701 4985 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 08:54:04 crc kubenswrapper[4985]: E0127 08:54:04.176741 4985 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 08:54:04 crc kubenswrapper[4985]: E0127 08:54:04.176777 4985 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 08:54:12.176755201 +0000 UTC m=+36.467850042 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 08:54:04 crc kubenswrapper[4985]: E0127 08:54:04.176798 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 08:54:12.176790043 +0000 UTC m=+36.467885084 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 08:54:04 crc kubenswrapper[4985]: I0127 08:54:04.269550 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:04 crc kubenswrapper[4985]: I0127 08:54:04.269595 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:04 crc kubenswrapper[4985]: I0127 08:54:04.269606 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:04 crc kubenswrapper[4985]: I0127 08:54:04.269629 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:04 crc kubenswrapper[4985]: I0127 08:54:04.269648 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:04Z","lastTransitionTime":"2026-01-27T08:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:04 crc kubenswrapper[4985]: I0127 08:54:04.277152 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 08:54:04 crc kubenswrapper[4985]: I0127 08:54:04.277188 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 08:54:04 crc kubenswrapper[4985]: E0127 08:54:04.277334 4985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 08:54:04 crc kubenswrapper[4985]: E0127 08:54:04.277350 4985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 08:54:04 crc kubenswrapper[4985]: E0127 08:54:04.277361 4985 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 08:54:04 crc kubenswrapper[4985]: E0127 08:54:04.277410 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl 
podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 08:54:12.277395782 +0000 UTC m=+36.568490623 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 08:54:04 crc kubenswrapper[4985]: E0127 08:54:04.277798 4985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 08:54:04 crc kubenswrapper[4985]: E0127 08:54:04.277813 4985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 08:54:04 crc kubenswrapper[4985]: E0127 08:54:04.277823 4985 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 08:54:04 crc kubenswrapper[4985]: E0127 08:54:04.277853 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 08:54:12.277846466 +0000 UTC m=+36.568941307 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 08:54:04 crc kubenswrapper[4985]: I0127 08:54:04.374052 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:04 crc kubenswrapper[4985]: I0127 08:54:04.374419 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:04 crc kubenswrapper[4985]: I0127 08:54:04.374431 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:04 crc kubenswrapper[4985]: I0127 08:54:04.374453 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:04 crc kubenswrapper[4985]: I0127 08:54:04.374471 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:04Z","lastTransitionTime":"2026-01-27T08:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:04 crc kubenswrapper[4985]: I0127 08:54:04.409577 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 06:51:38.596851705 +0000 UTC Jan 27 08:54:04 crc kubenswrapper[4985]: I0127 08:54:04.451716 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 08:54:04 crc kubenswrapper[4985]: I0127 08:54:04.451813 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 08:54:04 crc kubenswrapper[4985]: I0127 08:54:04.451835 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 08:54:04 crc kubenswrapper[4985]: E0127 08:54:04.451900 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 08:54:04 crc kubenswrapper[4985]: E0127 08:54:04.452026 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 08:54:04 crc kubenswrapper[4985]: E0127 08:54:04.452092 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 08:54:04 crc kubenswrapper[4985]: I0127 08:54:04.477431 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:04 crc kubenswrapper[4985]: I0127 08:54:04.477481 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:04 crc kubenswrapper[4985]: I0127 08:54:04.477492 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:04 crc kubenswrapper[4985]: I0127 08:54:04.477534 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:04 crc kubenswrapper[4985]: I0127 08:54:04.477552 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:04Z","lastTransitionTime":"2026-01-27T08:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:04 crc kubenswrapper[4985]: I0127 08:54:04.580298 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:04 crc kubenswrapper[4985]: I0127 08:54:04.580346 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:04 crc kubenswrapper[4985]: I0127 08:54:04.580358 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:04 crc kubenswrapper[4985]: I0127 08:54:04.580380 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:04 crc kubenswrapper[4985]: I0127 08:54:04.580393 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:04Z","lastTransitionTime":"2026-01-27T08:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:04 crc kubenswrapper[4985]: I0127 08:54:04.667391 4985 generic.go:334] "Generic (PLEG): container finished" podID="ea1cd1b8-a185-461d-9302-aa03be205225" containerID="3459a9dccba77fc189ecac1d0186520dfbd4ce6ea40d9e5b2fdcf2ddabd7875d" exitCode=0 Jan 27 08:54:04 crc kubenswrapper[4985]: I0127 08:54:04.667466 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rfnvj" event={"ID":"ea1cd1b8-a185-461d-9302-aa03be205225","Type":"ContainerDied","Data":"3459a9dccba77fc189ecac1d0186520dfbd4ce6ea40d9e5b2fdcf2ddabd7875d"} Jan 27 08:54:04 crc kubenswrapper[4985]: I0127 08:54:04.670556 4985 generic.go:334] "Generic (PLEG): container finished" podID="c6239c91-d93d-4db8-ac4b-d44ddbc7c100" containerID="8c1c716f751dd0e139554e4bef4fc33694f4afa5482605f651326882af8d99c8" exitCode=0 Jan 27 08:54:04 crc kubenswrapper[4985]: I0127 08:54:04.670970 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" event={"ID":"c6239c91-d93d-4db8-ac4b-d44ddbc7c100","Type":"ContainerDied","Data":"8c1c716f751dd0e139554e4bef4fc33694f4afa5482605f651326882af8d99c8"} Jan 27 08:54:04 crc kubenswrapper[4985]: I0127 08:54:04.671019 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" event={"ID":"c6239c91-d93d-4db8-ac4b-d44ddbc7c100","Type":"ContainerStarted","Data":"4d0ba50e62341f4188f65f227813035f0416e7c9526a0ad88085759e9fa6360a"} Jan 27 08:54:04 crc kubenswrapper[4985]: I0127 08:54:04.683088 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:04 crc kubenswrapper[4985]: I0127 08:54:04.683135 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:04 crc kubenswrapper[4985]: I0127 08:54:04.683144 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 27 08:54:04 crc kubenswrapper[4985]: I0127 08:54:04.683162 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:04 crc kubenswrapper[4985]: I0127 08:54:04.683173 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:04Z","lastTransitionTime":"2026-01-27T08:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:04 crc kubenswrapper[4985]: I0127 08:54:04.688427 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9155a9e-2bdf-4f6f-baa7-0d7c6baac37d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f776820a00f7ca6b6867a188ee633debccb2bc22aaff8c659e452c2667933e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba179c4dde0de76f4a8dbfb8ceb98d2e0dbead5e955bd71171914fd913894412\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0dd58dd369dc5654aa8ab2d34c77a69607b51e97ae36947b88caf6a2de7c5fbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd67389a909eaadfb147d17b98a2147bb96aea606ba994568389cb5be9ee6f93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b491f71d942363b5d2bab13a5219b0eb9b7f7617c0978bfe012946aa241a13c1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"message\\\":\\\" 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 08:53:55.866201 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 08:53:55.866247 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 08:53:55.866252 1 shared_informer.go:313] Waiting for 
caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 08:53:55.866257 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0127 08:53:55.866955 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-231041040/tls.crt::/tmp/serving-cert-231041040/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769504019\\\\\\\\\\\\\\\" (2026-01-27 08:53:39 +0000 UTC to 2026-02-26 08:53:40 +0000 UTC (now=2026-01-27 08:53:55.866923826 +0000 UTC))\\\\\\\"\\\\nI0127 08:53:55.867136 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769504030\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769504030\\\\\\\\\\\\\\\" (2026-01-27 07:53:50 +0000 UTC to 2027-01-27 07:53:50 +0000 UTC (now=2026-01-27 08:53:55.867111136 +0000 UTC))\\\\\\\"\\\\nI0127 08:53:55.867161 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 08:53:55.867183 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0127 08:53:55.867215 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-231041040/tls.crt::/tmp/serving-cert-231041040/tls.key\\\\\\\"\\\\nI0127 08:53:55.867343 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0127 08:53:55.868835 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b09da7bae5a0f15ab2c5463373193603d1412170828b99490007cee7b067df63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:04Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:04 crc kubenswrapper[4985]: I0127 08:54:04.704407 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:04Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:04 crc kubenswrapper[4985]: I0127 08:54:04.728443 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kqdf4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:04Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:04 crc kubenswrapper[4985]: I0127 08:54:04.747325 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46f88416-c54a-4ab5-a4d3-07f71bae9c33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d59e5d6d903bfaaedef392fdd94df2da1e4d965ddeb6b25d64b225725b39dd0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c207bc405194647411332c492e34b0c95d3321cf626a57a4a750fbcea34f54ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8ce06c7792357df5655ff0be13f13c8b06bb1a2eb27c4bbbff11b94a62f29e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7554f25de6e4649eafd52356c5c255d0a2dc73b0dd6c9a9bcea3ee383bef17db\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:04Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:04 crc kubenswrapper[4985]: I0127 08:54:04.762042 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b46c56d522e6b6c9c18b7bd9fc104d6657644d0b41fdcad15ac9fb802ca5fd0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9506e391bd8c293440c55dc71a239b25d6d6b3431ad22c5ca699baa4d09c325b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:04Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:04 crc kubenswrapper[4985]: I0127 08:54:04.777725 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:04Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:04 crc kubenswrapper[4985]: I0127 08:54:04.791674 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:04 crc kubenswrapper[4985]: I0127 08:54:04.791740 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:04 crc 
kubenswrapper[4985]: I0127 08:54:04.791753 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:04 crc kubenswrapper[4985]: I0127 08:54:04.791778 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:04 crc kubenswrapper[4985]: I0127 08:54:04.791794 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:04Z","lastTransitionTime":"2026-01-27T08:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:04 crc kubenswrapper[4985]: I0127 08:54:04.792983 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed8d2575835b3ab3d3197bbfbc027e2de0e2aefe10c971838bf4b72f804c4582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T08:54:04Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:04 crc kubenswrapper[4985]: I0127 08:54:04.810261 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c066dd2f-48d4-4f4f-935d-0e772678e610\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1beb432862ae348b405b1a51f1a187fd469f9df0605fa9e8a39b1c9a34ae8e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vtb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d6574971ded4a24364f08396c4f0523ddfd74f76d0bb386072ffb86c812d7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vtb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lp9n5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:04Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:04 crc kubenswrapper[4985]: I0127 08:54:04.831487 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7e08bcf-6937-4593-9736-170df821dd88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1dab42ffc04840ad2205b50c568ece39850188ccd9f97d8186c7f0b86e06805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7f7c69cc8bc134cfbf3552f10eb7abd8b5df115fef6ce86a2658259a9635bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0450f271c01ac528419de0165dd9b88b8d5913062a7cd5affce7050842f91a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b33bda4f575f8f14a3439f00a39eda2ee61f7b38500f5372c17d014df1098535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c502ce0c2374fb1b79cd8dc7e7c9eba4a67f998fee012827524d775eccbe4de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d423f866a41d6c8a926fbabe4f0a69519e10a9448502c6363b35cb550ba904d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d423f866a41d6c8a926fbabe4f0a69519e10a9448502c6363b35cb550ba904d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T08:53:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://917f905b67f494d8d44f5a00e53ae72cb9c2b307f7e7da17af7fd20db4ed8704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://917f905b67f494d8d44f5a00e53ae72cb9c2b307f7e7da17af7fd20db4ed8704\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8692ba1b6e917a57164d549dba0fc26ad49b14b9bc8a39772daddfcd5f70a008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8692ba1b6e917a57164d549dba0fc26ad49b14b9bc8a39772daddfcd5f70a008\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:04Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:04 crc kubenswrapper[4985]: I0127 08:54:04.844973 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5z8px" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7997cb84-9997-4cf4-8794-2eb145a5c324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://520da40d4496e0fd47e4e091e9eba349e1bd3bf2f97898f6aac53a1b1b8925f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1
b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkwj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5z8px\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:04Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:04 crc kubenswrapper[4985]: I0127 08:54:04.859912 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rfnvj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea1cd1b8-a185-461d-9302-aa03be205225\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3459a9dccba77fc189ecac1d0186520dfbd4ce6ea40d9e5b2fdcf2ddabd7875d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3459a9dccba77fc189ecac1d0186520dfbd4ce6ea40d9e5b2fdcf2ddabd7875d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rfnvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:04Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:04 crc kubenswrapper[4985]: I0127 08:54:04.875760 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a4e626ecb5f5c1f869a0271e66541b76d4ab763322e2d626c4d86af64bccad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:04Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:04 crc kubenswrapper[4985]: I0127 08:54:04.888204 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:04Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:04 crc kubenswrapper[4985]: I0127 08:54:04.894621 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:04 crc kubenswrapper[4985]: I0127 08:54:04.894656 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:04 crc kubenswrapper[4985]: I0127 08:54:04.894666 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:04 crc kubenswrapper[4985]: I0127 08:54:04.894686 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:04 crc kubenswrapper[4985]: I0127 08:54:04.894698 4985 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:04Z","lastTransitionTime":"2026-01-27T08:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:04 crc kubenswrapper[4985]: I0127 08:54:04.904183 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cqdrf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ddda14a-730e-4c1f-afea-07c95221ba04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c411afefb82d30542592e4959d2495036c0408459ba2a16aff75223ad0d29287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-258cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:
54:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cqdrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:04Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:04 crc kubenswrapper[4985]: I0127 08:54:04.917303 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5z8px" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7997cb84-9997-4cf4-8794-2eb145a5c324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://520da40d4496e0fd47e4e091e9eba349e1bd3bf2f97898f6aac53a1b1b8925f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkwj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5z8px\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:04Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:04 crc kubenswrapper[4985]: I0127 08:54:04.934857 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rfnvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea1cd1b8-a185-461d-9302-aa03be205225\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3459a9dccba77fc189ecac1d0186520dfbd4ce6ea40d9e5b2fdcf2ddabd7875d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://3459a9dccba77fc189ecac1d0186520dfbd4ce6ea40d9e5b2fdcf2ddabd7875d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"rea
dy\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rfnvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:04Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:04 crc kubenswrapper[4985]: I0127 08:54:04.966583 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7e08bcf-6937-4593-9736-170df821dd88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1dab42ffc04840ad2205b50c568ece39850188ccd9f97d8186c7f0b86e06805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7f7c69cc8bc134cfbf3552f10eb7abd8b5df115fef6ce86a2658259a9635bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0450f271c01ac528419de0165dd9b88b8d5913062a7cd5affce7050842f91a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b33bda4f575f8f14a3439f00a39eda2ee61f7b38500f5372c17d014df1098535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c502ce0c2374fb1b79cd8dc7e7c9eba4a67f998fee012827524d775eccbe4de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d423f866a41d6c8a926fbabe4f0a69519e10a9448502c6363b35cb550ba904d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d423f866a41d6c8a926fbabe4f0a69519e10a9448502c6363b35cb550ba904d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T08:53:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://917f905b67f494d8d44f5a00e53ae72cb9c2b307f7e7da17af7fd20db4ed8704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://917f905b67f494d8d44f5a00e53ae72cb9c2b307f7e7da17af7fd20db4ed8704\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8692ba1b6e917a57164d549dba0fc26ad49b14b9bc8a39772daddfcd5f70a008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8692ba1b6e917a57164d549dba0fc26ad49b14b9bc8a39772daddfcd5f70a008\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:04Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:04 crc kubenswrapper[4985]: I0127 08:54:04.981015 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:04Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:04 crc kubenswrapper[4985]: I0127 08:54:04.995873 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cqdrf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ddda14a-730e-4c1f-afea-07c95221ba04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c411afefb82d30542592e4959d2495036c0408459ba2a16aff75223ad0d29287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-258cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cqdrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:04Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:04 crc kubenswrapper[4985]: I0127 08:54:04.998603 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:04 crc 
kubenswrapper[4985]: I0127 08:54:04.998635 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:04 crc kubenswrapper[4985]: I0127 08:54:04.998673 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:04 crc kubenswrapper[4985]: I0127 08:54:04.998718 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:04 crc kubenswrapper[4985]: I0127 08:54:04.998737 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:04Z","lastTransitionTime":"2026-01-27T08:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:05 crc kubenswrapper[4985]: I0127 08:54:05.010913 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a4e626ecb5f5c1f869a0271e66541b76d4ab763322e2d626c4d86af64bccad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:05Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:05 crc kubenswrapper[4985]: I0127 08:54:05.039404 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c1c716f751dd0e139554e4bef4fc33694f4afa5482605f651326882af8d99c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c1c716f751dd0e139554e4bef4fc33694f4afa5482605f651326882af8d99c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kqdf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:05Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:05 crc kubenswrapper[4985]: I0127 08:54:05.059275 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9155a9e-2bdf-4f6f-baa7-0d7c6baac37d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f776820a00f7ca6b6867a188ee633debccb2bc22aaff8c659e452c2667933e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba179c4dde0de76f4a8dbfb8ceb98d2e0dbead5e955bd71171914fd913894412\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0dd58dd369dc5654aa8ab2d34c77a69607b51e97ae36947b88caf6a2de7c5fbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd67389a909eaadfb147d17b98a2147bb96aea606ba994568389cb5be9ee6f93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b491f71d942363b5d2bab13a5219b0eb9b7f7617c0978bfe012946aa241a13c1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"message\\\":\\\" 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 08:53:55.866201 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 08:53:55.866247 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 08:53:55.866252 1 shared_informer.go:313] Waiting for 
caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 08:53:55.866257 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0127 08:53:55.866955 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-231041040/tls.crt::/tmp/serving-cert-231041040/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769504019\\\\\\\\\\\\\\\" (2026-01-27 08:53:39 +0000 UTC to 2026-02-26 08:53:40 +0000 UTC (now=2026-01-27 08:53:55.866923826 +0000 UTC))\\\\\\\"\\\\nI0127 08:53:55.867136 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769504030\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769504030\\\\\\\\\\\\\\\" (2026-01-27 07:53:50 +0000 UTC to 2027-01-27 07:53:50 +0000 UTC (now=2026-01-27 08:53:55.867111136 +0000 UTC))\\\\\\\"\\\\nI0127 08:53:55.867161 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 08:53:55.867183 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0127 08:53:55.867215 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-231041040/tls.crt::/tmp/serving-cert-231041040/tls.key\\\\\\\"\\\\nI0127 08:53:55.867343 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0127 08:53:55.868835 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b09da7bae5a0f15ab2c5463373193603d1412170828b99490007cee7b067df63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:05Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:05 crc kubenswrapper[4985]: I0127 08:54:05.073997 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:05Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:05 crc kubenswrapper[4985]: I0127 08:54:05.091299 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:05Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:05 crc kubenswrapper[4985]: I0127 08:54:05.101016 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:05 crc kubenswrapper[4985]: I0127 08:54:05.101081 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:05 crc kubenswrapper[4985]: I0127 08:54:05.101094 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:05 crc kubenswrapper[4985]: I0127 08:54:05.101114 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:05 crc kubenswrapper[4985]: I0127 08:54:05.101690 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:05Z","lastTransitionTime":"2026-01-27T08:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:05 crc kubenswrapper[4985]: I0127 08:54:05.106388 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed8d2575835b3ab3d3197bbfbc027e2de0e2aefe10c971838bf4b72f804c4582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:05Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:05 crc kubenswrapper[4985]: I0127 08:54:05.120796 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c066dd2f-48d4-4f4f-935d-0e772678e610\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1beb432862ae348b405b1a51f1a187fd469f9df0605fa9e8a39b1c9a34ae8e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",
\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vtb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d6574971ded4a24364f08396c4f0523ddfd74f76d0bb386072ffb86c812d7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vtb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lp9n5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-27T08:54:05Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:05 crc kubenswrapper[4985]: I0127 08:54:05.137115 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46f88416-c54a-4ab5-a4d3-07f71bae9c33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d59e5d6d903bfaaedef392fdd94df2da1e4d965ddeb6b25d64b225725b39dd0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"
name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c207bc405194647411332c492e34b0c95d3321cf626a57a4a750fbcea34f54ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8ce06c7792357df5655ff0be13f13c8b06bb1a2eb27c4bbbff11b94a62f29e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7554f25de6e4649eafd52356c5c255d0a2dc73b0dd6c9a9bcea3ee383bef17db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io
/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:05Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:05 crc kubenswrapper[4985]: I0127 08:54:05.156177 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b46c56d522e6b6c9c18b7bd9fc104d6657644d0b41fdcad15ac9fb802ca5fd0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9506e391bd8c293440c55dc71a239b25d6d6b3431ad22c5ca699baa4d09c325b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:05Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:05 crc kubenswrapper[4985]: I0127 08:54:05.204355 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:05 crc kubenswrapper[4985]: I0127 08:54:05.204634 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:05 crc kubenswrapper[4985]: I0127 08:54:05.204732 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:05 crc kubenswrapper[4985]: I0127 08:54:05.204803 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:05 crc kubenswrapper[4985]: I0127 08:54:05.204869 4985 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:05Z","lastTransitionTime":"2026-01-27T08:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:05 crc kubenswrapper[4985]: I0127 08:54:05.307361 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:05 crc kubenswrapper[4985]: I0127 08:54:05.307399 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:05 crc kubenswrapper[4985]: I0127 08:54:05.307409 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:05 crc kubenswrapper[4985]: I0127 08:54:05.307428 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:05 crc kubenswrapper[4985]: I0127 08:54:05.307441 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:05Z","lastTransitionTime":"2026-01-27T08:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:05 crc kubenswrapper[4985]: I0127 08:54:05.409838 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 14:45:29.326628902 +0000 UTC Jan 27 08:54:05 crc kubenswrapper[4985]: I0127 08:54:05.410311 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:05 crc kubenswrapper[4985]: I0127 08:54:05.410357 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:05 crc kubenswrapper[4985]: I0127 08:54:05.410375 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:05 crc kubenswrapper[4985]: I0127 08:54:05.410396 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:05 crc kubenswrapper[4985]: I0127 08:54:05.410409 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:05Z","lastTransitionTime":"2026-01-27T08:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:05 crc kubenswrapper[4985]: I0127 08:54:05.513320 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:05 crc kubenswrapper[4985]: I0127 08:54:05.513377 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:05 crc kubenswrapper[4985]: I0127 08:54:05.513389 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:05 crc kubenswrapper[4985]: I0127 08:54:05.513409 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:05 crc kubenswrapper[4985]: I0127 08:54:05.513426 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:05Z","lastTransitionTime":"2026-01-27T08:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:05 crc kubenswrapper[4985]: I0127 08:54:05.616459 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:05 crc kubenswrapper[4985]: I0127 08:54:05.616524 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:05 crc kubenswrapper[4985]: I0127 08:54:05.616535 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:05 crc kubenswrapper[4985]: I0127 08:54:05.616558 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:05 crc kubenswrapper[4985]: I0127 08:54:05.616569 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:05Z","lastTransitionTime":"2026-01-27T08:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:05 crc kubenswrapper[4985]: I0127 08:54:05.678808 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" event={"ID":"c6239c91-d93d-4db8-ac4b-d44ddbc7c100","Type":"ContainerStarted","Data":"4b176f2daca0d26fc9e3d30a8547b5526c90221ce2932c01c876988e8606b4dd"} Jan 27 08:54:05 crc kubenswrapper[4985]: I0127 08:54:05.678873 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" event={"ID":"c6239c91-d93d-4db8-ac4b-d44ddbc7c100","Type":"ContainerStarted","Data":"4dbb8096184a88c573060b14b7b5cac9a5abdc5af97ad7efdc50ba216e77522c"} Jan 27 08:54:05 crc kubenswrapper[4985]: I0127 08:54:05.678901 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" event={"ID":"c6239c91-d93d-4db8-ac4b-d44ddbc7c100","Type":"ContainerStarted","Data":"e53f9d50e5099f9c55a0bf8a63c31bf7687c7b9df334251189af1c4c02df5dd6"} Jan 27 08:54:05 crc kubenswrapper[4985]: I0127 08:54:05.678916 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" event={"ID":"c6239c91-d93d-4db8-ac4b-d44ddbc7c100","Type":"ContainerStarted","Data":"740b7fd6437a51492236c929096639016d34d684d2742404a428355b3d9ee51e"} Jan 27 08:54:05 crc kubenswrapper[4985]: I0127 08:54:05.678933 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" event={"ID":"c6239c91-d93d-4db8-ac4b-d44ddbc7c100","Type":"ContainerStarted","Data":"f164f612d1435a7f2a6677cb810bc836a44faa77952909e4290e31806ae631ff"} Jan 27 08:54:05 crc kubenswrapper[4985]: I0127 08:54:05.678949 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" event={"ID":"c6239c91-d93d-4db8-ac4b-d44ddbc7c100","Type":"ContainerStarted","Data":"1bfc598056438a398f1ea20e047af950aa776a7e43bdb694728c1c138dff6440"} Jan 27 08:54:05 crc kubenswrapper[4985]: 
I0127 08:54:05.681131 4985 generic.go:334] "Generic (PLEG): container finished" podID="ea1cd1b8-a185-461d-9302-aa03be205225" containerID="6d313b9f0d95603fdc152e96acf5d788d7792df3413c3b8bc25aad20bb702c5d" exitCode=0 Jan 27 08:54:05 crc kubenswrapper[4985]: I0127 08:54:05.681195 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rfnvj" event={"ID":"ea1cd1b8-a185-461d-9302-aa03be205225","Type":"ContainerDied","Data":"6d313b9f0d95603fdc152e96acf5d788d7792df3413c3b8bc25aad20bb702c5d"} Jan 27 08:54:05 crc kubenswrapper[4985]: I0127 08:54:05.697084 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a4e626ecb5f5c1f869a0271e66541b76d4ab763322e2d626c4d86af64bccad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\"
:\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:05Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:05 crc kubenswrapper[4985]: I0127 08:54:05.715982 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:05Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:05 crc kubenswrapper[4985]: I0127 08:54:05.720674 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:05 crc kubenswrapper[4985]: I0127 08:54:05.720719 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:05 crc kubenswrapper[4985]: I0127 08:54:05.720731 4985 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:05 crc kubenswrapper[4985]: I0127 08:54:05.720752 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:05 crc kubenswrapper[4985]: I0127 08:54:05.720767 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:05Z","lastTransitionTime":"2026-01-27T08:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:05 crc kubenswrapper[4985]: I0127 08:54:05.733733 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cqdrf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ddda14a-730e-4c1f-afea-07c95221ba04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c411afefb82d30542592e4959d2495036c0408459ba
2a16aff75223ad0d29287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernet
es.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-258cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cqdrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:05Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:05 crc kubenswrapper[4985]: I0127 08:54:05.749322 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9155a9e-2bdf-4f6f-baa7-0d7c6baac37d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f776820a00f7ca6b6867a188ee633debccb2bc22aaff8c659e452c2667933e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba179c4dde0de76f4a8dbfb8ceb98d2e0dbead5e955bd71171914fd913894412\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0dd58dd369dc5654aa8ab2d34c77a69607b51e97ae36947b88caf6a2de7c5fbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd67389a909eaadfb147d17b98a2147bb96aea606ba994568389cb5be9ee6f93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b491f71d942363b5d2bab13a5219b0eb9b7f7617c0978bfe012946aa241a13c1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"message\\\":\\\" 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 08:53:55.866201 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 08:53:55.866247 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 08:53:55.866252 1 shared_informer.go:313] Waiting for 
caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 08:53:55.866257 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0127 08:53:55.866955 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-231041040/tls.crt::/tmp/serving-cert-231041040/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769504019\\\\\\\\\\\\\\\" (2026-01-27 08:53:39 +0000 UTC to 2026-02-26 08:53:40 +0000 UTC (now=2026-01-27 08:53:55.866923826 +0000 UTC))\\\\\\\"\\\\nI0127 08:53:55.867136 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769504030\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769504030\\\\\\\\\\\\\\\" (2026-01-27 07:53:50 +0000 UTC to 2027-01-27 07:53:50 +0000 UTC (now=2026-01-27 08:53:55.867111136 +0000 UTC))\\\\\\\"\\\\nI0127 08:53:55.867161 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 08:53:55.867183 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0127 08:53:55.867215 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-231041040/tls.crt::/tmp/serving-cert-231041040/tls.key\\\\\\\"\\\\nI0127 08:53:55.867343 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0127 08:53:55.868835 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b09da7bae5a0f15ab2c5463373193603d1412170828b99490007cee7b067df63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:05Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:05 crc kubenswrapper[4985]: I0127 08:54:05.763652 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:05Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:05 crc kubenswrapper[4985]: I0127 08:54:05.787344 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c1c716f751dd0e139554e4bef4fc33694f4afa5482605f651326882af8d99c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c1c716f751dd0e139554e4bef4fc33694f4afa5482605f651326882af8d99c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kqdf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:05Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:05 crc kubenswrapper[4985]: I0127 08:54:05.802282 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c066dd2f-48d4-4f4f-935d-0e772678e610\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1beb432862ae348b405b1a51f1a187fd469f9df0605fa9e8a39b1c9a34ae8e07\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vtb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d6574971ded4a24364f08396c4f0523ddfd74f76d0bb386072ffb86c812d7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vtb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-lp9n5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:05Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:05 crc kubenswrapper[4985]: I0127 08:54:05.818745 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46f88416-c54a-4ab5-a4d3-07f71bae9c33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d59e5d6d903bfaaedef392fdd94df2da1e4d965ddeb6b25d64b225725b39dd0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c207bc405194647411332c492e34b0c95d3321cf626a57a4a750fbcea34f54ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8ce06c7792357df5655ff0be13f13c8b06bb1a2eb27c4bbbff11b94a62f29e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID
\\\":\\\"cri-o://7554f25de6e4649eafd52356c5c255d0a2dc73b0dd6c9a9bcea3ee383bef17db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:05Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:05 crc kubenswrapper[4985]: I0127 08:54:05.823056 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:05 crc kubenswrapper[4985]: I0127 08:54:05.823088 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:05 crc kubenswrapper[4985]: I0127 08:54:05.823096 4985 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 27 08:54:05 crc kubenswrapper[4985]: I0127 08:54:05.823114 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:05 crc kubenswrapper[4985]: I0127 08:54:05.823124 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:05Z","lastTransitionTime":"2026-01-27T08:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:05 crc kubenswrapper[4985]: I0127 08:54:05.837672 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b46c56d522e6b6c9c18b7bd9fc104d6657644d0b41fdcad15ac9fb802ca5fd0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9506e391bd8c293440c55dc71a239b25d6d6b3431ad22c5ca699baa4d09c325b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:05Z is after 2025-08-24T17:21:41Z" 
Jan 27 08:54:05 crc kubenswrapper[4985]: I0127 08:54:05.859460 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:05Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:05 crc kubenswrapper[4985]: I0127 08:54:05.880626 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed8d2575835b3ab3d3197bbfbc027e2de0e2aefe10c971838bf4b72f804c4582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T08:54:05Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:05 crc kubenswrapper[4985]: I0127 08:54:05.917887 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7e08bcf-6937-4593-9736-170df821dd88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1dab42ffc04840ad2205b50c568ece39850188ccd9f97d8186c7f0b86e06805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7f7c69cc8bc134cfbf3552f10eb7abd8b5df115fef6ce86a2658259a9635bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0450f271c01ac528419de0165dd9b88b8d5913062a7cd5affce7050842f91a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b33bda4f575f8f14a3439f00a39eda2ee61f7b38500f5372c17d014df1098535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\
\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c502ce0c2374fb1b79cd8dc7e7c9eba4a67f998fee012827524d775eccbe4de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d423f866a41d6c8a926fbabe4f0a69519e10a9448502c6363b35cb550ba904d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d423f866a41d6c8a926fbabe4f0a69519e10a9448502c6363b35cb550ba904d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://917f905b67f494d8d44f5a00e53ae72cb9c2b307f7e7da17af7fd20db4ed8704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://917f905b67f494d8d44f5a00e53ae72cb9c2b307f7e7da17af7fd20db4ed8704\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8692ba1b6e917a57164d549dba0fc26ad49b14b9bc8a39772daddfcd5f70a008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8692ba1b6e917a57164d549dba0fc26ad49b14b9bc8a39772daddfcd5f70a008\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:39Z\\\"
,\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:05Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:05 crc kubenswrapper[4985]: I0127 08:54:05.926652 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:05 crc kubenswrapper[4985]: I0127 08:54:05.926812 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:05 crc kubenswrapper[4985]: I0127 08:54:05.926911 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:05 crc kubenswrapper[4985]: I0127 08:54:05.927016 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:05 crc kubenswrapper[4985]: I0127 08:54:05.927117 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:05Z","lastTransitionTime":"2026-01-27T08:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no 
CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:05 crc kubenswrapper[4985]: I0127 08:54:05.929243 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5z8px" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7997cb84-9997-4cf4-8794-2eb145a5c324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://520da40d4496e0fd47e4e091e9eba349e1bd3bf2f97898f6aac53a1b1b8925f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-pkwj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5z8px\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:05Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:05 crc kubenswrapper[4985]: I0127 08:54:05.945289 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-dlccz"] Jan 27 08:54:05 crc kubenswrapper[4985]: I0127 08:54:05.946082 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-dlccz" Jan 27 08:54:05 crc kubenswrapper[4985]: I0127 08:54:05.948134 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 27 08:54:05 crc kubenswrapper[4985]: I0127 08:54:05.948361 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 27 08:54:05 crc kubenswrapper[4985]: I0127 08:54:05.948145 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 27 08:54:05 crc kubenswrapper[4985]: I0127 08:54:05.948599 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 27 08:54:05 crc kubenswrapper[4985]: I0127 08:54:05.958076 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rfnvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea1cd1b8-a185-461d-9302-aa03be205225\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3459a9dccba77fc189ecac1d0186520dfbd4ce6ea40d9e5b2fdcf2ddabd7875d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3459a9dccba77fc189ecac1d0186520dfbd4ce6ea40d9e5b2fdcf2ddabd7875d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d313b9f0d95603fdc152e96acf5d788d7792df3413c3b8bc25aad20bb702c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d313b9f0d95603fdc152e96acf5d788d7792df3413c3b8bc25aad20bb702c5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"re
ason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rfnvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:05Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:05 crc kubenswrapper[4985]: I0127 08:54:05.976764 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9155a9e-2bdf-4f6f-baa7-0d7c6baac37d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f776820a00f7ca6b6867a188ee633debccb2bc22aaff8c659e452c2667933e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba179c4dde0de76f4a8dbfb8ceb98d2e0dbead5e955bd71171914fd913894412\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dd58dd369dc5654aa8ab2d34c77a69607b51e97ae36947b88caf6a2de7c5fbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd67389a909eaadfb147d17b98a2147bb96aea606ba994568389cb5be9ee6f93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b491f71d942363b5d2bab13a5219b0eb9b7f7617c0978bfe012946aa241a13c1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"message\\\":\\\" 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 08:53:55.866201 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 08:53:55.866247 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 08:53:55.866252 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 08:53:55.866257 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0127 08:53:55.866955 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-231041040/tls.crt::/tmp/serving-cert-231041040/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769504019\\\\\\\\\\\\\\\" (2026-01-27 08:53:39 +0000 UTC to 2026-02-26 08:53:40 +0000 UTC (now=2026-01-27 08:53:55.866923826 +0000 UTC))\\\\\\\"\\\\nI0127 08:53:55.867136 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769504030\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769504030\\\\\\\\\\\\\\\" (2026-01-27 07:53:50 +0000 UTC to 2027-01-27 07:53:50 +0000 UTC 
(now=2026-01-27 08:53:55.867111136 +0000 UTC))\\\\\\\"\\\\nI0127 08:53:55.867161 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 08:53:55.867183 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0127 08:53:55.867215 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-231041040/tls.crt::/tmp/serving-cert-231041040/tls.key\\\\\\\"\\\\nI0127 08:53:55.867343 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0127 08:53:55.868835 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b09da7bae5a0f15ab2c5463373193603d1412170828b99490007cee7b067df63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae8837
39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:05Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:05 crc kubenswrapper[4985]: I0127 08:54:05.991448 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:05Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:05 crc kubenswrapper[4985]: I0127 08:54:05.994317 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpggh\" (UniqueName: \"kubernetes.io/projected/7ba17902-809a-4efc-9a8c-6f9b611c2af9-kube-api-access-hpggh\") pod \"node-ca-dlccz\" (UID: \"7ba17902-809a-4efc-9a8c-6f9b611c2af9\") " pod="openshift-image-registry/node-ca-dlccz" Jan 27 08:54:05 crc kubenswrapper[4985]: I0127 08:54:05.994382 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/7ba17902-809a-4efc-9a8c-6f9b611c2af9-serviceca\") pod \"node-ca-dlccz\" (UID: \"7ba17902-809a-4efc-9a8c-6f9b611c2af9\") " pod="openshift-image-registry/node-ca-dlccz" Jan 27 08:54:05 crc kubenswrapper[4985]: I0127 08:54:05.994400 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7ba17902-809a-4efc-9a8c-6f9b611c2af9-host\") pod 
\"node-ca-dlccz\" (UID: \"7ba17902-809a-4efc-9a8c-6f9b611c2af9\") " pod="openshift-image-registry/node-ca-dlccz" Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.009053 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c1c716f751dd0e139554e4bef4fc33694f4afa5482605f651326882af8d99c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c1c716f751dd0e139554e4bef4fc33694f4afa5482605f651326882af8d99c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kqdf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:06Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.021003 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed8d2575835b3ab3d3197bbfbc027e2de0e2aefe10c971838bf4b72f804c4582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:06Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.029936 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.029974 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.029984 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.030004 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.030018 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:06Z","lastTransitionTime":"2026-01-27T08:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.035789 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c066dd2f-48d4-4f4f-935d-0e772678e610\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1beb432862ae348b405b1a51f1a187fd469f9df0605fa9e8a39b1c9a34ae8e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vtb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d6574971ded4a24364f08396c4f0523ddfd74f76d0bb386072ffb86c812d7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vtb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lp9n5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:06Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.047159 4985 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-dlccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ba17902-809a-4efc-9a8c-6f9b611c2af9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:06Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.059474 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46f88416-c54a-4ab5-a4d3-07f71bae9c33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d59e5d6d903bfaaedef392fdd94df2da1e4d965ddeb6b25d64b225725b39dd0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c207bc405194647411332c492e34b0c95d3321cf626a57a4a750fbcea34f54ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8ce06c7792357df5655ff0be13f13c8b06bb1a2eb27c4bbbff11b94a62f29e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7554f25de6e4649eafd52356c5c255d0a2dc73b0dd6c9a9bcea3ee383bef17db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:06Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.074786 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b46c56d522e6b6c9c18b7bd9fc104d6657644d0b41fdcad15ac9fb802ca5fd0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9506e391bd8c293440c55dc71a239b25d6d6b3431ad22c5ca699baa4d09c325b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:06Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.087901 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:06Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.095806 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpggh\" (UniqueName: \"kubernetes.io/projected/7ba17902-809a-4efc-9a8c-6f9b611c2af9-kube-api-access-hpggh\") pod \"node-ca-dlccz\" (UID: \"7ba17902-809a-4efc-9a8c-6f9b611c2af9\") " 
pod="openshift-image-registry/node-ca-dlccz" Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.095876 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/7ba17902-809a-4efc-9a8c-6f9b611c2af9-serviceca\") pod \"node-ca-dlccz\" (UID: \"7ba17902-809a-4efc-9a8c-6f9b611c2af9\") " pod="openshift-image-registry/node-ca-dlccz" Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.095903 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7ba17902-809a-4efc-9a8c-6f9b611c2af9-host\") pod \"node-ca-dlccz\" (UID: \"7ba17902-809a-4efc-9a8c-6f9b611c2af9\") " pod="openshift-image-registry/node-ca-dlccz" Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.095979 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7ba17902-809a-4efc-9a8c-6f9b611c2af9-host\") pod \"node-ca-dlccz\" (UID: \"7ba17902-809a-4efc-9a8c-6f9b611c2af9\") " pod="openshift-image-registry/node-ca-dlccz" Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.097137 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/7ba17902-809a-4efc-9a8c-6f9b611c2af9-serviceca\") pod \"node-ca-dlccz\" (UID: \"7ba17902-809a-4efc-9a8c-6f9b611c2af9\") " pod="openshift-image-registry/node-ca-dlccz" Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.107096 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rfnvj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea1cd1b8-a185-461d-9302-aa03be205225\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3459a9dccba77fc189ecac1d0186520dfbd4ce6ea40d9e5b2fdcf2ddabd7875d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3459a9dccba77fc189ecac1d0186520dfbd4ce6ea40d9e5b2fdcf2ddabd7875d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d313b9f0d95603fdc152e96acf5d788d7792df3413c3b8bc25aad20bb702c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d313b9f0d95603fdc152e96acf5d788d7792df3413c3b8bc25aad20bb702c5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rfnvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:06Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.120123 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpggh\" (UniqueName: \"kubernetes.io/projected/7ba17902-809a-4efc-9a8c-6f9b611c2af9-kube-api-access-hpggh\") pod \"node-ca-dlccz\" (UID: \"7ba17902-809a-4efc-9a8c-6f9b611c2af9\") " pod="openshift-image-registry/node-ca-dlccz" Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.131700 4985 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7e08bcf-6937-4593-9736-170df821dd88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1dab42ffc04840ad2205b50c568ece39850188ccd9f97d8186c7f0b86e06805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7f7c69cc8bc134cfbf3552f10eb7abd8b5df115fef6ce
86a2658259a9635bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0450f271c01ac528419de0165dd9b88b8d5913062a7cd5affce7050842f91a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b33bda4f575f8f14a3439f00a39eda2ee61f7b38500f5372c17d014df1098535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c502ce0c2374fb1b79cd8dc7e7c9eba4a67f998fee012827524d775eccbe4de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d423f866a41d6c8a926fbabe4f0a69519e10a9448502c6363b35cb550ba904d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d423f866a41d6c8a926fba
be4f0a69519e10a9448502c6363b35cb550ba904d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://917f905b67f494d8d44f5a00e53ae72cb9c2b307f7e7da17af7fd20db4ed8704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://917f905b67f494d8d44f5a00e53ae72cb9c2b307f7e7da17af7fd20db4ed8704\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8692ba1b6e917a57164d549dba0fc26ad49b14b9bc8a39772daddfcd5f70a008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8692ba1b6e917a57164d549dba0fc26ad49b14b9bc8a39772daddfcd5f70a008\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-d
ir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:06Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.132672 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.132712 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.132729 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.132755 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.132772 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:06Z","lastTransitionTime":"2026-01-27T08:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.145415 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5z8px" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7997cb84-9997-4cf4-8794-2eb145a5c324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://520da40d4496e0fd47e4e091e9eba349e1bd3bf2f97898f6aac53a1b1b8925f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkwj4\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5z8px\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:06Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.157439 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cqdrf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ddda14a-730e-4c1f-afea-07c95221ba04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c411afefb82d30542592e4959d2495036c0408459ba2a16aff75223ad0d29287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-258cj\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cqdrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:06Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.175931 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a4e626ecb5f5c1f869a0271e66541b76d4ab763322e2d626c4d86af64bccad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:06Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.191873 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:06Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.210185 4985 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 27 08:54:06 crc kubenswrapper[4985]: W0127 08:54:06.210805 4985 reflector.go:484] object-"openshift-image-registry"/"node-ca-dockercfg-4777p": watch of *v1.Secret ended with: very short watch: 
object-"openshift-image-registry"/"node-ca-dockercfg-4777p": Unexpected watch close - watch lasted less than a second and no items received Jan 27 08:54:06 crc kubenswrapper[4985]: W0127 08:54:06.212229 4985 reflector.go:484] object-"openshift-image-registry"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-image-registry"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 27 08:54:06 crc kubenswrapper[4985]: W0127 08:54:06.212845 4985 reflector.go:484] object-"openshift-image-registry"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-image-registry"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 27 08:54:06 crc kubenswrapper[4985]: W0127 08:54:06.213029 4985 reflector.go:484] object-"openshift-image-registry"/"image-registry-certificates": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-image-registry"/"image-registry-certificates": Unexpected watch close - watch lasted less than a second and no items received Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.235702 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.235739 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.235747 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.235763 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.235776 4985 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:06Z","lastTransitionTime":"2026-01-27T08:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.265707 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-dlccz" Jan 27 08:54:06 crc kubenswrapper[4985]: W0127 08:54:06.281293 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ba17902_809a_4efc_9a8c_6f9b611c2af9.slice/crio-e9bd466307f4560dbe4cb63cbeb18512d79d8c3a02378fa52d9d6ce51eb93eee WatchSource:0}: Error finding container e9bd466307f4560dbe4cb63cbeb18512d79d8c3a02378fa52d9d6ce51eb93eee: Status 404 returned error can't find the container with id e9bd466307f4560dbe4cb63cbeb18512d79d8c3a02378fa52d9d6ce51eb93eee Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.338661 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.338715 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.338725 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.338742 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.338771 4985 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:06Z","lastTransitionTime":"2026-01-27T08:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.410216 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 01:06:21.188829876 +0000 UTC Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.442025 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.442059 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.442070 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.442087 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.442099 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:06Z","lastTransitionTime":"2026-01-27T08:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.451458 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.451536 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 08:54:06 crc kubenswrapper[4985]: E0127 08:54:06.452709 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.453175 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 08:54:06 crc kubenswrapper[4985]: E0127 08:54:06.453354 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 08:54:06 crc kubenswrapper[4985]: E0127 08:54:06.453470 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.464778 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ba17902-809a-4efc-9a8c-6f9b611c2af9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:06Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.477796 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46f88416-c54a-4ab5-a4d3-07f71bae9c33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d59e5d6d903bfaaedef392fdd94df2da1e4d965ddeb6b25d64b225725b39dd0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c207bc405194647411332c492e34b0c95d3321cf626a57a4a750fbcea34f54ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8ce06c7792357df5655ff0be13f13c8b06bb1a2eb27c4bbbff11b94a62f29e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7554f25de6e4649eafd52356c5c255d0a2dc73b0dd6c9a9bcea3ee383bef17db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:06Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.493712 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b46c56d522e6b6c9c18b7bd9fc104d6657644d0b41fdcad15ac9fb802ca5fd0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9506e391bd8c293440c55dc71a239b25d6d6b3431ad22c5ca699baa4d09c325b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:06Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.509711 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:06Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.527987 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed8d2575835b3ab3d3197bbfbc027e2de0e2aefe10c971838bf4b72f804c4582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T08:54:06Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.544359 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.544399 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.544411 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.544430 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.544440 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:06Z","lastTransitionTime":"2026-01-27T08:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.548416 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c066dd2f-48d4-4f4f-935d-0e772678e610\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1beb432862ae348b405b1a51f1a187fd469f9df0605fa9e8a39b1c9a34ae8e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vtb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d6574971ded4a24364f08396c4f0523ddfd74f76d0bb386072ffb86c812d7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vtb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lp9n5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:06Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.570719 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7e08bcf-6937-4593-9736-170df821dd88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1dab42ffc04840ad2205b50c568ece39850188ccd9f97d8186c7f0b86e06805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7f7c69cc8bc134cfbf3552f10eb7abd8b5df115fef6ce86a2658259a9635bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0450f271c01ac528419de0165dd9b88b8d5913062a7cd5affce7050842f91a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b33bda4f575f8f14a3439f00a39eda2ee61f7b38500f5372c17d014df1098535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c502ce0c2374fb1b79cd8dc7e7c9eba4a67f998fee012827524d775eccbe4de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d423f866a41d6c8a926fbabe4f0a69519e10a9448502c6363b35cb550ba904d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d423f866a41d6c8a926fbabe4f0a69519e10a9448502c6363b35cb550ba904d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T08:53:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://917f905b67f494d8d44f5a00e53ae72cb9c2b307f7e7da17af7fd20db4ed8704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://917f905b67f494d8d44f5a00e53ae72cb9c2b307f7e7da17af7fd20db4ed8704\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8692ba1b6e917a57164d549dba0fc26ad49b14b9bc8a39772daddfcd5f70a008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8692ba1b6e917a57164d549dba0fc26ad49b14b9bc8a39772daddfcd5f70a008\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:06Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.583222 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5z8px" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7997cb84-9997-4cf4-8794-2eb145a5c324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://520da40d4496e0fd47e4e091e9eba349e1bd3bf2f97898f6aac53a1b1b8925f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1
b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkwj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5z8px\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:06Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.601884 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rfnvj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea1cd1b8-a185-461d-9302-aa03be205225\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3459a9dccba77fc189ecac1d0186520dfbd4ce6ea40d9e5b2fdcf2ddabd7875d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3459a9dccba77fc189ecac1d0186520dfbd4ce6ea40d9e5b2fdcf2ddabd7875d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d313b9f0d95603fdc152e96acf5d788d7792df3413c3b8bc25aad20bb702c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d313b9f0d95603fdc152e96acf5d788d7792df3413c3b8bc25aad20bb702c5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rfnvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:06Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.617316 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a4e626ecb5f5c1f869a0271e66541b76d4ab763322e2d626c4d86af64bccad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:06Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.631314 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:06Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.647453 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.647484 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.647441 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cqdrf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ddda14a-730e-4c1f-afea-07c95221ba04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c411afefb82d30542592e4959d2495036c0408459ba2a16aff75223ad0d29287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-258cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cqdrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:06Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.647494 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:06 crc 
kubenswrapper[4985]: I0127 08:54:06.647757 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.647786 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:06Z","lastTransitionTime":"2026-01-27T08:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.661598 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9155a9e-2bdf-4f6f-baa7-0d7c6baac37d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f776820a00f7ca6b6867a188ee633debccb2bc22aaff8c659e452c2667933e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba179c4dde0de76f4a8dbfb8ceb98d2e0dbead5e955bd71171914fd913894412\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0dd58dd369dc5654aa8ab2d34c77a69607b51e97ae36947b88caf6a2de7c5fbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd67389a909eaadfb147d17b98a2147bb96aea606ba994568389cb5be9ee6f93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b491f71d942363b5d2bab13a5219b0eb9b7f7617c0978bfe012946aa241a13c1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"message\\\":\\\" 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 08:53:55.866201 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 08:53:55.866247 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 08:53:55.866252 1 shared_informer.go:313] Waiting for 
caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 08:53:55.866257 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0127 08:53:55.866955 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-231041040/tls.crt::/tmp/serving-cert-231041040/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769504019\\\\\\\\\\\\\\\" (2026-01-27 08:53:39 +0000 UTC to 2026-02-26 08:53:40 +0000 UTC (now=2026-01-27 08:53:55.866923826 +0000 UTC))\\\\\\\"\\\\nI0127 08:53:55.867136 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769504030\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769504030\\\\\\\\\\\\\\\" (2026-01-27 07:53:50 +0000 UTC to 2027-01-27 07:53:50 +0000 UTC (now=2026-01-27 08:53:55.867111136 +0000 UTC))\\\\\\\"\\\\nI0127 08:53:55.867161 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 08:53:55.867183 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0127 08:53:55.867215 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-231041040/tls.crt::/tmp/serving-cert-231041040/tls.key\\\\\\\"\\\\nI0127 08:53:55.867343 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0127 08:53:55.868835 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b09da7bae5a0f15ab2c5463373193603d1412170828b99490007cee7b067df63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:06Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.674165 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:06Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.685969 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-dlccz" event={"ID":"7ba17902-809a-4efc-9a8c-6f9b611c2af9","Type":"ContainerStarted","Data":"00cd6621bcfc77aeb9ecc4d6894b33cd3ef1f975b3c593e0eff812ef51978f23"} Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.686351 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-dlccz" 
event={"ID":"7ba17902-809a-4efc-9a8c-6f9b611c2af9","Type":"ContainerStarted","Data":"e9bd466307f4560dbe4cb63cbeb18512d79d8c3a02378fa52d9d6ce51eb93eee"} Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.688588 4985 generic.go:334] "Generic (PLEG): container finished" podID="ea1cd1b8-a185-461d-9302-aa03be205225" containerID="4035a5eff34b30f1fab897a8d8b6026437a1ab1f521f062241a79d070b94bdc1" exitCode=0 Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.688639 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rfnvj" event={"ID":"ea1cd1b8-a185-461d-9302-aa03be205225","Type":"ContainerDied","Data":"4035a5eff34b30f1fab897a8d8b6026437a1ab1f521f062241a79d070b94bdc1"} Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.694477 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c1c716f751dd0e139554e4bef4fc33694f4afa5482605f651326882af8d99c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://8c1c716f751dd0e139554e4bef4fc33694f4afa5482605f651326882af8d99c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kqdf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:06Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.712443 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a4e626ecb5f5c1f869a0271e66541b76d4ab763322e2d626c4d86af64bccad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:06Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.727196 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:06Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.744059 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cqdrf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ddda14a-730e-4c1f-afea-07c95221ba04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c411afefb82d30542592e4959d2495036c0408459ba2a16aff75223ad0d29287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-258cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cqdrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:06Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.750446 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:06 crc 
kubenswrapper[4985]: I0127 08:54:06.750639 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.750655 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.750811 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.750831 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:06Z","lastTransitionTime":"2026-01-27T08:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.757037 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:06Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.781364 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c1c716f751dd0e139554e4bef4fc33694f4afa5482605f651326882af8d99c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c1c716f751dd0e139554e4bef4fc33694f4afa5482605f651326882af8d99c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kqdf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:06Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.796452 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9155a9e-2bdf-4f6f-baa7-0d7c6baac37d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f776820a00f7ca6b6867a188ee633debccb2bc22aaff8c659e452c2667933e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba179c4dde0de76f4a8dbfb8ceb98d2e0dbead5e955bd71171914fd913894412\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0dd58dd369dc5654aa8ab2d34c77a69607b51e97ae36947b88caf6a2de7c5fbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd67389a909eaadfb147d17b98a2147bb96aea606ba994568389cb5be9ee6f93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b491f71d942363b5d2bab13a5219b0eb9b7f7617c0978bfe012946aa241a13c1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"message\\\":\\\" 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 08:53:55.866201 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 08:53:55.866247 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 08:53:55.866252 1 shared_informer.go:313] Waiting for 
caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 08:53:55.866257 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0127 08:53:55.866955 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-231041040/tls.crt::/tmp/serving-cert-231041040/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769504019\\\\\\\\\\\\\\\" (2026-01-27 08:53:39 +0000 UTC to 2026-02-26 08:53:40 +0000 UTC (now=2026-01-27 08:53:55.866923826 +0000 UTC))\\\\\\\"\\\\nI0127 08:53:55.867136 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769504030\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769504030\\\\\\\\\\\\\\\" (2026-01-27 07:53:50 +0000 UTC to 2027-01-27 07:53:50 +0000 UTC (now=2026-01-27 08:53:55.867111136 +0000 UTC))\\\\\\\"\\\\nI0127 08:53:55.867161 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 08:53:55.867183 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0127 08:53:55.867215 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-231041040/tls.crt::/tmp/serving-cert-231041040/tls.key\\\\\\\"\\\\nI0127 08:53:55.867343 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0127 08:53:55.868835 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b09da7bae5a0f15ab2c5463373193603d1412170828b99490007cee7b067df63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:06Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.810106 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b46c56d522e6b6c9c18b7bd9fc104d6657644d0b41fdcad15ac9fb802ca5fd0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9506e391bd8c293440c55dc71a239b25d6d6b3431ad22c5ca699baa4d09c325b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:06Z is after 2025-08-24T17:21:41Z" Jan 27 
08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.823338 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:06Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.837035 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed8d2575835b3ab3d3197bbfbc027e2de0e2aefe10c971838bf4b72f804c4582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T08:54:06Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.854285 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.854328 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.854341 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.854360 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.854370 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:06Z","lastTransitionTime":"2026-01-27T08:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.867866 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c066dd2f-48d4-4f4f-935d-0e772678e610\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1beb432862ae348b405b1a51f1a187fd469f9df0605fa9e8a39b1c9a34ae8e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vtb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d6574971ded4a24364f08396c4f0523ddfd74f76d0bb386072ffb86c812d7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vtb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lp9n5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:06Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.905360 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlccz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ba17902-809a-4efc-9a8c-6f9b611c2af9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00cd6621bcfc77aeb9ecc4d6894b33cd3ef1f975b3c593e0eff812ef51978f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:06Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.946398 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46f88416-c54a-4ab5-a4d3-07f71bae9c33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d59e5d6d903bfaaedef392fdd94df2da1e4d965ddeb6b25d64b225725b39dd0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c207bc405194647411332c492e34b0c95d3321cf626a57a4a750fbcea34f54ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8ce06c7792357df5655ff0be13f13c8b06bb1a2eb27c4bbbff11b94a62f29e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7554f25de6e4649eafd52356c5c255d0a2dc73b0dd6c9a9bcea3ee383bef17db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:06Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.957382 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.957412 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.957422 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.957439 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.957451 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:06Z","lastTransitionTime":"2026-01-27T08:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:06 crc kubenswrapper[4985]: I0127 08:54:06.986099 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5z8px" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7997cb84-9997-4cf4-8794-2eb145a5c324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://520da40d4496e0fd47e4e091e9eba349e1bd3bf2f97898f6aac53a1b1b8925f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkwj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5z8px\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:06Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:07 crc kubenswrapper[4985]: I0127 08:54:07.028018 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rfnvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea1cd1b8-a185-461d-9302-aa03be205225\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3459a9dccba77fc189ecac1d0186520dfbd4ce6ea40d9e5b2fdcf2ddabd7875d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3459a9dccba77fc189ecac1d0186520dfbd4ce6ea40d9e5b2fdcf2ddabd7875d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d313b9f0d95603fdc152e96acf5d788d7792df3413c3b8bc25aad20bb702c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d313b9f0d95603fdc152e96acf5d788d7792df3413c3b8bc25aad20bb702c5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4035a5eff34b30f1fab897a8d8b6026437a1ab1f521f062241a79d070b94bdc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4035a5eff34b30f1fab897a8d8b6026437a1ab1f521f062241a79d070b94bdc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rfnvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:07Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:07 crc kubenswrapper[4985]: I0127 
08:54:07.060751 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:07 crc kubenswrapper[4985]: I0127 08:54:07.060806 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:07 crc kubenswrapper[4985]: I0127 08:54:07.060819 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:07 crc kubenswrapper[4985]: I0127 08:54:07.060842 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:07 crc kubenswrapper[4985]: I0127 08:54:07.060854 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:07Z","lastTransitionTime":"2026-01-27T08:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:07 crc kubenswrapper[4985]: I0127 08:54:07.074457 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7e08bcf-6937-4593-9736-170df821dd88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1dab42ffc04840ad2205b50c568ece39850188ccd9f97d8186c7f0b86e06805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7f7c69cc8bc134cfbf3552f10eb7abd8b5df115fef6ce86a2658259a9635bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0450f271c01ac528419de0165dd9b88b8d5913062a7cd5affce7050842f91a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b33bda4f575f8f14a3439f00a39eda2ee61f7b38500f5372c17d014df1098535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c502ce0c2374fb1b79cd8dc7e7c9eba4a67f998fee012827524d775eccbe4de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d423f866a41d6c8a926fbabe4f0a69519e10a9448502c6363b35cb550ba904d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d423f866a41d6c8a926fbabe4f0a69519e10a9448502c6363b35cb550ba904d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://917f905b67f494d8d44f5a00e53ae72cb9c2b307f7e7da17af7fd20db4ed8704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://917f905b67f494d8d44f5a00e53ae72cb9c2b307f7e7da17af7fd20db4ed8704\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8692ba1b6e917a57164d549dba0fc26ad49b14b9bc8a39772daddfcd5f70a008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8692ba1b6e917a57164d549dba0fc26ad49b14b9bc8a39772daddfcd5f70a008\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-27T08:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:07Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:07 crc kubenswrapper[4985]: I0127 08:54:07.077771 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 08:54:07 crc kubenswrapper[4985]: I0127 08:54:07.113491 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9155a9e-2bdf-4f6f-baa7-0d7c6baac37d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f776820a00f7ca6b6867a188ee633debccb2bc22aaff8c659e452c2667933e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba179c4dde0de76f4a8dbfb8ceb98d2e0dbead5e955bd71171914fd913894412\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dd58dd369dc5654aa8ab2d34c77a69607b51e97ae36947b88caf6a2de7c5fbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd67389a909eaadfb147d17b98a2147bb96aea606ba994568389cb5be9ee6f93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b491f71d942363b5d2bab13a5219b0eb9b7f7617c0978bfe012946aa241a13c1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T08:53:55Z\\\"
,\\\"message\\\":\\\" 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 08:53:55.866201 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 08:53:55.866247 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 08:53:55.866252 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 08:53:55.866257 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0127 08:53:55.866955 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-231041040/tls.crt::/tmp/serving-cert-231041040/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769504019\\\\\\\\\\\\\\\" (2026-01-27 08:53:39 +0000 UTC to 2026-02-26 08:53:40 +0000 UTC (now=2026-01-27 08:53:55.866923826 +0000 UTC))\\\\\\\"\\\\nI0127 08:53:55.867136 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769504030\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769504030\\\\\\\\\\\\\\\" (2026-01-27 07:53:50 +0000 UTC to 2027-01-27 07:53:50 +0000 UTC (now=2026-01-27 08:53:55.867111136 +0000 UTC))\\\\\\\"\\\\nI0127 08:53:55.867161 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 08:53:55.867183 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0127 
08:53:55.867215 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-231041040/tls.crt::/tmp/serving-cert-231041040/tls.key\\\\\\\"\\\\nI0127 08:53:55.867343 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0127 08:53:55.868835 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b09da7bae5a0f15ab2c5463373193603d1412170828b99490007cee7b067df63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b
335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:07Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:07 crc kubenswrapper[4985]: I0127 08:54:07.148172 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:07Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:07 crc kubenswrapper[4985]: I0127 08:54:07.163731 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:07 crc kubenswrapper[4985]: I0127 08:54:07.163793 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 08:54:07 crc kubenswrapper[4985]: I0127 08:54:07.163806 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:07 crc kubenswrapper[4985]: I0127 08:54:07.163829 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:07 crc kubenswrapper[4985]: I0127 08:54:07.163842 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:07Z","lastTransitionTime":"2026-01-27T08:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:07 crc kubenswrapper[4985]: I0127 08:54:07.204828 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c1c716f751dd0e139554e4bef4fc33694f4afa5482605f651326882af8d99c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://8c1c716f751dd0e139554e4bef4fc33694f4afa5482605f651326882af8d99c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kqdf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:07Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:07 crc kubenswrapper[4985]: I0127 08:54:07.233267 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed8d2575835b3ab3d3197bbfbc027e2de0e2aefe10c971838bf4b72f804c4582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T08:54:07Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:07 crc kubenswrapper[4985]: I0127 08:54:07.239956 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 27 08:54:07 crc kubenswrapper[4985]: I0127 08:54:07.267361 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:07 crc kubenswrapper[4985]: I0127 08:54:07.267463 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:07 crc kubenswrapper[4985]: I0127 08:54:07.267496 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:07 crc kubenswrapper[4985]: I0127 08:54:07.267584 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:07 crc kubenswrapper[4985]: I0127 08:54:07.267613 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:07Z","lastTransitionTime":"2026-01-27T08:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:07 crc kubenswrapper[4985]: I0127 08:54:07.292932 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c066dd2f-48d4-4f4f-935d-0e772678e610\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1beb432862ae348b405b1a51f1a187fd469f9df0605fa9e8a39b1c9a34ae8e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vtb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d6574971ded4a24364f08396c4f0523ddfd74f76d0bb386072ffb86c812d7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vtb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lp9n5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:07Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:07 crc kubenswrapper[4985]: I0127 08:54:07.299429 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 27 08:54:07 crc kubenswrapper[4985]: I0127 08:54:07.353828 4985 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-image-registry/node-ca-dlccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ba17902-809a-4efc-9a8c-6f9b611c2af9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00cd6621bcfc77aeb9ecc4d6894b33cd3ef1f975b3c593e0eff812ef51978f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\
\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:07Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:07 crc kubenswrapper[4985]: I0127 08:54:07.370906 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:07 crc kubenswrapper[4985]: I0127 08:54:07.370973 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:07 crc kubenswrapper[4985]: I0127 08:54:07.370997 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:07 crc kubenswrapper[4985]: I0127 08:54:07.371028 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:07 crc kubenswrapper[4985]: I0127 08:54:07.371052 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:07Z","lastTransitionTime":"2026-01-27T08:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:07 crc kubenswrapper[4985]: I0127 08:54:07.394122 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46f88416-c54a-4ab5-a4d3-07f71bae9c33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d59e5d6d903bfaaedef392fdd94df2da1e4d965ddeb6b25d64b225725b39dd0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c207bc40519
4647411332c492e34b0c95d3321cf626a57a4a750fbcea34f54ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8ce06c7792357df5655ff0be13f13c8b06bb1a2eb27c4bbbff11b94a62f29e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7554f25de6e4649eafd52356c5c255d0a2dc73b0dd6c9a9bcea3ee383bef17db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:07Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:07 crc kubenswrapper[4985]: I0127 08:54:07.411272 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 23:12:52.48767538 +0000 UTC Jan 27 08:54:07 crc kubenswrapper[4985]: I0127 08:54:07.427426 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b46c56d522e6b6c9c18b7bd9fc104d6657644d0b41fdcad15ac9fb802ca5fd0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9506e391bd8c293440c55dc71a239b25d6d6b3431ad22c5ca699baa4d09c325b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:07Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:07 crc kubenswrapper[4985]: I0127 08:54:07.441961 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 27 08:54:07 crc kubenswrapper[4985]: I0127 08:54:07.473976 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:07 crc kubenswrapper[4985]: I0127 08:54:07.474019 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:07 crc kubenswrapper[4985]: I0127 08:54:07.474029 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:07 crc kubenswrapper[4985]: I0127 08:54:07.474045 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:07 crc kubenswrapper[4985]: I0127 
08:54:07.474056 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:07Z","lastTransitionTime":"2026-01-27T08:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:07 crc kubenswrapper[4985]: I0127 08:54:07.495724 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:07Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:07 crc kubenswrapper[4985]: I0127 08:54:07.538591 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rfnvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea1cd1b8-a185-461d-9302-aa03be205225\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3459a9dccba77fc189ecac1d0186520dfbd4ce6ea40d9e5b2fdcf2ddabd7875d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://3459a9dccba77fc189ecac1d0186520dfbd4ce6ea40d9e5b2fdcf2ddabd7875d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d313b9f0d95603fdc152e96acf5d788d7792df3413c3b8bc25aad20bb702c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d313b9f0d95603fdc152e96acf5d788d7792df3413c3b8bc25aad20bb702c5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4035a5eff34b30f1fab897a8d8b6026437a1ab1f521f062241a79d070b94bdc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4035a5eff34b30f1fab897a8d8b6026437a1ab1f521f062241a79d070b94bdc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\
\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\"
,\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rfnvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:07Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:07 crc kubenswrapper[4985]: I0127 08:54:07.577004 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:07 crc kubenswrapper[4985]: I0127 08:54:07.577084 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:07 crc kubenswrapper[4985]: I0127 08:54:07.577108 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:07 crc kubenswrapper[4985]: I0127 08:54:07.577140 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:07 crc kubenswrapper[4985]: I0127 08:54:07.577160 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:07Z","lastTransitionTime":"2026-01-27T08:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:07 crc kubenswrapper[4985]: I0127 08:54:07.585144 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7e08bcf-6937-4593-9736-170df821dd88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1dab42ffc04840ad2205b50c568ece39850188ccd9f97d8186c7f0b86e06805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7f7c69cc8bc134cfbf3552f10eb7abd8b5df115fef6ce86a2658259a9635bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0450f271c01ac528419de0165dd9b88b8d5913062a7cd5affce7050842f91a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b33bda4f575f8f14a3439f00a39eda2ee61f7b38500f5372c17d014df1098535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c502ce0c2374fb1b79cd8dc7e7c9eba4a67f998fee012827524d775eccbe4de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d423f866a41d6c8a926fbabe4f0a69519e10a9448502c6363b35cb550ba904d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d423f866a41d6c8a926fbabe4f0a69519e10a9448502c6363b35cb550ba904d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://917f905b67f494d8d44f5a00e53ae72cb9c2b307f7e7da17af7fd20db4ed8704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://917f905b67f494d8d44f5a00e53ae72cb9c2b307f7e7da17af7fd20db4ed8704\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8692ba1b6e917a57164d549dba0fc26ad49b14b9bc8a39772daddfcd5f70a008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8692ba1b6e917a57164d549dba0fc26ad49b14b9bc8a39772daddfcd5f70a008\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-27T08:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:07Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:07 crc kubenswrapper[4985]: I0127 08:54:07.608250 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5z8px" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7997cb84-9997-4cf4-8794-2eb145a5c324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://520da40d4496e0fd47e4e091e9eba349e1bd3bf2f97898f6aac53a1b1b8925f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkwj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5z8px\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:07Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:07 crc kubenswrapper[4985]: I0127 08:54:07.652703 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cqdrf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ddda14a-730e-4c1f-afea-07c95221ba04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c411afefb82d30542592e4959d2495036c0408459ba2a16aff75223ad0d29287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-258cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cqdrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:07Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:07 crc kubenswrapper[4985]: I0127 08:54:07.680983 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:07 crc kubenswrapper[4985]: I0127 08:54:07.681034 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:07 crc kubenswrapper[4985]: I0127 08:54:07.681044 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:07 crc kubenswrapper[4985]: I0127 08:54:07.681063 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:07 crc kubenswrapper[4985]: I0127 08:54:07.681074 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:07Z","lastTransitionTime":"2026-01-27T08:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:07 crc kubenswrapper[4985]: I0127 08:54:07.689611 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a4e626ecb5f5c1f869a0271e66541b76d4ab763322e2d626c4d86af64bccad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:07Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:07 crc kubenswrapper[4985]: I0127 08:54:07.696693 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" event={"ID":"c6239c91-d93d-4db8-ac4b-d44ddbc7c100","Type":"ContainerStarted","Data":"2a1f1e90a882fa697c344cb87db77d2f0b02b33920f6555ca549ad51b1fb5c8a"} Jan 27 08:54:07 crc kubenswrapper[4985]: I0127 08:54:07.699458 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rfnvj" event={"ID":"ea1cd1b8-a185-461d-9302-aa03be205225","Type":"ContainerDied","Data":"d6da775953e78b30b1f606e26608dbedae8e08fb12eba87f65356bc439079f53"} Jan 27 08:54:07 crc kubenswrapper[4985]: I0127 08:54:07.699451 4985 generic.go:334] "Generic (PLEG): container finished" podID="ea1cd1b8-a185-461d-9302-aa03be205225" containerID="d6da775953e78b30b1f606e26608dbedae8e08fb12eba87f65356bc439079f53" exitCode=0 Jan 27 08:54:07 crc kubenswrapper[4985]: I0127 08:54:07.730197 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:07Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:07 crc kubenswrapper[4985]: I0127 08:54:07.759060 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 27 08:54:07 crc kubenswrapper[4985]: I0127 08:54:07.784763 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:07 crc kubenswrapper[4985]: I0127 08:54:07.784816 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:07 crc kubenswrapper[4985]: I0127 08:54:07.784827 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:07 crc kubenswrapper[4985]: I0127 08:54:07.784847 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:07 crc kubenswrapper[4985]: I0127 08:54:07.785145 
4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:07Z","lastTransitionTime":"2026-01-27T08:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:07 crc kubenswrapper[4985]: I0127 08:54:07.792253 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9155a9e-2bdf-4f6f-baa7-0d7c6baac37d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f776820a00f7ca6b6867a188ee633debccb2bc22aaff8c659e452c2667933e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"rest
artCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba179c4dde0de76f4a8dbfb8ceb98d2e0dbead5e955bd71171914fd913894412\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dd58dd369dc5654aa8ab2d34c77a69607b51e97ae36947b88caf6a2de7c5fbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\
":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd67389a909eaadfb147d17b98a2147bb96aea606ba994568389cb5be9ee6f93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b491f71d942363b5d2bab13a5219b0eb9b7f7617c0978bfe012946aa241a13c1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"message\\\":\\\" 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 08:53:55.866201 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 08:53:55.866247 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 08:53:55.866252 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 08:53:55.866257 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0127 08:53:55.866955 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-231041040/tls.crt::/tmp/serving-cert-231041040/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769504019\\\\\\\\\\\\\\\" (2026-01-27 08:53:39 +0000 UTC to 2026-02-26 08:53:40 +0000 UTC (now=2026-01-27 08:53:55.866923826 +0000 UTC))\\\\\\\"\\\\nI0127 08:53:55.867136 1 
named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769504030\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769504030\\\\\\\\\\\\\\\" (2026-01-27 07:53:50 +0000 UTC to 2027-01-27 07:53:50 +0000 UTC (now=2026-01-27 08:53:55.867111136 +0000 UTC))\\\\\\\"\\\\nI0127 08:53:55.867161 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 08:53:55.867183 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0127 08:53:55.867215 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-231041040/tls.crt::/tmp/serving-cert-231041040/tls.key\\\\\\\"\\\\nI0127 08:53:55.867343 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0127 08:53:55.868835 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b09da7bae5a0f15ab2c5463373193603d1412170828b99490007cee7b067df63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:07Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:07 crc kubenswrapper[4985]: I0127 08:54:07.825363 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:07Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:07 crc kubenswrapper[4985]: I0127 08:54:07.873277 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c1c716f751dd0e139554e4bef4fc33694f4afa5482605f651326882af8d99c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c1c716f751dd0e139554e4bef4fc33694f4afa5482605f651326882af8d99c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kqdf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:07Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:07 crc kubenswrapper[4985]: I0127 08:54:07.887078 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:07 crc kubenswrapper[4985]: I0127 08:54:07.887125 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:07 crc kubenswrapper[4985]: I0127 08:54:07.887137 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:07 crc kubenswrapper[4985]: I0127 08:54:07.887155 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:07 crc kubenswrapper[4985]: I0127 08:54:07.887166 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:07Z","lastTransitionTime":"2026-01-27T08:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:07 crc kubenswrapper[4985]: I0127 08:54:07.910256 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46f88416-c54a-4ab5-a4d3-07f71bae9c33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d59e5d6d903bfaaedef392fdd94df2da1e4d965ddeb6b25d64b225725b39dd0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c207bc40519
4647411332c492e34b0c95d3321cf626a57a4a750fbcea34f54ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8ce06c7792357df5655ff0be13f13c8b06bb1a2eb27c4bbbff11b94a62f29e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7554f25de6e4649eafd52356c5c255d0a2dc73b0dd6c9a9bcea3ee383bef17db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:07Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:07 crc kubenswrapper[4985]: I0127 08:54:07.951208 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b46c56d522e6b6c9c18b7bd9fc104d6657644d0b41fdcad15ac9fb802ca5fd0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9506e391bd8c293440c55dc71a239b25d6d6b3431ad22c5ca699baa4d09c325b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:07Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:07 crc kubenswrapper[4985]: I0127 08:54:07.990486 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:07 crc kubenswrapper[4985]: I0127 08:54:07.990567 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:07 crc kubenswrapper[4985]: I0127 08:54:07.990577 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:07 crc kubenswrapper[4985]: I0127 08:54:07.990600 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:07 crc kubenswrapper[4985]: I0127 08:54:07.990611 4985 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:07Z","lastTransitionTime":"2026-01-27T08:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:07 crc kubenswrapper[4985]: I0127 08:54:07.991158 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:07Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:08 crc kubenswrapper[4985]: I0127 08:54:08.032426 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed8d2575835b3ab3d3197bbfbc027e2de0e2aefe10c971838bf4b72f804c4582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T08:54:08Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:08 crc kubenswrapper[4985]: I0127 08:54:08.066910 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c066dd2f-48d4-4f4f-935d-0e772678e610\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1beb432862ae348b405b1a51f1a187fd469f9df0605fa9e8a39b1c9a34ae8e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vtb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d6574971ded4a24364f08396c4f0523ddfd74f76d0bb386072ffb86c812d7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vtb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lp9n5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:08Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:08 crc kubenswrapper[4985]: I0127 08:54:08.094139 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:08 crc kubenswrapper[4985]: I0127 
08:54:08.094223 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:08 crc kubenswrapper[4985]: I0127 08:54:08.094249 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:08 crc kubenswrapper[4985]: I0127 08:54:08.094279 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:08 crc kubenswrapper[4985]: I0127 08:54:08.094299 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:08Z","lastTransitionTime":"2026-01-27T08:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:08 crc kubenswrapper[4985]: I0127 08:54:08.106024 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlccz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ba17902-809a-4efc-9a8c-6f9b611c2af9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00cd6621bcfc77aeb9ecc4d6894b33cd3ef1f975b3c593e0eff812ef51978f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:08Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:08 crc kubenswrapper[4985]: I0127 08:54:08.164192 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7e08bcf-6937-4593-9736-170df821dd88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1dab42ffc04840ad2205b50c568ece39850188ccd9f97d8186c7f0b86e06805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7f7c69cc8bc134cfbf3552f10eb7abd8b5df115fef6ce86a2658259a9635bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0450f271c01ac528419de0165dd9b88b8d5913062a7cd5affce7050842f91a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08
:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b33bda4f575f8f14a3439f00a39eda2ee61f7b38500f5372c17d014df1098535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c502ce0c2374fb1b79cd8dc7e7c9eba4a67f998fee012827524d775eccbe4de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d423f866a41d6c8a926fbabe4f0a69519e10a9448502c6363b35cb550ba904d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d423f866a41d6c8a926fbabe4f0a69519e10a9448502c6363b35cb550ba904d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://917f905b67f494d8d44f5a00e53ae72cb9c2b307f7e7da17af7fd20db4ed8704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://917f905b67f494d8d44f5a00e53ae72cb9c2b307f7e7da17af7fd20db4ed8704\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8692ba1b6e917a57164d549dba0fc26ad49b14b9bc8a39772daddfcd5f70a008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8692ba1b6e917a57164d549dba0fc26ad49b14b9bc8a39772daddfcd5f70a008\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:08Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:08 crc kubenswrapper[4985]: I0127 08:54:08.198471 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:08 crc kubenswrapper[4985]: I0127 08:54:08.198528 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:08 crc kubenswrapper[4985]: I0127 08:54:08.198541 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:08 crc kubenswrapper[4985]: I0127 08:54:08.198561 4985 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Jan 27 08:54:08 crc kubenswrapper[4985]: I0127 08:54:08.198573 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:08Z","lastTransitionTime":"2026-01-27T08:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:08 crc kubenswrapper[4985]: I0127 08:54:08.224872 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5z8px" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7997cb84-9997-4cf4-8794-2eb145a5c324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://520da40d4496e0fd47e4e091e9eba349e1bd3bf2f97898f6aac53a1b1b8925f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-
dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkwj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5z8px\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:08Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:08 crc kubenswrapper[4985]: I0127 08:54:08.266939 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rfnvj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea1cd1b8-a185-461d-9302-aa03be205225\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3459a9dccba77fc189ecac1d0186520dfbd4ce6ea40d9e5b2fdcf2ddabd7875d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3459a9dccba77fc189ecac1d0186520dfbd4ce6ea40d9e5b2fdcf2ddabd7875d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d313b9f0d95603fdc152e96acf5d788d7792df3413c3b8bc25aad20bb702c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d313b9f0d95603fdc152e96acf5d788d7792df3413c3b8bc25aad20bb702c5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4035a5eff34b30f1fab897a8d8b6026437a1ab1f521f062241a79d070b94bdc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4035a5eff34b30f1fab897a8d8b6026437a1ab1f521f062241a79d070b94bdc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6da775953e78b30b1f606e26608dbedae8e08fb12eba87f65356bc439079f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6da775953e78b30b1f606e26608dbedae8e08fb12eba87f65356bc439079f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-rfnvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:08Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:08 crc kubenswrapper[4985]: I0127 08:54:08.284738 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a4e626ecb5f5c1f869a0271e66541b76d4ab763322e2d626c4d86af64bccad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recur
siveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:08Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:08 crc kubenswrapper[4985]: I0127 08:54:08.301462 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:08 crc kubenswrapper[4985]: I0127 08:54:08.301498 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:08 crc kubenswrapper[4985]: I0127 08:54:08.301535 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:08 crc kubenswrapper[4985]: I0127 08:54:08.301559 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:08 crc kubenswrapper[4985]: I0127 08:54:08.301576 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:08Z","lastTransitionTime":"2026-01-27T08:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:08 crc kubenswrapper[4985]: I0127 08:54:08.308160 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:08Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:08 crc kubenswrapper[4985]: I0127 08:54:08.347917 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cqdrf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ddda14a-730e-4c1f-afea-07c95221ba04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c411afefb82d30542592e4959d2495036c0408459ba2a16aff75223ad0d29287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-258cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cqdrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:08Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:08 crc kubenswrapper[4985]: I0127 08:54:08.404699 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:08 crc 
kubenswrapper[4985]: I0127 08:54:08.404752 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:08 crc kubenswrapper[4985]: I0127 08:54:08.404767 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:08 crc kubenswrapper[4985]: I0127 08:54:08.404787 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:08 crc kubenswrapper[4985]: I0127 08:54:08.404803 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:08Z","lastTransitionTime":"2026-01-27T08:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:08 crc kubenswrapper[4985]: I0127 08:54:08.412084 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 07:56:54.694021725 +0000 UTC Jan 27 08:54:08 crc kubenswrapper[4985]: I0127 08:54:08.451806 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 08:54:08 crc kubenswrapper[4985]: E0127 08:54:08.451977 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 08:54:08 crc kubenswrapper[4985]: I0127 08:54:08.452439 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 08:54:08 crc kubenswrapper[4985]: E0127 08:54:08.452539 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 08:54:08 crc kubenswrapper[4985]: I0127 08:54:08.452579 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 08:54:08 crc kubenswrapper[4985]: E0127 08:54:08.452635 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 08:54:08 crc kubenswrapper[4985]: I0127 08:54:08.507855 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:08 crc kubenswrapper[4985]: I0127 08:54:08.507904 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:08 crc kubenswrapper[4985]: I0127 08:54:08.507931 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:08 crc kubenswrapper[4985]: I0127 08:54:08.507951 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:08 crc kubenswrapper[4985]: I0127 08:54:08.507963 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:08Z","lastTransitionTime":"2026-01-27T08:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:08 crc kubenswrapper[4985]: I0127 08:54:08.610658 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:08 crc kubenswrapper[4985]: I0127 08:54:08.610727 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:08 crc kubenswrapper[4985]: I0127 08:54:08.610745 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:08 crc kubenswrapper[4985]: I0127 08:54:08.610779 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:08 crc kubenswrapper[4985]: I0127 08:54:08.610802 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:08Z","lastTransitionTime":"2026-01-27T08:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:08 crc kubenswrapper[4985]: I0127 08:54:08.709150 4985 generic.go:334] "Generic (PLEG): container finished" podID="ea1cd1b8-a185-461d-9302-aa03be205225" containerID="b32797fe9e64f0e4f0332fa40261fb0feb75b6dc783ff78b0aea5ccb0d4a9ee5" exitCode=0 Jan 27 08:54:08 crc kubenswrapper[4985]: I0127 08:54:08.709204 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rfnvj" event={"ID":"ea1cd1b8-a185-461d-9302-aa03be205225","Type":"ContainerDied","Data":"b32797fe9e64f0e4f0332fa40261fb0feb75b6dc783ff78b0aea5ccb0d4a9ee5"} Jan 27 08:54:08 crc kubenswrapper[4985]: I0127 08:54:08.719482 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:08 crc kubenswrapper[4985]: I0127 08:54:08.719546 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:08 crc kubenswrapper[4985]: I0127 08:54:08.719560 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:08 crc kubenswrapper[4985]: I0127 08:54:08.719580 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:08 crc kubenswrapper[4985]: I0127 08:54:08.719593 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:08Z","lastTransitionTime":"2026-01-27T08:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:08 crc kubenswrapper[4985]: I0127 08:54:08.730126 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5z8px" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7997cb84-9997-4cf4-8794-2eb145a5c324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://520da40d4496e0fd47e4e091e9eba349e1bd3bf2f97898f6aac53a1b1b8925f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkwj4\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5z8px\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:08Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:08 crc kubenswrapper[4985]: I0127 08:54:08.747765 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rfnvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea1cd1b8-a185-461d-9302-aa03be205225\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3459a9dccba77fc189ecac1d0186520dfbd4ce6ea40d9e5b2fdcf2ddabd7875d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3459a9dccba77fc189ecac1d0186520dfbd4ce6ea40d9e5b2fdcf2ddabd7875d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d313b9f0d95603fdc152e96acf5d788d7792df3413c3b8bc25aad20bb702c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d313b9f0d95603fdc152e96acf5d788d7792df3413c3b8bc25aad20bb702c5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4035a5eff34b30f1fab897a8d8b6026437a1a
b1f521f062241a79d070b94bdc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4035a5eff34b30f1fab897a8d8b6026437a1ab1f521f062241a79d070b94bdc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6da775953e78b30b1f606e26608dbedae8e08fb12eba87f65356bc439079f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6da775953e78b30b1f606e26608dbedae8e08fb12eba87f65356bc439079f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
1-27T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b32797fe9e64f0e4f0332fa40261fb0feb75b6dc783ff78b0aea5ccb0d4a9ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b32797fe9e64f0e4f0332fa40261fb0feb75b6dc783ff78b0aea5ccb0d4a9ee5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rfnvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:08Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:08 crc kubenswrapper[4985]: I0127 08:54:08.769670 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7e08bcf-6937-4593-9736-170df821dd88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1dab42ffc04840ad2205b50c568ece39850188ccd9f97d8186c7f0b86e06805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7f7c69cc8bc134cfbf3552f10eb7abd8b5df115fef6ce86a2658259a9635bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0450f271c01ac528419de0165dd9b88b8d5913062a7cd5affce7050842f91a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b33bda4f575f8f14a3439f00a39eda2ee61f7b38500f5372c17d014df1098535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c502ce0c2374fb1b79cd8dc7e7c9eba4a67f998fee012827524d775eccbe4de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d423f866a41d6c8a926fbabe4f0a69519e10a9448502c6363b35cb550ba904d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d423f866a41d6c8a926fbabe4f0a69519e10a9448502c6363b35cb550ba904d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T08:53:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://917f905b67f494d8d44f5a00e53ae72cb9c2b307f7e7da17af7fd20db4ed8704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://917f905b67f494d8d44f5a00e53ae72cb9c2b307f7e7da17af7fd20db4ed8704\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8692ba1b6e917a57164d549dba0fc26ad49b14b9bc8a39772daddfcd5f70a008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8692ba1b6e917a57164d549dba0fc26ad49b14b9bc8a39772daddfcd5f70a008\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:08Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:08 crc kubenswrapper[4985]: I0127 08:54:08.784202 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:08Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:08 crc kubenswrapper[4985]: I0127 08:54:08.800651 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cqdrf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ddda14a-730e-4c1f-afea-07c95221ba04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c411afefb82d30542592e4959d2495036c0408459ba2a16aff75223ad0d29287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-258cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cqdrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:08Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:08 crc kubenswrapper[4985]: I0127 08:54:08.823123 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:08 crc 
kubenswrapper[4985]: I0127 08:54:08.823196 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 08:54:08 crc kubenswrapper[4985]: I0127 08:54:08.823216 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 08:54:08 crc kubenswrapper[4985]: I0127 08:54:08.823251 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 08:54:08 crc kubenswrapper[4985]: I0127 08:54:08.823274 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:08Z","lastTransitionTime":"2026-01-27T08:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 08:54:08 crc kubenswrapper[4985]: I0127 08:54:08.824988 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a4e626ecb5f5c1f869a0271e66541b76d4ab763322e2d626c4d86af64bccad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:08Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:08 crc kubenswrapper[4985]: I0127 08:54:08.856590 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c1c716f751dd0e139554e4bef4fc33694f4afa5482605f651326882af8d99c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c1c716f751dd0e139554e4bef4fc33694f4afa5482605f651326882af8d99c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kqdf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:08Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:08 crc kubenswrapper[4985]: I0127 08:54:08.873433 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9155a9e-2bdf-4f6f-baa7-0d7c6baac37d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f776820a00f7ca6b6867a188ee633debccb2bc22aaff8c659e452c2667933e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335
e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba179c4dde0de76f4a8dbfb8ceb98d2e0dbead5e955bd71171914fd913894412\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dd58dd369dc5654aa8ab2d34c77a69607b51e97ae36947b88caf6a2de7c5fbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd67389a909eaadfb147d17b98a2147bb96aea606ba994568389cb5be9ee6f93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b491f71d942363b5d2bab13a5219b0eb9b7f7617c0978bfe012946aa241a13c1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"message\\\":\\\" 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 08:53:55.866201 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 08:53:55.866247 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 08:53:55.866252 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 08:53:55.866257 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0127 08:53:55.866955 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-231041040/tls.crt::/tmp/serving-cert-231041040/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] 
validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769504019\\\\\\\\\\\\\\\" (2026-01-27 08:53:39 +0000 UTC to 2026-02-26 08:53:40 +0000 UTC (now=2026-01-27 08:53:55.866923826 +0000 UTC))\\\\\\\"\\\\nI0127 08:53:55.867136 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769504030\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769504030\\\\\\\\\\\\\\\" (2026-01-27 07:53:50 +0000 UTC to 2027-01-27 07:53:50 +0000 UTC (now=2026-01-27 08:53:55.867111136 +0000 UTC))\\\\\\\"\\\\nI0127 08:53:55.867161 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 08:53:55.867183 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0127 08:53:55.867215 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-231041040/tls.crt::/tmp/serving-cert-231041040/tls.key\\\\\\\"\\\\nI0127 08:53:55.867343 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0127 08:53:55.868835 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b09da7bae5a0f15ab2c5463373193603d1412170828b99490007cee7b067df63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:08Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:08 crc kubenswrapper[4985]: I0127 08:54:08.889314 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:08Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:08 crc kubenswrapper[4985]: I0127 08:54:08.909944 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:08Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:08 crc kubenswrapper[4985]: I0127 08:54:08.925639 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:08 crc kubenswrapper[4985]: I0127 08:54:08.925685 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:08 crc kubenswrapper[4985]: I0127 08:54:08.925697 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:08 crc kubenswrapper[4985]: I0127 08:54:08.925716 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:08 crc kubenswrapper[4985]: I0127 08:54:08.925729 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:08Z","lastTransitionTime":"2026-01-27T08:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:08 crc kubenswrapper[4985]: I0127 08:54:08.929617 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed8d2575835b3ab3d3197bbfbc027e2de0e2aefe10c971838bf4b72f804c4582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:08Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:08 crc kubenswrapper[4985]: I0127 08:54:08.948492 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c066dd2f-48d4-4f4f-935d-0e772678e610\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1beb432862ae348b405b1a51f1a187fd469f9df0605fa9e8a39b1c9a34ae8e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",
\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vtb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d6574971ded4a24364f08396c4f0523ddfd74f76d0bb386072ffb86c812d7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vtb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lp9n5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-27T08:54:08Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:08 crc kubenswrapper[4985]: I0127 08:54:08.961750 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ba17902-809a-4efc-9a8c-6f9b611c2af9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00cd6621bcfc77aeb9ecc4d6894b33cd3ef1f975b3c593e0eff812ef51978f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"
name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:08Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:08 crc kubenswrapper[4985]: I0127 08:54:08.977598 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46f88416-c54a-4ab5-a4d3-07f71bae9c33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d59e5d6d903bfaaedef392fdd94df2da1e4d965ddeb6b25d64b225725b39dd0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c207bc405194647411332c492e34b0c95d3321cf626a57a4a750fbcea34f54ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8ce06c7792357df5655ff0be13f13c8b06bb1a2eb27c4bbbff11b94a62f29e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7554f25de6e4649eafd52356c5c255d0a2dc73b0dd6c9a9bcea3ee383bef17db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:08Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:08 crc kubenswrapper[4985]: I0127 08:54:08.992708 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b46c56d522e6b6c9c18b7bd9fc104d6657644d0b41fdcad15ac9fb802ca5fd0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9506e391bd8c293440c55dc71a239b25d6d6b3431ad22c5ca699baa4d09c325b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:08Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:09 crc kubenswrapper[4985]: I0127 08:54:09.028210 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:09 crc kubenswrapper[4985]: I0127 08:54:09.028248 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:09 crc kubenswrapper[4985]: I0127 08:54:09.028259 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:09 crc kubenswrapper[4985]: I0127 08:54:09.028278 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:09 crc kubenswrapper[4985]: I0127 08:54:09.028287 4985 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:09Z","lastTransitionTime":"2026-01-27T08:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:09 crc kubenswrapper[4985]: I0127 08:54:09.131478 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:09 crc kubenswrapper[4985]: I0127 08:54:09.131547 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:09 crc kubenswrapper[4985]: I0127 08:54:09.131593 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:09 crc kubenswrapper[4985]: I0127 08:54:09.131617 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:09 crc kubenswrapper[4985]: I0127 08:54:09.131631 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:09Z","lastTransitionTime":"2026-01-27T08:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:09 crc kubenswrapper[4985]: I0127 08:54:09.235492 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:09 crc kubenswrapper[4985]: I0127 08:54:09.235589 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:09 crc kubenswrapper[4985]: I0127 08:54:09.235607 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:09 crc kubenswrapper[4985]: I0127 08:54:09.235635 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:09 crc kubenswrapper[4985]: I0127 08:54:09.235654 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:09Z","lastTransitionTime":"2026-01-27T08:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:09 crc kubenswrapper[4985]: I0127 08:54:09.338718 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:09 crc kubenswrapper[4985]: I0127 08:54:09.338790 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:09 crc kubenswrapper[4985]: I0127 08:54:09.338808 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:09 crc kubenswrapper[4985]: I0127 08:54:09.338840 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:09 crc kubenswrapper[4985]: I0127 08:54:09.338862 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:09Z","lastTransitionTime":"2026-01-27T08:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:09 crc kubenswrapper[4985]: I0127 08:54:09.413092 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 22:53:41.864350403 +0000 UTC Jan 27 08:54:09 crc kubenswrapper[4985]: I0127 08:54:09.442004 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:09 crc kubenswrapper[4985]: I0127 08:54:09.442065 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:09 crc kubenswrapper[4985]: I0127 08:54:09.442084 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:09 crc kubenswrapper[4985]: I0127 08:54:09.442112 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:09 crc kubenswrapper[4985]: I0127 08:54:09.442133 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:09Z","lastTransitionTime":"2026-01-27T08:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:09 crc kubenswrapper[4985]: I0127 08:54:09.546340 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:09 crc kubenswrapper[4985]: I0127 08:54:09.546411 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:09 crc kubenswrapper[4985]: I0127 08:54:09.546426 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:09 crc kubenswrapper[4985]: I0127 08:54:09.546449 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:09 crc kubenswrapper[4985]: I0127 08:54:09.546944 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:09Z","lastTransitionTime":"2026-01-27T08:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:09 crc kubenswrapper[4985]: I0127 08:54:09.650251 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:09 crc kubenswrapper[4985]: I0127 08:54:09.650310 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:09 crc kubenswrapper[4985]: I0127 08:54:09.650320 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:09 crc kubenswrapper[4985]: I0127 08:54:09.650345 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:09 crc kubenswrapper[4985]: I0127 08:54:09.650358 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:09Z","lastTransitionTime":"2026-01-27T08:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:09 crc kubenswrapper[4985]: I0127 08:54:09.718575 4985 generic.go:334] "Generic (PLEG): container finished" podID="ea1cd1b8-a185-461d-9302-aa03be205225" containerID="bcaa036060edc039a0c9621176b80823cb9913176b1a1e6b06aa033a8e479b96" exitCode=0 Jan 27 08:54:09 crc kubenswrapper[4985]: I0127 08:54:09.718647 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rfnvj" event={"ID":"ea1cd1b8-a185-461d-9302-aa03be205225","Type":"ContainerDied","Data":"bcaa036060edc039a0c9621176b80823cb9913176b1a1e6b06aa033a8e479b96"} Jan 27 08:54:09 crc kubenswrapper[4985]: I0127 08:54:09.740319 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a4e626ecb5f5c1f869a0271e66541b76d4ab763322e2d626c4d86af64bccad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCoun
t\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:09Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:09 crc kubenswrapper[4985]: I0127 08:54:09.756483 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:09 crc kubenswrapper[4985]: I0127 08:54:09.756621 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:09 crc kubenswrapper[4985]: I0127 08:54:09.756647 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:09 crc kubenswrapper[4985]: I0127 08:54:09.756682 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:09 crc kubenswrapper[4985]: I0127 08:54:09.756713 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:09Z","lastTransitionTime":"2026-01-27T08:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:09 crc kubenswrapper[4985]: I0127 08:54:09.762058 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:09Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:09 crc kubenswrapper[4985]: I0127 08:54:09.782181 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cqdrf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ddda14a-730e-4c1f-afea-07c95221ba04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c411afefb82d30542592e4959d2495036c0408459ba2a16aff75223ad0d29287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-258cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cqdrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:09Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:09 crc kubenswrapper[4985]: I0127 08:54:09.802280 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9155a9e-2bdf-4f6f-baa7-0d7c6baac37d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f776820a00f7ca6b6867a188ee633debccb2bc22aaff8c659e452c2667933e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba179c4dde0de76f4a8dbfb8ceb98d2e0dbead5e955bd71171914fd913894412\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f89
45c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dd58dd369dc5654aa8ab2d34c77a69607b51e97ae36947b88caf6a2de7c5fbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd67389a909eaadfb147d17b98a2147bb96aea606ba994568389cb5be9ee6f93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b491f71d942363b5d2bab13a5219b0eb9b7f7617c0978bfe012946aa241a13c1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T08:53:
55Z\\\",\\\"message\\\":\\\" 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 08:53:55.866201 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 08:53:55.866247 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 08:53:55.866252 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 08:53:55.866257 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0127 08:53:55.866955 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-231041040/tls.crt::/tmp/serving-cert-231041040/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769504019\\\\\\\\\\\\\\\" (2026-01-27 08:53:39 +0000 UTC to 2026-02-26 08:53:40 +0000 UTC (now=2026-01-27 08:53:55.866923826 +0000 UTC))\\\\\\\"\\\\nI0127 08:53:55.867136 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769504030\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769504030\\\\\\\\\\\\\\\" (2026-01-27 07:53:50 +0000 UTC to 2027-01-27 07:53:50 +0000 UTC (now=2026-01-27 08:53:55.867111136 +0000 UTC))\\\\\\\"\\\\nI0127 08:53:55.867161 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 08:53:55.867183 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0127 
08:53:55.867215 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-231041040/tls.crt::/tmp/serving-cert-231041040/tls.key\\\\\\\"\\\\nI0127 08:53:55.867343 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0127 08:53:55.868835 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b09da7bae5a0f15ab2c5463373193603d1412170828b99490007cee7b067df63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b
335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:09Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:09 crc kubenswrapper[4985]: I0127 08:54:09.820992 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:09Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:09 crc kubenswrapper[4985]: I0127 08:54:09.847213 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c1c716f751dd0e139554e4bef4fc33694f4afa5482605f651326882af8d99c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c1c716f751dd0e139554e4bef4fc33694f4afa5482605f651326882af8d99c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kqdf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:09Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:09 crc kubenswrapper[4985]: I0127 08:54:09.860650 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:09 crc kubenswrapper[4985]: I0127 08:54:09.860706 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:09 crc kubenswrapper[4985]: I0127 08:54:09.860721 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:09 crc kubenswrapper[4985]: I0127 08:54:09.860745 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:09 crc kubenswrapper[4985]: I0127 08:54:09.860760 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:09Z","lastTransitionTime":"2026-01-27T08:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:09 crc kubenswrapper[4985]: I0127 08:54:09.869473 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46f88416-c54a-4ab5-a4d3-07f71bae9c33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d59e5d6d903bfaaedef392fdd94df2da1e4d965ddeb6b25d64b225725b39dd0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c207bc40519
4647411332c492e34b0c95d3321cf626a57a4a750fbcea34f54ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8ce06c7792357df5655ff0be13f13c8b06bb1a2eb27c4bbbff11b94a62f29e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7554f25de6e4649eafd52356c5c255d0a2dc73b0dd6c9a9bcea3ee383bef17db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:09Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:09 crc kubenswrapper[4985]: I0127 08:54:09.893745 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b46c56d522e6b6c9c18b7bd9fc104d6657644d0b41fdcad15ac9fb802ca5fd0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9506e391bd8c293440c55dc71a239b25d6d6b3431ad22c5ca699baa4d09c325b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:09Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:09 crc kubenswrapper[4985]: I0127 08:54:09.912041 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:09Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:09 crc kubenswrapper[4985]: I0127 08:54:09.929663 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed8d2575835b3ab3d3197bbfbc027e2de0e2aefe10c971838bf4b72f804c4582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T08:54:09Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:09 crc kubenswrapper[4985]: I0127 08:54:09.946264 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c066dd2f-48d4-4f4f-935d-0e772678e610\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1beb432862ae348b405b1a51f1a187fd469f9df0605fa9e8a39b1c9a34ae8e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vtb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d6574971ded4a24364f08396c4f0523ddfd74f76d0bb386072ffb86c812d7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vtb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lp9n5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:09Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:09 crc kubenswrapper[4985]: I0127 08:54:09.960328 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlccz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ba17902-809a-4efc-9a8c-6f9b611c2af9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00cd6621bcfc77aeb9ecc4d6894b33cd3ef1f975b3c593e0eff812ef51978f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:09Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:09 crc kubenswrapper[4985]: I0127 08:54:09.963804 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:09 crc kubenswrapper[4985]: I0127 08:54:09.963841 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:09 crc kubenswrapper[4985]: I0127 08:54:09.963853 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:09 crc kubenswrapper[4985]: I0127 08:54:09.963873 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:09 crc kubenswrapper[4985]: I0127 08:54:09.963886 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:09Z","lastTransitionTime":"2026-01-27T08:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:09 crc kubenswrapper[4985]: I0127 08:54:09.986447 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7e08bcf-6937-4593-9736-170df821dd88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1dab42ffc04840ad2205b50c568ece39850188ccd9f97d8186c7f0b86e06805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7f7c69cc8bc134cfbf3552f10eb7abd8b5df115fef6ce86a2658259a9635bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0450f271c01ac528419de0165dd9b88b8d5913062a7cd5affce7050842f91a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b33bda4f575f8f14a3439f00a39eda2ee61f7b38500f5372c17d014df1098535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c502ce0c2374fb1b79cd8dc7e7c9eba4a67f998fee012827524d775eccbe4de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d423f866a41d6c8a926fbabe4f0a69519e10a9448502c6363b35cb550ba904d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d423f866a41d6c8a926fbabe4f0a69519e10a9448502c6363b35cb550ba904d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://917f905b67f494d8d44f5a00e53ae72cb9c2b307f7e7da17af7fd20db4ed8704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://917f905b67f494d8d44f5a00e53ae72cb9c2b307f7e7da17af7fd20db4ed8704\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8692ba1b6e917a57164d549dba0fc26ad49b14b9bc8a39772daddfcd5f70a008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8692ba1b6e917a57164d549dba0fc26ad49b14b9bc8a39772daddfcd5f70a008\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-27T08:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:09Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:10 crc kubenswrapper[4985]: I0127 08:54:10.000613 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5z8px" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7997cb84-9997-4cf4-8794-2eb145a5c324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://520da40d4496e0fd47e4e091e9eba349e1bd3bf2f97898f6aac53a1b1b8925f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkwj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5z8px\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:09Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:10 crc kubenswrapper[4985]: I0127 08:54:10.018010 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rfnvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea1cd1b8-a185-461d-9302-aa03be205225\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3459a9dccba77fc189ecac1d0186520dfbd4ce6ea40d9e5b2fdcf2ddabd7875d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3459a9dccba77fc189ecac1d0186520dfbd4ce6ea40d9e5b2fdcf2ddabd7875d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d313b9f0d95603fdc152e96acf5d788d7792df3413c3b8bc25aad20bb702c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d313b9f0d95603fdc152e96acf5d788d7792df3413c3b8bc25aad20bb702c5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4035a5eff34b30f1fab897a8d8b6026437a1ab1f521f062241a79d070b94bdc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4035a5eff34b30f1fab897a8d8b6026437a1ab1f521f062241a79d070b94bdc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6da775953e78b30b1f606e26608dbedae8e08fb12eba87f65356bc439079f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6da775953e78b30b1f606e26608dbedae8e08fb12eba87f65356bc439079f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b32797fe9e64f0e4f0332fa40261fb0feb75b6dc783ff78b0aea5ccb0d4a9ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b32797fe9e64f0e4f0332fa40261fb0feb75b6dc783ff78b0aea5ccb0d4a9ee5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcaa036060edc039a0c9621176b80823cb9913176b1a1e6b06aa033a8e479b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcaa036060edc039a0c9621176b80823cb9913176b1a1e6b06aa033a8e479b96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rfnvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:10Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:10 crc kubenswrapper[4985]: I0127 08:54:10.067081 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:10 crc kubenswrapper[4985]: I0127 08:54:10.067140 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:10 crc kubenswrapper[4985]: I0127 08:54:10.067153 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:10 crc kubenswrapper[4985]: I0127 08:54:10.067178 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:10 crc kubenswrapper[4985]: I0127 08:54:10.067191 4985 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:10Z","lastTransitionTime":"2026-01-27T08:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:10 crc kubenswrapper[4985]: I0127 08:54:10.170744 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:10 crc kubenswrapper[4985]: I0127 08:54:10.170806 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:10 crc kubenswrapper[4985]: I0127 08:54:10.170820 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:10 crc kubenswrapper[4985]: I0127 08:54:10.170857 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:10 crc kubenswrapper[4985]: I0127 08:54:10.170877 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:10Z","lastTransitionTime":"2026-01-27T08:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:10 crc kubenswrapper[4985]: I0127 08:54:10.275826 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:10 crc kubenswrapper[4985]: I0127 08:54:10.275911 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:10 crc kubenswrapper[4985]: I0127 08:54:10.275932 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:10 crc kubenswrapper[4985]: I0127 08:54:10.275964 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:10 crc kubenswrapper[4985]: I0127 08:54:10.275986 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:10Z","lastTransitionTime":"2026-01-27T08:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:10 crc kubenswrapper[4985]: I0127 08:54:10.379341 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:10 crc kubenswrapper[4985]: I0127 08:54:10.379649 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:10 crc kubenswrapper[4985]: I0127 08:54:10.379667 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:10 crc kubenswrapper[4985]: I0127 08:54:10.379692 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:10 crc kubenswrapper[4985]: I0127 08:54:10.379707 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:10Z","lastTransitionTime":"2026-01-27T08:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:10 crc kubenswrapper[4985]: I0127 08:54:10.413590 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 18:43:12.563439896 +0000 UTC Jan 27 08:54:10 crc kubenswrapper[4985]: I0127 08:54:10.451400 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 08:54:10 crc kubenswrapper[4985]: I0127 08:54:10.451487 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 08:54:10 crc kubenswrapper[4985]: I0127 08:54:10.451413 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 08:54:10 crc kubenswrapper[4985]: E0127 08:54:10.451603 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 08:54:10 crc kubenswrapper[4985]: E0127 08:54:10.451754 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 08:54:10 crc kubenswrapper[4985]: E0127 08:54:10.452088 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 08:54:10 crc kubenswrapper[4985]: I0127 08:54:10.483176 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:10 crc kubenswrapper[4985]: I0127 08:54:10.483234 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:10 crc kubenswrapper[4985]: I0127 08:54:10.483251 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:10 crc kubenswrapper[4985]: I0127 08:54:10.483274 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:10 crc kubenswrapper[4985]: I0127 08:54:10.483290 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:10Z","lastTransitionTime":"2026-01-27T08:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:10 crc kubenswrapper[4985]: I0127 08:54:10.585737 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:10 crc kubenswrapper[4985]: I0127 08:54:10.585774 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:10 crc kubenswrapper[4985]: I0127 08:54:10.585783 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:10 crc kubenswrapper[4985]: I0127 08:54:10.585798 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:10 crc kubenswrapper[4985]: I0127 08:54:10.585812 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:10Z","lastTransitionTime":"2026-01-27T08:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:10 crc kubenswrapper[4985]: I0127 08:54:10.688814 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:10 crc kubenswrapper[4985]: I0127 08:54:10.688895 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:10 crc kubenswrapper[4985]: I0127 08:54:10.688908 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:10 crc kubenswrapper[4985]: I0127 08:54:10.688932 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:10 crc kubenswrapper[4985]: I0127 08:54:10.688947 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:10Z","lastTransitionTime":"2026-01-27T08:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:10 crc kubenswrapper[4985]: I0127 08:54:10.736964 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rfnvj" event={"ID":"ea1cd1b8-a185-461d-9302-aa03be205225","Type":"ContainerStarted","Data":"67af065c12addbef44849906b964718074de1f0d7a0b87a028bf989ec28f82ee"} Jan 27 08:54:10 crc kubenswrapper[4985]: I0127 08:54:10.742038 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" event={"ID":"c6239c91-d93d-4db8-ac4b-d44ddbc7c100","Type":"ContainerStarted","Data":"bb60f46389d2ed84459b7955befd2fa9f2c11e58cf46f8d5cba4d36c514cbbad"} Jan 27 08:54:10 crc kubenswrapper[4985]: I0127 08:54:10.742399 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" Jan 27 08:54:10 crc kubenswrapper[4985]: I0127 08:54:10.755818 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9155a9e-2bdf-4f6f-baa7-0d7c6baac37d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f776820a00f7ca6b6867a188ee633debccb2bc22aaff8c659e452c2667933e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba179c4dde0de76f4a8dbfb8ceb98d2e0dbead5e955bd71171914fd913894412\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dd58dd369dc5654aa8ab2d34c77a69607b51e97ae36947b88caf6a2de7c5fbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd67389a909eaadfb147d17b98a2147bb96aea606ba994568389cb5be9ee6f93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b491f71d942363b5d2bab13a5219b0eb9b7f7617c0978bfe012946aa241a13c1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T08:53:55Z\\\"
,\\\"message\\\":\\\" 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 08:53:55.866201 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 08:53:55.866247 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 08:53:55.866252 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 08:53:55.866257 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0127 08:53:55.866955 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-231041040/tls.crt::/tmp/serving-cert-231041040/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769504019\\\\\\\\\\\\\\\" (2026-01-27 08:53:39 +0000 UTC to 2026-02-26 08:53:40 +0000 UTC (now=2026-01-27 08:53:55.866923826 +0000 UTC))\\\\\\\"\\\\nI0127 08:53:55.867136 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769504030\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769504030\\\\\\\\\\\\\\\" (2026-01-27 07:53:50 +0000 UTC to 2027-01-27 07:53:50 +0000 UTC (now=2026-01-27 08:53:55.867111136 +0000 UTC))\\\\\\\"\\\\nI0127 08:53:55.867161 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 08:53:55.867183 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0127 
08:53:55.867215 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-231041040/tls.crt::/tmp/serving-cert-231041040/tls.key\\\\\\\"\\\\nI0127 08:53:55.867343 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0127 08:53:55.868835 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b09da7bae5a0f15ab2c5463373193603d1412170828b99490007cee7b067df63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b
335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:10Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:10 crc kubenswrapper[4985]: I0127 08:54:10.771241 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" Jan 27 08:54:10 crc kubenswrapper[4985]: I0127 08:54:10.774958 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:10Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:10 crc kubenswrapper[4985]: I0127 08:54:10.792010 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:10 crc kubenswrapper[4985]: I0127 08:54:10.792354 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 08:54:10 crc kubenswrapper[4985]: I0127 08:54:10.792431 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:10 crc kubenswrapper[4985]: I0127 08:54:10.792542 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:10 crc kubenswrapper[4985]: I0127 08:54:10.792630 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:10Z","lastTransitionTime":"2026-01-27T08:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:10 crc kubenswrapper[4985]: I0127 08:54:10.803423 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c1c716f751dd0e139554e4bef4fc33694f4afa5482605f651326882af8d99c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://8c1c716f751dd0e139554e4bef4fc33694f4afa5482605f651326882af8d99c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kqdf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:10Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:10 crc kubenswrapper[4985]: I0127 08:54:10.815684 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlccz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ba17902-809a-4efc-9a8c-6f9b611c2af9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00cd6621bcfc77aeb9ecc4d6894b33cd3ef1f975b3c593e0eff812ef51978f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:10Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:10 crc kubenswrapper[4985]: I0127 08:54:10.831561 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46f88416-c54a-4ab5-a4d3-07f71bae9c33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d59e5d6d903bfaaedef392fdd94df2da1e4d965ddeb6b25d64b225725b39dd0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c207bc405194647411332c492e34b0c95d3321cf626a57a4a750fbcea34f54ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8ce06c7792357df5655ff0be13f13c8b06bb1a2eb27c4bbbff11b94a62f29e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7554f25de6e4649eafd52356c5c255d0a2dc73b0dd6c9a9bcea3ee383bef17db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:10Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:10 crc kubenswrapper[4985]: I0127 08:54:10.848980 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b46c56d522e6b6c9c18b7bd9fc104d6657644d0b41fdcad15ac9fb802ca5fd0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9506e391bd8c293440c55dc71a239b25d6d6b3431ad22c5ca699baa4d09c325b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:10Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:10 crc kubenswrapper[4985]: I0127 08:54:10.863387 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:10Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:10 crc kubenswrapper[4985]: I0127 08:54:10.878819 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed8d2575835b3ab3d3197bbfbc027e2de0e2aefe10c971838bf4b72f804c4582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T08:54:10Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:10 crc kubenswrapper[4985]: I0127 08:54:10.895259 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c066dd2f-48d4-4f4f-935d-0e772678e610\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1beb432862ae348b405b1a51f1a187fd469f9df0605fa9e8a39b1c9a34ae8e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vtb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d6574971ded4a24364f08396c4f0523ddfd74f76d0bb386072ffb86c812d7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vtb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lp9n5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:10Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:10 crc kubenswrapper[4985]: I0127 08:54:10.900619 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:10 crc kubenswrapper[4985]: I0127 
08:54:10.900675 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:10 crc kubenswrapper[4985]: I0127 08:54:10.900685 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:10 crc kubenswrapper[4985]: I0127 08:54:10.900699 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:10 crc kubenswrapper[4985]: I0127 08:54:10.900709 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:10Z","lastTransitionTime":"2026-01-27T08:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:10 crc kubenswrapper[4985]: I0127 08:54:10.919061 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7e08bcf-6937-4593-9736-170df821dd88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1dab42ffc04840ad2205b50c568ece39850188ccd9f97d8186c7f0b86e06805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7f7c69cc8bc134cfbf3552f10eb7abd8b5df115fef6ce86a2658259a9635bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0450f271c01ac528419de0165dd9b88b8d5913062a7cd5affce7050842f91a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b33bda4f575f8f14a3439f00a39eda2ee61f7b38500f5372c17d014df1098535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c502ce0c2374fb1b79cd8dc7e7c9eba4a67f998fee012827524d775eccbe4de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d423f866a41d6c8a926fbabe4f0a69519e10a9448502c6363b35cb550ba904d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d423f866a41d6c8a926fbabe4f0a69519e10a9448502c6363b35cb550ba904d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T08:53:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://917f905b67f494d8d44f5a00e53ae72cb9c2b307f7e7da17af7fd20db4ed8704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://917f905b67f494d8d44f5a00e53ae72cb9c2b307f7e7da17af7fd20db4ed8704\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8692ba1b6e917a57164d549dba0fc26ad49b14b9bc8a39772daddfcd5f70a008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8692ba1b6e917a57164d549dba0fc26ad49b14b9bc8a39772daddfcd5f70a008\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:10Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:10 crc kubenswrapper[4985]: I0127 08:54:10.932471 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5z8px" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7997cb84-9997-4cf4-8794-2eb145a5c324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://520da40d4496e0fd47e4e091e9eba349e1bd3bf2f97898f6aac53a1b1b8925f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1
b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkwj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5z8px\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:10Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:10 crc kubenswrapper[4985]: I0127 08:54:10.949284 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rfnvj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea1cd1b8-a185-461d-9302-aa03be205225\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67af065c12addbef44849906b964718074de1f0d7a0b87a028bf989ec28f82ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3459a9dccba77fc189ecac1d0186520dfbd4ce6ea40d9e5b2fdcf2ddabd7875d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3459a9dccba77fc189ecac1d0186520dfbd4ce6ea40d9e5b2fdcf2ddabd7875d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d313b9f0d95603fdc152e96acf5d788d7792df3413c3b8bc25aad20bb702c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d313b9f0d95603fdc152e96acf5d788d7792df3413c3b8bc25aad20bb702c5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:04Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4035a5eff34b30f1fab897a8d8b6026437a1ab1f521f062241a79d070b94bdc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4035a5eff34b30f1fab897a8d8b6026437a1ab1f521f062241a79d070b94bdc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6da7
75953e78b30b1f606e26608dbedae8e08fb12eba87f65356bc439079f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6da775953e78b30b1f606e26608dbedae8e08fb12eba87f65356bc439079f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b32797fe9e64f0e4f0332fa40261fb0feb75b6dc783ff78b0aea5ccb0d4a9ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b32797fe9e64f0e4f0332fa40261fb0feb75b6dc783ff78b0aea5ccb0d4a9ee5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:08Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcaa036060edc039a0c9621176b80823cb9913176b1a1e6b06aa033a8e479b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcaa036060edc039a0c9621176b80823cb9913176b1a1e6b06aa033a8e479b96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rfnvj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:10Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:10 crc kubenswrapper[4985]: I0127 08:54:10.966118 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a4e626ecb5f5c1f869a0271e66541b76d4ab763322e2d626c4d86af64bccad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\
\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:10Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:10 crc kubenswrapper[4985]: I0127 08:54:10.981203 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:10Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:10 crc kubenswrapper[4985]: I0127 08:54:10.997086 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cqdrf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ddda14a-730e-4c1f-afea-07c95221ba04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c411afefb82d30542592e4959d2495036c0408459ba2a16aff75223ad0d29287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-258cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cqdrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:10Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:11 crc kubenswrapper[4985]: I0127 08:54:11.002900 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:11 crc 
kubenswrapper[4985]: I0127 08:54:11.002948 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:11 crc kubenswrapper[4985]: I0127 08:54:11.002960 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:11 crc kubenswrapper[4985]: I0127 08:54:11.002977 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:11 crc kubenswrapper[4985]: I0127 08:54:11.002991 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:11Z","lastTransitionTime":"2026-01-27T08:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:11 crc kubenswrapper[4985]: I0127 08:54:11.013051 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a4e626ecb5f5c1f869a0271e66541b76d4ab763322e2d626c4d86af64bccad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:11Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:11 crc kubenswrapper[4985]: I0127 08:54:11.026074 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:11Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:11 crc kubenswrapper[4985]: I0127 08:54:11.040634 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cqdrf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ddda14a-730e-4c1f-afea-07c95221ba04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c411afefb82d30542592e4959d2495036c0408459ba2a16aff75223ad0d29287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-258cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cqdrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:11Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:11 crc kubenswrapper[4985]: I0127 08:54:11.055479 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9155a9e-2bdf-4f6f-baa7-0d7c6baac37d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f776820a00f7ca6b6867a188ee633debccb2bc22aaff8c659e452c2667933e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba179c4dde0de76f4a8dbfb8ceb98d2e0dbead5e955bd71171914fd913894412\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f89
45c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dd58dd369dc5654aa8ab2d34c77a69607b51e97ae36947b88caf6a2de7c5fbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd67389a909eaadfb147d17b98a2147bb96aea606ba994568389cb5be9ee6f93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b491f71d942363b5d2bab13a5219b0eb9b7f7617c0978bfe012946aa241a13c1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T08:53:
55Z\\\",\\\"message\\\":\\\" 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 08:53:55.866201 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 08:53:55.866247 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 08:53:55.866252 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 08:53:55.866257 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0127 08:53:55.866955 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-231041040/tls.crt::/tmp/serving-cert-231041040/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769504019\\\\\\\\\\\\\\\" (2026-01-27 08:53:39 +0000 UTC to 2026-02-26 08:53:40 +0000 UTC (now=2026-01-27 08:53:55.866923826 +0000 UTC))\\\\\\\"\\\\nI0127 08:53:55.867136 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769504030\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769504030\\\\\\\\\\\\\\\" (2026-01-27 07:53:50 +0000 UTC to 2027-01-27 07:53:50 +0000 UTC (now=2026-01-27 08:53:55.867111136 +0000 UTC))\\\\\\\"\\\\nI0127 08:53:55.867161 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 08:53:55.867183 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0127 
08:53:55.867215 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-231041040/tls.crt::/tmp/serving-cert-231041040/tls.key\\\\\\\"\\\\nI0127 08:53:55.867343 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0127 08:53:55.868835 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b09da7bae5a0f15ab2c5463373193603d1412170828b99490007cee7b067df63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b
335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:11Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:11 crc kubenswrapper[4985]: I0127 08:54:11.069225 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:11Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:11 crc kubenswrapper[4985]: I0127 08:54:11.093606 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://740b7fd6437a51492236c929096639016d34d684d2742404a428355b3d9ee51e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e53f9d50e5099f9c55a0bf8a63c31bf7687c7b9df334251189af1c4c02df5dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b176f2daca0d26fc9e3d30a8547b5526c90221ce2932c01c876988e8606b4dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dbb8096184a88c573060b14b7b5cac9a5abdc5af97ad7efdc50ba216e77522c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f164f612d1435a7f2a6677cb810bc836a44faa77952909e4290e31806ae631ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bfc598056438a398f1ea20e047af950aa776a7e43bdb694728c1c138dff6440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb60f46389d2ed84459b7955befd2fa9f2c11e58cf46f8d5cba4d36c514cbbad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a1f1e90a882fa697c344cb87db77d2f0b02b33920f6555ca549ad51b1fb5c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c1c716f751dd0e139554e4bef4fc33694f4afa5482605f651326882af8d99c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c1c716f751dd0e139554e4bef4fc33694f4afa5482605f651326882af8d99c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kqdf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:11Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:11 crc kubenswrapper[4985]: I0127 08:54:11.105996 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46f88416-c54a-4ab5-a4d3-07f71bae9c33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d59e5d6d903bfaaedef392fdd94df2da1e4d965ddeb6b25d64b225725b39dd0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026
b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c207bc405194647411332c492e34b0c95d3321cf626a57a4a750fbcea34f54ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8ce06c7792357df5655ff0be13f13c8b06bb1a2eb27c4bbbff11b94a62f29e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"
name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7554f25de6e4649eafd52356c5c255d0a2dc73b0dd6c9a9bcea3ee383bef17db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:11Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:11 crc kubenswrapper[4985]: I0127 08:54:11.106264 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:11 crc kubenswrapper[4985]: I0127 08:54:11.106342 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 08:54:11 crc kubenswrapper[4985]: I0127 08:54:11.106356 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:11 crc kubenswrapper[4985]: I0127 08:54:11.106372 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:11 crc kubenswrapper[4985]: I0127 08:54:11.106382 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:11Z","lastTransitionTime":"2026-01-27T08:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:11 crc kubenswrapper[4985]: I0127 08:54:11.118869 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b46c56d522e6b6c9c18b7bd9fc104d6657644d0b41fdcad15ac9fb802ca5fd0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9506e391bd8c293440c55dc71a239b25d6d6b3431ad22c5ca699baa4d09c325b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:11Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:11 crc kubenswrapper[4985]: I0127 08:54:11.133830 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:11Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:11 crc kubenswrapper[4985]: I0127 08:54:11.146994 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed8d2575835b3ab3d3197bbfbc027e2de0e2aefe10c971838bf4b72f804c4582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T08:54:11Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:11 crc kubenswrapper[4985]: I0127 08:54:11.161292 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c066dd2f-48d4-4f4f-935d-0e772678e610\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1beb432862ae348b405b1a51f1a187fd469f9df0605fa9e8a39b1c9a34ae8e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vtb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d6574971ded4a24364f08396c4f0523ddfd74f76d0bb386072ffb86c812d7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vtb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lp9n5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:11Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:11 crc kubenswrapper[4985]: I0127 08:54:11.172547 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlccz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ba17902-809a-4efc-9a8c-6f9b611c2af9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00cd6621bcfc77aeb9ecc4d6894b33cd3ef1f975b3c593e0eff812ef51978f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:11Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:11 crc kubenswrapper[4985]: I0127 08:54:11.194889 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7e08bcf-6937-4593-9736-170df821dd88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1dab42ffc04840ad2205b50c568ece39850188ccd9f97d8186c7f0b86e06805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7f7c69cc8bc134cfbf3552f10eb7abd8b5df115fef6ce86a2658259a9635bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0450f271c01ac528419de0165dd9b88b8d5913062a7cd5affce7050842f91a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08
:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b33bda4f575f8f14a3439f00a39eda2ee61f7b38500f5372c17d014df1098535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c502ce0c2374fb1b79cd8dc7e7c9eba4a67f998fee012827524d775eccbe4de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d423f866a41d6c8a926fbabe4f0a69519e10a9448502c6363b35cb550ba904d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d423f866a41d6c8a926fbabe4f0a69519e10a9448502c6363b35cb550ba904d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://917f905b67f494d8d44f5a00e53ae72cb9c2b307f7e7da17af7fd20db4ed8704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://917f905b67f494d8d44f5a00e53ae72cb9c2b307f7e7da17af7fd20db4ed8704\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8692ba1b6e917a57164d549dba0fc26ad49b14b9bc8a39772daddfcd5f70a008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8692ba1b6e917a57164d549dba0fc26ad49b14b9bc8a39772daddfcd5f70a008\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:11Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:11 crc kubenswrapper[4985]: I0127 08:54:11.205966 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5z8px" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7997cb84-9997-4cf4-8794-2eb145a5c324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://520da40d4496e0fd47e4e091e9eba349e1bd3bf2f97898f6aac53a1b1b8925f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkwj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5z8px\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:11Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:11 crc kubenswrapper[4985]: I0127 08:54:11.209067 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:11 crc kubenswrapper[4985]: I0127 08:54:11.209103 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:11 crc kubenswrapper[4985]: I0127 08:54:11.209112 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:11 crc kubenswrapper[4985]: I0127 08:54:11.209128 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:11 crc kubenswrapper[4985]: I0127 08:54:11.209142 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:11Z","lastTransitionTime":"2026-01-27T08:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:11 crc kubenswrapper[4985]: I0127 08:54:11.230108 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rfnvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea1cd1b8-a185-461d-9302-aa03be205225\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67af065c12addbef44849906b964718074de1f0d7a0b87a028bf989ec28f82ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3459a9dccba77fc189ecac1d0186520dfbd4ce6ea40d9e5b2fdcf2ddabd7875d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3459a9dccba77fc189ecac1d0186520dfbd4ce6ea40d9e5b2fdcf2ddabd7875d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d313b9f0d95603fdc152e96acf5d788d7792df3413c3b8bc25aad20bb702c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://6d313b9f0d95603fdc152e96acf5d788d7792df3413c3b8bc25aad20bb702c5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4035a5eff34b30f1fab897a8d8b6026437a1ab1f521f062241a79d070b94bdc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4035a5eff34b30f1fab897a8d8b6026437a1ab1f521f062241a79d070b94bdc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6da775953e78b30b1f606e26608dbedae8e08fb12eba87f65356bc439079f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6da775953e78b30b1f606e26608dbedae8e08fb12eba87f65356bc439079f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b32797fe9e64f0e4f0332fa40261fb0feb75b6dc783ff78b0aea5ccb0d4a9ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b32797fe9e64f0e4f0332fa40261fb0feb75b6dc783ff78b0aea5ccb0d4a9ee5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcaa036060edc039a0c9621176b80823cb9913176b1a1e6b06aa033a8e479b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcaa036060edc039a0c9621176b80823cb9913176b1a1e6b06aa033a8e479b96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rfnvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:11Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:11 crc kubenswrapper[4985]: I0127 08:54:11.312426 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:11 crc kubenswrapper[4985]: I0127 08:54:11.312491 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:11 crc kubenswrapper[4985]: I0127 08:54:11.312501 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:11 crc kubenswrapper[4985]: I0127 08:54:11.312540 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:11 crc kubenswrapper[4985]: I0127 08:54:11.312554 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:11Z","lastTransitionTime":"2026-01-27T08:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:11 crc kubenswrapper[4985]: I0127 08:54:11.414379 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 11:17:03.49954099 +0000 UTC Jan 27 08:54:11 crc kubenswrapper[4985]: I0127 08:54:11.415720 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:11 crc kubenswrapper[4985]: I0127 08:54:11.415791 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:11 crc kubenswrapper[4985]: I0127 08:54:11.415814 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:11 crc kubenswrapper[4985]: I0127 08:54:11.415847 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:11 crc kubenswrapper[4985]: I0127 08:54:11.415871 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:11Z","lastTransitionTime":"2026-01-27T08:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:11 crc kubenswrapper[4985]: I0127 08:54:11.519426 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:11 crc kubenswrapper[4985]: I0127 08:54:11.519493 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:11 crc kubenswrapper[4985]: I0127 08:54:11.519536 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:11 crc kubenswrapper[4985]: I0127 08:54:11.519562 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:11 crc kubenswrapper[4985]: I0127 08:54:11.519580 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:11Z","lastTransitionTime":"2026-01-27T08:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:11 crc kubenswrapper[4985]: I0127 08:54:11.622789 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:11 crc kubenswrapper[4985]: I0127 08:54:11.622851 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:11 crc kubenswrapper[4985]: I0127 08:54:11.622865 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:11 crc kubenswrapper[4985]: I0127 08:54:11.622887 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:11 crc kubenswrapper[4985]: I0127 08:54:11.622902 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:11Z","lastTransitionTime":"2026-01-27T08:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:11 crc kubenswrapper[4985]: I0127 08:54:11.725557 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:11 crc kubenswrapper[4985]: I0127 08:54:11.725610 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:11 crc kubenswrapper[4985]: I0127 08:54:11.725619 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:11 crc kubenswrapper[4985]: I0127 08:54:11.725960 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:11 crc kubenswrapper[4985]: I0127 08:54:11.725991 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:11Z","lastTransitionTime":"2026-01-27T08:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:11 crc kubenswrapper[4985]: I0127 08:54:11.746048 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" Jan 27 08:54:11 crc kubenswrapper[4985]: I0127 08:54:11.746101 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" Jan 27 08:54:11 crc kubenswrapper[4985]: I0127 08:54:11.776781 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" Jan 27 08:54:11 crc kubenswrapper[4985]: I0127 08:54:11.800610 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:11Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:11 crc kubenswrapper[4985]: I0127 08:54:11.827610 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://740b7fd6437a51492236c929096639016d34d684d2742404a428355b3d9ee51e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e53f9d50e5099f9c55a0bf8a63c31bf7687c7b9df334251189af1c4c02df5dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b176f2daca0d26fc9e3d30a8547b5526c90221ce2932c01c876988e8606b4dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dbb8096184a88c573060b14b7b5cac9a5abdc5af97ad7efdc50ba216e77522c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f164f612d1435a7f2a6677cb810bc836a44faa77952909e4290e31806ae631ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bfc598056438a398f1ea20e047af950aa776a7e43bdb694728c1c138dff6440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb60f46389d2ed84459b7955befd2fa9f2c11e58cf46f8d5cba4d36c514cbbad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mou
ntPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a1f1e90a882fa697c344cb87db77d2f0b02b33920f6555ca549ad51b1fb5c8a\\\",\
\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c1c716f751dd0e139554e4bef4fc33694f4afa5482605f651326882af8d99c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c1c716f751dd0e139554e4bef4fc33694f4afa5482605f651326882af8d99c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kqdf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:11Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:11 crc kubenswrapper[4985]: I0127 08:54:11.829948 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:11 crc kubenswrapper[4985]: I0127 08:54:11.830036 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:11 crc kubenswrapper[4985]: I0127 08:54:11.830074 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:11 crc kubenswrapper[4985]: I0127 08:54:11.830117 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:11 crc kubenswrapper[4985]: I0127 08:54:11.830138 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:11Z","lastTransitionTime":"2026-01-27T08:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:11 crc kubenswrapper[4985]: I0127 08:54:11.846858 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9155a9e-2bdf-4f6f-baa7-0d7c6baac37d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f776820a00f7ca6b6867a188ee633debccb2bc22aaff8c659e452c2667933e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba179c4dde0de76f4a8dbfb8ceb98d2e0dbead5e955bd71171914fd913894412\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dd58dd369dc5654aa8ab2d34c77a69607b51e97ae36947b88caf6a2de7c5fbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd67389a909eaadfb147d17b98a2147bb96aea606ba994568389cb5be9ee6f93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b491f71d942363b5d2bab13a5219b0eb9b7f7617c0978bfe012946aa241a13c1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"message\\\":\\\" 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 08:53:55.866201 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 08:53:55.866247 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 08:53:55.866252 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 08:53:55.866257 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0127 08:53:55.866955 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-231041040/tls.crt::/tmp/serving-cert-231041040/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769504019\\\\\\\\\\\\\\\" (2026-01-27 08:53:39 +0000 UTC to 2026-02-26 08:53:40 +0000 UTC (now=2026-01-27 08:53:55.866923826 +0000 UTC))\\\\\\\"\\\\nI0127 08:53:55.867136 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769504030\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769504030\\\\\\\\\\\\\\\" (2026-01-27 07:53:50 +0000 UTC to 2027-01-27 07:53:50 +0000 UTC (now=2026-01-27 
08:53:55.867111136 +0000 UTC))\\\\\\\"\\\\nI0127 08:53:55.867161 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 08:53:55.867183 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0127 08:53:55.867215 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-231041040/tls.crt::/tmp/serving-cert-231041040/tls.key\\\\\\\"\\\\nI0127 08:53:55.867343 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0127 08:53:55.868835 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b09da7bae5a0f15ab2c5463373193603d1412170828b99490007cee7b067df63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739\\\",\\\"image\
\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:11Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:11 crc kubenswrapper[4985]: I0127 08:54:11.864592 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b46c56d522e6b6c9c18b7bd9fc104d6657644d0b41fdcad15ac9fb802ca5fd0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9506e391bd8c293440c55dc71a239b25d6d6b3431ad22c5ca699baa4d09c325b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:11Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:11 crc kubenswrapper[4985]: I0127 08:54:11.885068 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:11Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:11 crc kubenswrapper[4985]: I0127 08:54:11.902641 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed8d2575835b3ab3d3197bbfbc027e2de0e2aefe10c971838bf4b72f804c4582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T08:54:11Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:11 crc kubenswrapper[4985]: I0127 08:54:11.920247 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c066dd2f-48d4-4f4f-935d-0e772678e610\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1beb432862ae348b405b1a51f1a187fd469f9df0605fa9e8a39b1c9a34ae8e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vtb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d6574971ded4a24364f08396c4f0523ddfd74f76d0bb386072ffb86c812d7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vtb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lp9n5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:11Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:11 crc kubenswrapper[4985]: I0127 08:54:11.933876 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:11 crc kubenswrapper[4985]: I0127 
08:54:11.933922 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:11 crc kubenswrapper[4985]: I0127 08:54:11.933933 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:11 crc kubenswrapper[4985]: I0127 08:54:11.933951 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:11 crc kubenswrapper[4985]: I0127 08:54:11.933963 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:11Z","lastTransitionTime":"2026-01-27T08:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:11 crc kubenswrapper[4985]: I0127 08:54:11.946106 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlccz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ba17902-809a-4efc-9a8c-6f9b611c2af9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00cd6621bcfc77aeb9ecc4d6894b33cd3ef1f975b3c593e0eff812ef51978f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:11Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:11 crc kubenswrapper[4985]: I0127 08:54:11.965548 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46f88416-c54a-4ab5-a4d3-07f71bae9c33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d59e5d6d903bfaaedef392fdd94df2da1e4d965ddeb6b25d64b225725b39dd0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c207bc405194647411332c492e34b0c95d3321cf626a57a4a750fbcea34f54ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8ce06c7792357df5655ff0be13f13c8b06bb1a2eb27c4bbbff11b94a62f29e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7554f25de6e4649eafd52356c5c255d0a2dc73b0dd6c9a9bcea3ee383bef17db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:11Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:11 crc kubenswrapper[4985]: I0127 08:54:11.985286 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5z8px" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7997cb84-9997-4cf4-8794-2eb145a5c324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://520da40d4496e0fd47e4e091e9eba349e1bd3bf2f97898f6aac53a1b1b8925f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkwj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5z8px\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:11Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.002153 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rfnvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea1cd1b8-a185-461d-9302-aa03be205225\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67af065c12addbef44849906b964718074de1f0d7a0b87a028bf989ec28f82ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3459a9dccba77fc189ecac1d0186520dfbd4ce6ea40d9e5b2fdcf2ddabd7875d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3459a9dccba77fc189ecac1d0186520dfbd4ce6ea40d9e5b2fdcf2ddabd7875d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d313b9f0d95603fdc152e96acf5d788d7792df3413c3b8bc25aad20bb702c5d\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d313b9f0d95603fdc152e96acf5d788d7792df3413c3b8bc25aad20bb702c5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4035a5eff34b30f1fab897a8d8b6026437a1ab1f521f062241a79d070b94bdc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4035a5eff34b30f1fab897a8d8b6026437a1ab1f521f062241a79d070b94bdc1\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6da775953e78b30b1f606e26608dbedae8e08fb12eba87f65356bc439079f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6da775953e78b30b1f606e26608dbedae8e08fb12eba87f65356bc439079f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b32797fe9e64f0e4f0332fa40261fb0f
eb75b6dc783ff78b0aea5ccb0d4a9ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b32797fe9e64f0e4f0332fa40261fb0feb75b6dc783ff78b0aea5ccb0d4a9ee5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcaa036060edc039a0c9621176b80823cb9913176b1a1e6b06aa033a8e479b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcaa036060edc039a0c9621176b80823cb9913176b1a1e6b06aa033a8e479b96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-01-27T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rfnvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:11Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.034720 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7e08bcf-6937-4593-9736-170df821dd88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1dab42ffc04840ad2205b50c568ece39850188ccd9f97d8186c7f0b86e06805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7f7c69cc8bc134cfbf3552f10eb7abd8b5df115fef6ce86a2658259a9635bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0450f271c01ac528419de0165dd9b88b8d5913062a7cd5affce7050842f91a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b33bda4f575f8f14a3439f00a39eda2ee61f7b38500f5372c17d014df1098535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c502ce0c2374fb1b79cd8dc7e7c9eba4a67f998fee012827524d775eccbe4de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d423f866a41d6c8a926fbabe4f0a69519e10a9448502c6363b35cb550ba904d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d423f866a41d6c8a926fbabe4f0a69519e10a9448502c6363b35cb550ba904d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T08:53:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://917f905b67f494d8d44f5a00e53ae72cb9c2b307f7e7da17af7fd20db4ed8704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://917f905b67f494d8d44f5a00e53ae72cb9c2b307f7e7da17af7fd20db4ed8704\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8692ba1b6e917a57164d549dba0fc26ad49b14b9bc8a39772daddfcd5f70a008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8692ba1b6e917a57164d549dba0fc26ad49b14b9bc8a39772daddfcd5f70a008\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:12Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.036726 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.036778 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.036790 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.036810 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.036825 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:12Z","lastTransitionTime":"2026-01-27T08:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.054563 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a4e626ecb5f5c1f869a0271e66541b76d4ab763322e2d626c4d86af64bccad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:12Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.075989 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:12Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.091070 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cqdrf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ddda14a-730e-4c1f-afea-07c95221ba04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c411afefb82d30542592e4959d2495036c0408459ba2a16aff75223ad0d29287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-258cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cqdrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:12Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.140462 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:12 crc 
kubenswrapper[4985]: I0127 08:54:12.140565 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.140579 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.140601 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.140615 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:12Z","lastTransitionTime":"2026-01-27T08:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.243684 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.243725 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.243736 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.243753 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.243764 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:12Z","lastTransitionTime":"2026-01-27T08:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.265094 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.265229 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.265258 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 08:54:12 crc kubenswrapper[4985]: E0127 08:54:12.265295 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 08:54:28.265263754 +0000 UTC m=+52.556358595 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 08:54:12 crc kubenswrapper[4985]: E0127 08:54:12.265342 4985 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 08:54:12 crc kubenswrapper[4985]: E0127 08:54:12.265406 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 08:54:28.265388967 +0000 UTC m=+52.556483808 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 08:54:12 crc kubenswrapper[4985]: E0127 08:54:12.265754 4985 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 08:54:12 crc kubenswrapper[4985]: E0127 08:54:12.265802 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-27 08:54:28.265795381 +0000 UTC m=+52.556890212 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.347274 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.347843 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.347858 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.347884 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.347895 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:12Z","lastTransitionTime":"2026-01-27T08:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.366899 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.366952 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 08:54:12 crc kubenswrapper[4985]: E0127 08:54:12.367111 4985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 08:54:12 crc kubenswrapper[4985]: E0127 08:54:12.367129 4985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 08:54:12 crc kubenswrapper[4985]: E0127 08:54:12.367143 4985 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 08:54:12 crc kubenswrapper[4985]: E0127 08:54:12.367203 4985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 08:54:12 crc 
kubenswrapper[4985]: E0127 08:54:12.367274 4985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 08:54:12 crc kubenswrapper[4985]: E0127 08:54:12.367289 4985 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 08:54:12 crc kubenswrapper[4985]: E0127 08:54:12.367244 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 08:54:28.367198327 +0000 UTC m=+52.658293168 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 08:54:12 crc kubenswrapper[4985]: E0127 08:54:12.367358 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 08:54:28.367336792 +0000 UTC m=+52.658431633 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.415333 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 04:41:18.771450387 +0000 UTC Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.451067 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.451119 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.451130 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.451125 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.451194 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.451149 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:12 crc kubenswrapper[4985]: E0127 08:54:12.451268 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.451347 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:12Z","lastTransitionTime":"2026-01-27T08:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.451601 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 08:54:12 crc kubenswrapper[4985]: E0127 08:54:12.451771 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 08:54:12 crc kubenswrapper[4985]: E0127 08:54:12.451859 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.534414 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.534463 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.534472 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.534491 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.534503 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:12Z","lastTransitionTime":"2026-01-27T08:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:12 crc kubenswrapper[4985]: E0127 08:54:12.548088 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"095ded87-0bbb-47a5-b76f-f5bb300a00ab\\\",\\\"systemUUID\\\":\\\"66a0621c-9cbd-4c42-8f6a-941d6ebd53fb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:12Z is after 2025-08-24T17:21:41Z"
Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.552594 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.552640 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.552651 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.552665 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.552679 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:12Z","lastTransitionTime":"2026-01-27T08:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:12 crc kubenswrapper[4985]: E0127 08:54:12.569873 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"095ded87-0bbb-47a5-b76f-f5bb300a00ab\\\",\\\"systemUUID\\\":\\\"66a0621c-9cbd-4c42-8f6a-941d6ebd53fb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:12Z is after 2025-08-24T17:21:41Z"
Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.574261 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.574304 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.574313 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.574333 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.574345 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:12Z","lastTransitionTime":"2026-01-27T08:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:12 crc kubenswrapper[4985]: E0127 08:54:12.586794 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"095ded87-0bbb-47a5-b76f-f5bb300a00ab\\\",\\\"systemUUID\\\":\\\"66a0621c-9cbd-4c42-8f6a-941d6ebd53fb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:12Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.590601 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.590652 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.590661 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.590676 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.590688 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:12Z","lastTransitionTime":"2026-01-27T08:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:12 crc kubenswrapper[4985]: E0127 08:54:12.604926 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"095ded87-0bbb-47a5-b76f-f5bb300a00ab\\\",\\\"systemUUID\\\":\\\"66a0621c-9cbd-4c42-8f6a-941d6ebd53fb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:12Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.609543 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.609585 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.609595 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.609612 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.609623 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:12Z","lastTransitionTime":"2026-01-27T08:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:12 crc kubenswrapper[4985]: E0127 08:54:12.622799 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"095ded87-0bbb-47a5-b76f-f5bb300a00ab\\\",\\\"systemUUID\\\":\\\"66a0621c-9cbd-4c42-8f6a-941d6ebd53fb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:12Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:12 crc kubenswrapper[4985]: E0127 08:54:12.622956 4985 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.624576 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.624614 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.624647 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.624668 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.624681 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:12Z","lastTransitionTime":"2026-01-27T08:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.728487 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.728579 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.728596 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.728620 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.728636 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:12Z","lastTransitionTime":"2026-01-27T08:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.750897 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kqdf4_c6239c91-d93d-4db8-ac4b-d44ddbc7c100/ovnkube-controller/0.log" Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.754227 4985 generic.go:334] "Generic (PLEG): container finished" podID="c6239c91-d93d-4db8-ac4b-d44ddbc7c100" containerID="bb60f46389d2ed84459b7955befd2fa9f2c11e58cf46f8d5cba4d36c514cbbad" exitCode=1 Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.754285 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" event={"ID":"c6239c91-d93d-4db8-ac4b-d44ddbc7c100","Type":"ContainerDied","Data":"bb60f46389d2ed84459b7955befd2fa9f2c11e58cf46f8d5cba4d36c514cbbad"} Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.755220 4985 scope.go:117] "RemoveContainer" containerID="bb60f46389d2ed84459b7955befd2fa9f2c11e58cf46f8d5cba4d36c514cbbad" Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.779138 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"message\\\":\\\"containers with 
unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://740b7fd6437a51492236c929096639016d34d684d2742404a428355b3d9ee51e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e53f9d50e5099f9c55a0bf8a63c31bf7687c7b9df334251189af1c4c02df5dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b176f2daca0d26fc9e3d30a8547b5526c90221ce2932c01c876988e8606b4dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dbb8096184a88c573060b14b7b5cac9a5abdc5af97ad7efdc50ba216e77522c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd
47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f164f612d1435a7f2a6677cb810bc836a44faa77952909e4290e31806ae631ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bfc598056438a398f1ea20e047af950aa776a7e43bdb694728c1c138dff6440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb60f46389d2ed84459b7955befd2fa9f2c11e58cf46f8d5cba4d36c514cbbad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb60f46389d2ed84459b7955befd2fa9f2c11e58cf46f8d5cba4d36c514cbbad\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\
\\"2026-01-27T08:54:12Z\\\",\\\"message\\\":\\\"minpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 08:54:12.627307 6300 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 08:54:12.627581 6300 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 08:54:12.627610 6300 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 08:54:12.627741 6300 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 08:54:12.627937 6300 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0127 08:54:12.628200 6300 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0127 08:54:12.628793 6300 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kub
e-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a1f1e90a882fa697c344cb87db77d2f0b02b33920f6555ca549ad51b1fb5c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c1c716f751dd0e139554e4bef4fc33694f4afa5482605f651326882af8d99c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c1c716f751dd0e139554e4bef4fc33694f4afa5482605f651326882af8d99c8\\\
",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kqdf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:12Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.796999 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9155a9e-2bdf-4f6f-baa7-0d7c6baac37d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f776820a00f7ca6b6867a188ee633debccb2bc22aaff8c659e452c2667933e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba179c4dde0de76f4a8dbfb8ceb98d2e0dbead5e955bd71171914fd913894412\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dd58dd369dc5654aa8ab2d34c77a69607b51e97ae36947b88caf6a2de7c5fbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd67389a909eaadfb147d17b98a2147bb96aea606ba994568389cb5be9ee6f93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b491f71d942363b5d2bab13a5219b0eb9b7f7617c0978bfe012946aa241a13c1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T08:53:55Z\\\"
,\\\"message\\\":\\\" 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 08:53:55.866201 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 08:53:55.866247 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 08:53:55.866252 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 08:53:55.866257 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0127 08:53:55.866955 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-231041040/tls.crt::/tmp/serving-cert-231041040/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769504019\\\\\\\\\\\\\\\" (2026-01-27 08:53:39 +0000 UTC to 2026-02-26 08:53:40 +0000 UTC (now=2026-01-27 08:53:55.866923826 +0000 UTC))\\\\\\\"\\\\nI0127 08:53:55.867136 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769504030\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769504030\\\\\\\\\\\\\\\" (2026-01-27 07:53:50 +0000 UTC to 2027-01-27 07:53:50 +0000 UTC (now=2026-01-27 08:53:55.867111136 +0000 UTC))\\\\\\\"\\\\nI0127 08:53:55.867161 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 08:53:55.867183 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0127 
08:53:55.867215 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-231041040/tls.crt::/tmp/serving-cert-231041040/tls.key\\\\\\\"\\\\nI0127 08:53:55.867343 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0127 08:53:55.868835 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b09da7bae5a0f15ab2c5463373193603d1412170828b99490007cee7b067df63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b
335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:12Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.814689 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:12Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.830905 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.830959 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.830971 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.830989 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.831002 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:12Z","lastTransitionTime":"2026-01-27T08:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.832797 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:12Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.849417 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed8d2575835b3ab3d3197bbfbc027e2de0e2aefe10c971838bf4b72f804c4582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T08:54:12Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.863243 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c066dd2f-48d4-4f4f-935d-0e772678e610\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1beb432862ae348b405b1a51f1a187fd469f9df0605fa9e8a39b1c9a34ae8e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vtb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d6574971ded4a24364f08396c4f0523ddfd74f76d0bb386072ffb86c812d7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vtb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lp9n5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:12Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.876914 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlccz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ba17902-809a-4efc-9a8c-6f9b611c2af9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00cd6621bcfc77aeb9ecc4d6894b33cd3ef1f975b3c593e0eff812ef51978f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:12Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.891256 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46f88416-c54a-4ab5-a4d3-07f71bae9c33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d59e5d6d903bfaaedef392fdd94df2da1e4d965ddeb6b25d64b225725b39dd0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c207bc405194647411332c492e34b0c95d3321cf626a57a4a750fbcea34f54ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8ce06c7792357df5655ff0be13f13c8b06bb1a2eb27c4bbbff11b94a62f29e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7554f25de6e4649eafd52356c5c255d0a2dc73b0dd6c9a9bcea3ee383bef17db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:12Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.906863 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b46c56d522e6b6c9c18b7bd9fc104d6657644d0b41fdcad15ac9fb802ca5fd0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9506e391bd8c293440c55dc71a239b25d6d6b3431ad22c5ca699baa4d09c325b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:12Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.921798 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5z8px" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7997cb84-9997-4cf4-8794-2eb145a5c324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://520da40d4496e0fd47e4e091e9eba349e1bd3bf2f97898f6aac53a1b1b8925f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkwj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5z8px\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:12Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.933458 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.933495 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.933508 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.933540 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.933554 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:12Z","lastTransitionTime":"2026-01-27T08:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.940754 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rfnvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea1cd1b8-a185-461d-9302-aa03be205225\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67af065c12addbef44849906b964718074de1f0d7a0b87a028bf989ec28f82ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3459a9dccba77fc189ecac1d0186520dfbd4ce6ea40d9e5b2fdcf2ddabd7875d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3459a9dccba77fc189ecac1d0186520dfbd4ce6ea40d9e5b2fdcf2ddabd7875d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d313b9f0d95603fdc152e96acf5d788d7792df3413c3b8bc25aad20bb702c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://6d313b9f0d95603fdc152e96acf5d788d7792df3413c3b8bc25aad20bb702c5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4035a5eff34b30f1fab897a8d8b6026437a1ab1f521f062241a79d070b94bdc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4035a5eff34b30f1fab897a8d8b6026437a1ab1f521f062241a79d070b94bdc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6da775953e78b30b1f606e26608dbedae8e08fb12eba87f65356bc439079f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6da775953e78b30b1f606e26608dbedae8e08fb12eba87f65356bc439079f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b32797fe9e64f0e4f0332fa40261fb0feb75b6dc783ff78b0aea5ccb0d4a9ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b32797fe9e64f0e4f0332fa40261fb0feb75b6dc783ff78b0aea5ccb0d4a9ee5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcaa036060edc039a0c9621176b80823cb9913176b1a1e6b06aa033a8e479b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcaa036060edc039a0c9621176b80823cb9913176b1a1e6b06aa033a8e479b96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rfnvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:12Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.963440 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7e08bcf-6937-4593-9736-170df821dd88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1dab42ffc04840ad2205b50c568ece39850188ccd9f97d8186c7f0b86e06805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7f7c69cc8bc134cfbf3552f10eb7abd8b5df115fef6ce86a2658259a9635bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0450f271c01ac528419de0165dd9b88b8d5913062a7cd5affce7050842f91a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b33bda4f575f8f14a3439f00a39eda2ee61f7b38500f5372c17d014df1098535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c502ce0c2374fb1b79cd8dc7e7c9eba4a67f998fee012827524d775eccbe4de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.1
26.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d423f866a41d6c8a926fbabe4f0a69519e10a9448502c6363b35cb550ba904d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d423f866a41d6c8a926fbabe4f0a69519e10a9448502c6363b35cb550ba904d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://917f905b67f494d8d44f5a00e53ae72cb9c2b307f7e7da17af7fd20db4ed8704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://917f905b67f494d8d44f5a00e53ae72cb9c2b307f7e7da17af7fd20db4ed8704\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8692ba1b6e917a57164d549dba0fc26ad49b14b9bc8a39772daddfcd5f70a008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8692ba1b6e917a57164d549dba0fc26ad49b14b9bc8a39772daddfcd5f70a008\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:12Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.977366 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:12Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:12 crc kubenswrapper[4985]: I0127 08:54:12.993215 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cqdrf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ddda14a-730e-4c1f-afea-07c95221ba04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c411afefb82d30542592e4959d2495036c0408459ba2a16aff75223ad0d29287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-258cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cqdrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:12Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:13 crc kubenswrapper[4985]: I0127 08:54:13.009040 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a4e626ecb5f5c1f869a0271e66541b76d4ab763322e2d626c4d86af64bccad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:13Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:13 crc kubenswrapper[4985]: I0127 08:54:13.037011 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:13 crc kubenswrapper[4985]: I0127 08:54:13.037078 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:13 crc kubenswrapper[4985]: I0127 08:54:13.037091 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:13 crc kubenswrapper[4985]: I0127 08:54:13.037113 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:13 crc kubenswrapper[4985]: I0127 08:54:13.037126 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:13Z","lastTransitionTime":"2026-01-27T08:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:13 crc kubenswrapper[4985]: I0127 08:54:13.140659 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:13 crc kubenswrapper[4985]: I0127 08:54:13.140723 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:13 crc kubenswrapper[4985]: I0127 08:54:13.140743 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:13 crc kubenswrapper[4985]: I0127 08:54:13.140769 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:13 crc kubenswrapper[4985]: I0127 08:54:13.140787 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:13Z","lastTransitionTime":"2026-01-27T08:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:13 crc kubenswrapper[4985]: I0127 08:54:13.243900 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:13 crc kubenswrapper[4985]: I0127 08:54:13.243976 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:13 crc kubenswrapper[4985]: I0127 08:54:13.243989 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:13 crc kubenswrapper[4985]: I0127 08:54:13.244007 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:13 crc kubenswrapper[4985]: I0127 08:54:13.244020 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:13Z","lastTransitionTime":"2026-01-27T08:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:13 crc kubenswrapper[4985]: I0127 08:54:13.346843 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:13 crc kubenswrapper[4985]: I0127 08:54:13.346885 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:13 crc kubenswrapper[4985]: I0127 08:54:13.346897 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:13 crc kubenswrapper[4985]: I0127 08:54:13.346914 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:13 crc kubenswrapper[4985]: I0127 08:54:13.346925 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:13Z","lastTransitionTime":"2026-01-27T08:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:13 crc kubenswrapper[4985]: I0127 08:54:13.416438 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 21:43:19.428828379 +0000 UTC Jan 27 08:54:13 crc kubenswrapper[4985]: I0127 08:54:13.449725 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:13 crc kubenswrapper[4985]: I0127 08:54:13.449817 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:13 crc kubenswrapper[4985]: I0127 08:54:13.449833 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:13 crc kubenswrapper[4985]: I0127 08:54:13.449860 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:13 crc kubenswrapper[4985]: I0127 08:54:13.449878 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:13Z","lastTransitionTime":"2026-01-27T08:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:13 crc kubenswrapper[4985]: I0127 08:54:13.553032 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:13 crc kubenswrapper[4985]: I0127 08:54:13.553073 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:13 crc kubenswrapper[4985]: I0127 08:54:13.553084 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:13 crc kubenswrapper[4985]: I0127 08:54:13.553101 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:13 crc kubenswrapper[4985]: I0127 08:54:13.553115 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:13Z","lastTransitionTime":"2026-01-27T08:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:13 crc kubenswrapper[4985]: I0127 08:54:13.655598 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:13 crc kubenswrapper[4985]: I0127 08:54:13.655649 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:13 crc kubenswrapper[4985]: I0127 08:54:13.655660 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:13 crc kubenswrapper[4985]: I0127 08:54:13.655678 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:13 crc kubenswrapper[4985]: I0127 08:54:13.655688 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:13Z","lastTransitionTime":"2026-01-27T08:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:13 crc kubenswrapper[4985]: I0127 08:54:13.758446 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:13 crc kubenswrapper[4985]: I0127 08:54:13.758528 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:13 crc kubenswrapper[4985]: I0127 08:54:13.758544 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:13 crc kubenswrapper[4985]: I0127 08:54:13.758565 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:13 crc kubenswrapper[4985]: I0127 08:54:13.758583 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:13Z","lastTransitionTime":"2026-01-27T08:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:13 crc kubenswrapper[4985]: I0127 08:54:13.760138 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kqdf4_c6239c91-d93d-4db8-ac4b-d44ddbc7c100/ovnkube-controller/1.log" Jan 27 08:54:13 crc kubenswrapper[4985]: I0127 08:54:13.761233 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kqdf4_c6239c91-d93d-4db8-ac4b-d44ddbc7c100/ovnkube-controller/0.log" Jan 27 08:54:13 crc kubenswrapper[4985]: I0127 08:54:13.764186 4985 generic.go:334] "Generic (PLEG): container finished" podID="c6239c91-d93d-4db8-ac4b-d44ddbc7c100" containerID="b1516da9c2886f2df2975067fa6c3d9406f6ee8cf3adb921b844d3c0a6b3cc40" exitCode=1 Jan 27 08:54:13 crc kubenswrapper[4985]: I0127 08:54:13.764239 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" event={"ID":"c6239c91-d93d-4db8-ac4b-d44ddbc7c100","Type":"ContainerDied","Data":"b1516da9c2886f2df2975067fa6c3d9406f6ee8cf3adb921b844d3c0a6b3cc40"} Jan 27 08:54:13 crc kubenswrapper[4985]: I0127 08:54:13.764290 4985 scope.go:117] "RemoveContainer" containerID="bb60f46389d2ed84459b7955befd2fa9f2c11e58cf46f8d5cba4d36c514cbbad" Jan 27 08:54:13 crc kubenswrapper[4985]: I0127 08:54:13.765203 4985 scope.go:117] "RemoveContainer" containerID="b1516da9c2886f2df2975067fa6c3d9406f6ee8cf3adb921b844d3c0a6b3cc40" Jan 27 08:54:13 crc kubenswrapper[4985]: E0127 08:54:13.765566 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-kqdf4_openshift-ovn-kubernetes(c6239c91-d93d-4db8-ac4b-d44ddbc7c100)\"" pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" podUID="c6239c91-d93d-4db8-ac4b-d44ddbc7c100" Jan 27 08:54:13 crc kubenswrapper[4985]: I0127 08:54:13.789057 4985 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7e08bcf-6937-4593-9736-170df821dd88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1dab42ffc04840ad2205b50c568ece39850188ccd9f97d8186c7f0b86e06805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7f7c69cc8bc134cfbf3552f10eb7abd8b5df115fef6ce86a2658
259a9635bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0450f271c01ac528419de0165dd9b88b8d5913062a7cd5affce7050842f91a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b33bda4f575f8f14a3439f00a39eda2ee61f7b38500f5372c17d014df1098535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c502ce0c2374fb1b79cd8dc7e7c9eba4a67f998fee012827524d775eccbe4de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d423f866a41d6c8a926fbabe4f0a69519e10a9448502c6363b35cb550ba904d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d423f866a41d6c8a926fbabe4f0a6
9519e10a9448502c6363b35cb550ba904d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://917f905b67f494d8d44f5a00e53ae72cb9c2b307f7e7da17af7fd20db4ed8704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://917f905b67f494d8d44f5a00e53ae72cb9c2b307f7e7da17af7fd20db4ed8704\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8692ba1b6e917a57164d549dba0fc26ad49b14b9bc8a39772daddfcd5f70a008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8692ba1b6e917a57164d549dba0fc26ad49b14b9bc8a39772daddfcd5f70a008\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}
,{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:13Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:13 crc kubenswrapper[4985]: I0127 08:54:13.804741 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5z8px" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7997cb84-9997-4cf4-8794-2eb145a5c324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://520da40d4496e0fd47e4e091e9eba349e1bd3bf2f97898f6aac53a1b1b8925f6\\\",\\\"image\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkwj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5z8px\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:13Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:13 crc kubenswrapper[4985]: I0127 08:54:13.826080 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rfnvj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea1cd1b8-a185-461d-9302-aa03be205225\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67af065c12addbef44849906b964718074de1f0d7a0b87a028bf989ec28f82ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3459a9dccba77fc189ecac1d0186520dfbd4ce6ea40d9e5b2fdcf2ddabd7875d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3459a9dccba77fc189ecac1d0186520dfbd4ce6ea40d9e5b2fdcf2ddabd7875d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d313b9f0d95603fdc152e96acf5d788d7792df3413c3b8bc25aad20bb702c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d313b9f0d95603fdc152e96acf5d788d7792df3413c3b8bc25aad20bb702c5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:04Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4035a5eff34b30f1fab897a8d8b6026437a1ab1f521f062241a79d070b94bdc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4035a5eff34b30f1fab897a8d8b6026437a1ab1f521f062241a79d070b94bdc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6da7
75953e78b30b1f606e26608dbedae8e08fb12eba87f65356bc439079f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6da775953e78b30b1f606e26608dbedae8e08fb12eba87f65356bc439079f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b32797fe9e64f0e4f0332fa40261fb0feb75b6dc783ff78b0aea5ccb0d4a9ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b32797fe9e64f0e4f0332fa40261fb0feb75b6dc783ff78b0aea5ccb0d4a9ee5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:08Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcaa036060edc039a0c9621176b80823cb9913176b1a1e6b06aa033a8e479b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcaa036060edc039a0c9621176b80823cb9913176b1a1e6b06aa033a8e479b96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rfnvj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:13Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:13 crc kubenswrapper[4985]: I0127 08:54:13.843457 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a4e626ecb5f5c1f869a0271e66541b76d4ab763322e2d626c4d86af64bccad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\
\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:13Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:13 crc kubenswrapper[4985]: I0127 08:54:13.861199 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:13 crc kubenswrapper[4985]: I0127 08:54:13.861248 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:13 crc kubenswrapper[4985]: I0127 08:54:13.861261 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:13 crc kubenswrapper[4985]: I0127 08:54:13.861283 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:13 crc kubenswrapper[4985]: I0127 08:54:13.861296 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:13Z","lastTransitionTime":"2026-01-27T08:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:13 crc kubenswrapper[4985]: I0127 08:54:13.864009 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:13Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:13 crc kubenswrapper[4985]: I0127 08:54:13.886994 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cqdrf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ddda14a-730e-4c1f-afea-07c95221ba04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c411afefb82d30542592e4959d2495036c0408459ba2a16aff75223ad0d29287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-258cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cqdrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:13Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:13 crc kubenswrapper[4985]: I0127 08:54:13.903748 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9155a9e-2bdf-4f6f-baa7-0d7c6baac37d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f776820a00f7ca6b6867a188ee633debccb2bc22aaff8c659e452c2667933e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba179c4dde0de76f4a8dbfb8ceb98d2e0dbead5e955bd71171914fd913894412\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f89
45c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dd58dd369dc5654aa8ab2d34c77a69607b51e97ae36947b88caf6a2de7c5fbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd67389a909eaadfb147d17b98a2147bb96aea606ba994568389cb5be9ee6f93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b491f71d942363b5d2bab13a5219b0eb9b7f7617c0978bfe012946aa241a13c1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T08:53:
55Z\\\",\\\"message\\\":\\\" 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 08:53:55.866201 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 08:53:55.866247 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 08:53:55.866252 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 08:53:55.866257 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0127 08:53:55.866955 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-231041040/tls.crt::/tmp/serving-cert-231041040/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769504019\\\\\\\\\\\\\\\" (2026-01-27 08:53:39 +0000 UTC to 2026-02-26 08:53:40 +0000 UTC (now=2026-01-27 08:53:55.866923826 +0000 UTC))\\\\\\\"\\\\nI0127 08:53:55.867136 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769504030\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769504030\\\\\\\\\\\\\\\" (2026-01-27 07:53:50 +0000 UTC to 2027-01-27 07:53:50 +0000 UTC (now=2026-01-27 08:53:55.867111136 +0000 UTC))\\\\\\\"\\\\nI0127 08:53:55.867161 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 08:53:55.867183 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0127 
08:53:55.867215 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-231041040/tls.crt::/tmp/serving-cert-231041040/tls.key\\\\\\\"\\\\nI0127 08:53:55.867343 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0127 08:53:55.868835 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b09da7bae5a0f15ab2c5463373193603d1412170828b99490007cee7b067df63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b
335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:13Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:13 crc kubenswrapper[4985]: I0127 08:54:13.923150 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:13Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:13 crc kubenswrapper[4985]: I0127 08:54:13.945045 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://740b7fd6437a51492236c929096639016d34d684d2742404a428355b3d9ee51e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e53f9d50e5099f9c55a0bf8a63c31bf7687c7b9df334251189af1c4c02df5dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b176f2daca0d26fc9e3d30a8547b5526c90221ce2932c01c876988e8606b4dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dbb8096184a88c573060b14b7b5cac9a5abdc5af97ad7efdc50ba216e77522c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f164f612d1435a7f2a6677cb810bc836a44faa77952909e4290e31806ae631ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bfc598056438a398f1ea20e047af950aa776a7e43bdb694728c1c138dff6440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1516da9c2886f2df2975067fa6c3d9406f6ee8cf3adb921b844d3c0a6b3cc40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb60f46389d2ed84459b7955befd2fa9f2c11e58cf46f8d5cba4d36c514cbbad\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T08:54:12Z\\\",\\\"message\\\":\\\"minpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 08:54:12.627307 6300 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 08:54:12.627581 6300 reflector.go:311] Stopping 
reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 08:54:12.627610 6300 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 08:54:12.627741 6300 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 08:54:12.627937 6300 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0127 08:54:12.628200 6300 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0127 08:54:12.628793 6300 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1516da9c2886f2df2975067fa6c3d9406f6ee8cf3adb921b844d3c0a6b3cc40\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T08:54:13Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 08:54:13.671481 6426 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 08:54:13.671555 6426 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 08:54:13.671576 6426 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 08:54:13.671583 6426 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 08:54:13.671598 6426 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 08:54:13.671610 
6426 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 08:54:13.671623 6426 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 08:54:13.671622 6426 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 08:54:13.671695 6426 factory.go:656] Stopping watch factory\\\\nI0127 08:54:13.671733 6426 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 08:54:13.671748 6426 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 08:54:13.671762 6426 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 08:54:13.671770 6426 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 08:54:13.671779 6426 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 08:54:13.671787 6426 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/netw
orks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a1f1e90a882fa697c344cb87db77d2f0b02b33920f6555ca549ad51b1fb5c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\
":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c1c716f751dd0e139554e4bef4fc33694f4afa5482605f651326882af8d99c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c1c716f751dd0e139554e4bef4fc33694f4afa5482605f651326882af8d99c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kqdf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:13Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:13 crc kubenswrapper[4985]: I0127 08:54:13.959924 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46f88416-c54a-4ab5-a4d3-07f71bae9c33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d59e5d6d903bfaaedef392fdd94df2da1e4d965ddeb6b25d64b225725b39dd0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c207bc405194647411332c492e34b0c95d3321cf626a57a4a750fbcea34f54ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8ce06c7792357df5655ff0be13f13c8b06bb1a2eb27c4bbbff11b94a62f29e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7554f25de6e4649eafd52356c5c255d0a2dc73b0dd6c9a9bcea3ee383bef17db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:13Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:13 crc kubenswrapper[4985]: I0127 08:54:13.963828 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:13 crc kubenswrapper[4985]: I0127 08:54:13.963881 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:13 crc kubenswrapper[4985]: I0127 08:54:13.963893 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:13 crc kubenswrapper[4985]: I0127 08:54:13.963914 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:13 crc kubenswrapper[4985]: I0127 08:54:13.963928 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:13Z","lastTransitionTime":"2026-01-27T08:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:13 crc kubenswrapper[4985]: I0127 08:54:13.975851 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b46c56d522e6b6c9c18b7bd9fc104d6657644d0b41fdcad15ac9fb802ca5fd0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://9506e391bd8c293440c55dc71a239b25d6d6b3431ad22c5ca699baa4d09c325b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:13Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:13 crc kubenswrapper[4985]: I0127 08:54:13.992797 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:13Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:14 crc kubenswrapper[4985]: I0127 08:54:14.008876 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed8d2575835b3ab3d3197bbfbc027e2de0e2aefe10c971838bf4b72f804c4582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T08:54:14Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:14 crc kubenswrapper[4985]: I0127 08:54:14.025290 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c066dd2f-48d4-4f4f-935d-0e772678e610\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1beb432862ae348b405b1a51f1a187fd469f9df0605fa9e8a39b1c9a34ae8e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vtb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d6574971ded4a24364f08396c4f0523ddfd74f76d0bb386072ffb86c812d7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vtb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lp9n5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:14Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:14 crc kubenswrapper[4985]: I0127 08:54:14.038323 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlccz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ba17902-809a-4efc-9a8c-6f9b611c2af9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00cd6621bcfc77aeb9ecc4d6894b33cd3ef1f975b3c593e0eff812ef51978f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:14Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:14 crc kubenswrapper[4985]: I0127 08:54:14.066308 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:14 crc kubenswrapper[4985]: I0127 08:54:14.066372 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:14 crc kubenswrapper[4985]: I0127 08:54:14.066386 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:14 crc kubenswrapper[4985]: I0127 08:54:14.066408 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:14 crc kubenswrapper[4985]: I0127 08:54:14.066425 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:14Z","lastTransitionTime":"2026-01-27T08:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:14 crc kubenswrapper[4985]: I0127 08:54:14.169548 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:14 crc kubenswrapper[4985]: I0127 08:54:14.169792 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:14 crc kubenswrapper[4985]: I0127 08:54:14.169806 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:14 crc kubenswrapper[4985]: I0127 08:54:14.169829 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:14 crc kubenswrapper[4985]: I0127 08:54:14.169848 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:14Z","lastTransitionTime":"2026-01-27T08:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:14 crc kubenswrapper[4985]: I0127 08:54:14.272979 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:14 crc kubenswrapper[4985]: I0127 08:54:14.273038 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:14 crc kubenswrapper[4985]: I0127 08:54:14.273054 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:14 crc kubenswrapper[4985]: I0127 08:54:14.273077 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:14 crc kubenswrapper[4985]: I0127 08:54:14.273092 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:14Z","lastTransitionTime":"2026-01-27T08:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:14 crc kubenswrapper[4985]: I0127 08:54:14.376151 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:14 crc kubenswrapper[4985]: I0127 08:54:14.376204 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:14 crc kubenswrapper[4985]: I0127 08:54:14.376217 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:14 crc kubenswrapper[4985]: I0127 08:54:14.376237 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:14 crc kubenswrapper[4985]: I0127 08:54:14.376249 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:14Z","lastTransitionTime":"2026-01-27T08:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:14 crc kubenswrapper[4985]: I0127 08:54:14.416672 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 11:50:43.460917998 +0000 UTC Jan 27 08:54:14 crc kubenswrapper[4985]: I0127 08:54:14.451113 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 08:54:14 crc kubenswrapper[4985]: I0127 08:54:14.451191 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 08:54:14 crc kubenswrapper[4985]: I0127 08:54:14.451296 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 08:54:14 crc kubenswrapper[4985]: E0127 08:54:14.451292 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 08:54:14 crc kubenswrapper[4985]: E0127 08:54:14.451367 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 08:54:14 crc kubenswrapper[4985]: E0127 08:54:14.451425 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 08:54:14 crc kubenswrapper[4985]: I0127 08:54:14.479818 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:14 crc kubenswrapper[4985]: I0127 08:54:14.479873 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:14 crc kubenswrapper[4985]: I0127 08:54:14.479885 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:14 crc kubenswrapper[4985]: I0127 08:54:14.479904 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:14 crc kubenswrapper[4985]: I0127 08:54:14.479917 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:14Z","lastTransitionTime":"2026-01-27T08:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:14 crc kubenswrapper[4985]: I0127 08:54:14.582476 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:14 crc kubenswrapper[4985]: I0127 08:54:14.582540 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:14 crc kubenswrapper[4985]: I0127 08:54:14.582553 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:14 crc kubenswrapper[4985]: I0127 08:54:14.582573 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:14 crc kubenswrapper[4985]: I0127 08:54:14.582585 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:14Z","lastTransitionTime":"2026-01-27T08:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:14 crc kubenswrapper[4985]: I0127 08:54:14.685203 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:14 crc kubenswrapper[4985]: I0127 08:54:14.685256 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:14 crc kubenswrapper[4985]: I0127 08:54:14.685266 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:14 crc kubenswrapper[4985]: I0127 08:54:14.685284 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:14 crc kubenswrapper[4985]: I0127 08:54:14.685294 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:14Z","lastTransitionTime":"2026-01-27T08:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:14 crc kubenswrapper[4985]: I0127 08:54:14.783750 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kqdf4_c6239c91-d93d-4db8-ac4b-d44ddbc7c100/ovnkube-controller/1.log" Jan 27 08:54:14 crc kubenswrapper[4985]: I0127 08:54:14.787297 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:14 crc kubenswrapper[4985]: I0127 08:54:14.787332 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:14 crc kubenswrapper[4985]: I0127 08:54:14.787341 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:14 crc kubenswrapper[4985]: I0127 08:54:14.787358 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:14 crc kubenswrapper[4985]: I0127 08:54:14.787369 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:14Z","lastTransitionTime":"2026-01-27T08:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:14 crc kubenswrapper[4985]: I0127 08:54:14.788084 4985 scope.go:117] "RemoveContainer" containerID="b1516da9c2886f2df2975067fa6c3d9406f6ee8cf3adb921b844d3c0a6b3cc40" Jan 27 08:54:14 crc kubenswrapper[4985]: E0127 08:54:14.788274 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-kqdf4_openshift-ovn-kubernetes(c6239c91-d93d-4db8-ac4b-d44ddbc7c100)\"" pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" podUID="c6239c91-d93d-4db8-ac4b-d44ddbc7c100" Jan 27 08:54:14 crc kubenswrapper[4985]: I0127 08:54:14.802938 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a4e626ecb5f5c1f869a0271e66541b76d4ab763322e2d626c4d86af64bccad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-
operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:14Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:14 crc kubenswrapper[4985]: I0127 08:54:14.820067 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:14Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:14 crc kubenswrapper[4985]: I0127 08:54:14.838308 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cqdrf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ddda14a-730e-4c1f-afea-07c95221ba04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c411afefb82d30542592e4959d2495036c0408459ba2a16aff75223ad0d29287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-258cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cqdrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:14Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:14 crc kubenswrapper[4985]: I0127 08:54:14.857012 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9155a9e-2bdf-4f6f-baa7-0d7c6baac37d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f776820a00f7ca6b6867a188ee633debccb2bc22aaff8c659e452c2667933e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba179c4dde0de76f4a8dbfb8ceb98d2e0dbead5e955bd71171914fd913894412\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f89
45c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dd58dd369dc5654aa8ab2d34c77a69607b51e97ae36947b88caf6a2de7c5fbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd67389a909eaadfb147d17b98a2147bb96aea606ba994568389cb5be9ee6f93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b491f71d942363b5d2bab13a5219b0eb9b7f7617c0978bfe012946aa241a13c1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T08:53:
55Z\\\",\\\"message\\\":\\\" 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 08:53:55.866201 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 08:53:55.866247 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 08:53:55.866252 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 08:53:55.866257 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0127 08:53:55.866955 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-231041040/tls.crt::/tmp/serving-cert-231041040/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769504019\\\\\\\\\\\\\\\" (2026-01-27 08:53:39 +0000 UTC to 2026-02-26 08:53:40 +0000 UTC (now=2026-01-27 08:53:55.866923826 +0000 UTC))\\\\\\\"\\\\nI0127 08:53:55.867136 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769504030\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769504030\\\\\\\\\\\\\\\" (2026-01-27 07:53:50 +0000 UTC to 2027-01-27 07:53:50 +0000 UTC (now=2026-01-27 08:53:55.867111136 +0000 UTC))\\\\\\\"\\\\nI0127 08:53:55.867161 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 08:53:55.867183 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0127 
08:53:55.867215 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-231041040/tls.crt::/tmp/serving-cert-231041040/tls.key\\\\\\\"\\\\nI0127 08:53:55.867343 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0127 08:53:55.868835 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b09da7bae5a0f15ab2c5463373193603d1412170828b99490007cee7b067df63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b
335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:14Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:14 crc kubenswrapper[4985]: I0127 08:54:14.872812 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:14Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:14 crc kubenswrapper[4985]: I0127 08:54:14.890101 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:14 crc kubenswrapper[4985]: I0127 08:54:14.890171 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 08:54:14 crc kubenswrapper[4985]: I0127 08:54:14.890186 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:14 crc kubenswrapper[4985]: I0127 08:54:14.890205 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:14 crc kubenswrapper[4985]: I0127 08:54:14.890219 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:14Z","lastTransitionTime":"2026-01-27T08:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:14 crc kubenswrapper[4985]: I0127 08:54:14.895958 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://740b7fd6437a51492236c929096639016d34d684d2742404a428355b3d9ee51e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e53f9d50e5099f9c55a0bf8a63c31bf7687c7b9df334251189af1c4c02df5dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b176f2daca0d26fc9e3d30a8547b5526c90221ce2932c01c876988e8606b4dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dbb8096184a88c573060b14b7b5cac9a5abdc5af97ad7efdc50ba216e77522c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f164f612d1435a7f2a6677cb810bc836a44faa77952909e4290e31806ae631ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bfc598056438a398f1ea20e047af950aa776a7e43bdb694728c1c138dff6440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1516da9c2886f2df2975067fa6c3d9406f6ee8cf3adb921b844d3c0a6b3cc40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1516da9c2886f2df2975067fa6c3d9406f6ee8cf3adb921b844d3c0a6b3cc40\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T08:54:13Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 08:54:13.671481 6426 
handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 08:54:13.671555 6426 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 08:54:13.671576 6426 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 08:54:13.671583 6426 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 08:54:13.671598 6426 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 08:54:13.671610 6426 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 08:54:13.671623 6426 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 08:54:13.671622 6426 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 08:54:13.671695 6426 factory.go:656] Stopping watch factory\\\\nI0127 08:54:13.671733 6426 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 08:54:13.671748 6426 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 08:54:13.671762 6426 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 08:54:13.671770 6426 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 08:54:13.671779 6426 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 08:54:13.671787 6426 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-kqdf4_openshift-ovn-kubernetes(c6239c91-d93d-4db8-ac4b-d44ddbc7c100)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a1f1e90a882fa697c344cb87db77d2f0b02b33920f6555ca549ad51b1fb5c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c1c716f751dd0e139554e4bef4fc33694f4afa5482605f651326882af8d99c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c1c716f751dd0e139
554e4bef4fc33694f4afa5482605f651326882af8d99c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kqdf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:14Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:14 crc kubenswrapper[4985]: I0127 08:54:14.910950 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c066dd2f-48d4-4f4f-935d-0e772678e610\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1beb432862ae348b405b1a51f1a187fd469f9df0605fa9e8a39b1c9a34ae8e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vtb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d6574971ded4a24364f08396c4f0523ddfd74f7
6d0bb386072ffb86c812d7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vtb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lp9n5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:14Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:14 crc kubenswrapper[4985]: I0127 08:54:14.928316 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlccz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ba17902-809a-4efc-9a8c-6f9b611c2af9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00cd6621bcfc77aeb9ecc4d6894b33cd3ef1f975b3c593e0eff812ef51978f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:14Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:14 crc kubenswrapper[4985]: I0127 08:54:14.944197 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46f88416-c54a-4ab5-a4d3-07f71bae9c33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d59e5d6d903bfaaedef392fdd94df2da1e4d965ddeb6b25d64b225725b39dd0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c207bc405194647411332c492e34b0c95d3321cf626a57a4a750fbcea34f54ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8ce06c7792357df5655ff0be13f13c8b06bb1a2eb27c4bbbff11b94a62f29e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7554f25de6e4649eafd52356c5c255d0a2dc73b0dd6c9a9bcea3ee383bef17db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:14Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:14 crc kubenswrapper[4985]: I0127 08:54:14.961430 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b46c56d522e6b6c9c18b7bd9fc104d6657644d0b41fdcad15ac9fb802ca5fd0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9506e391bd8c293440c55dc71a239b25d6d6b3431ad22c5ca699baa4d09c325b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:14Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:14 crc kubenswrapper[4985]: I0127 08:54:14.978765 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:14Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:14 crc kubenswrapper[4985]: I0127 08:54:14.993634 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed8d2575835b3ab3d3197bbfbc027e2de0e2aefe10c971838bf4b72f804c4582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T08:54:14Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:14 crc kubenswrapper[4985]: I0127 08:54:14.993787 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:14 crc kubenswrapper[4985]: I0127 08:54:14.993884 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:14 crc kubenswrapper[4985]: I0127 08:54:14.993897 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:14 crc kubenswrapper[4985]: I0127 08:54:14.993914 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:14 crc kubenswrapper[4985]: I0127 08:54:14.993926 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:14Z","lastTransitionTime":"2026-01-27T08:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:15 crc kubenswrapper[4985]: I0127 08:54:15.023163 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7e08bcf-6937-4593-9736-170df821dd88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1dab42ffc04840ad2205b50c568ece39850188ccd9f97d8186c7f0b86e06805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7f7c69cc8bc134cfbf3552f10eb7abd8b5df115fef6ce86a2658259a9635bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0450f271c01ac528419de0165dd9b88b8d5913062a7cd5affce7050842f91a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b33bda4f575f8f14a3439f00a39eda2ee61f7b38500f5372c17d014df1098535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c502ce0c2374fb1b79cd8dc7e7c9eba4a67f998fee012827524d775eccbe4de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d423f866a41d6c8a926fbabe4f0a69519e10a9448502c6363b35cb550ba904d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d423f866a41d6c8a926fbabe4f0a69519e10a9448502c6363b35cb550ba904d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://917f905b67f494d8d44f5a00e53ae72cb9c2b307f7e7da17af7fd20db4ed8704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://917f905b67f494d8d44f5a00e53ae72cb9c2b307f7e7da17af7fd20db4ed8704\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8692ba1b6e917a57164d549dba0fc26ad49b14b9bc8a39772daddfcd5f70a008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8692ba1b6e917a57164d549dba0fc26ad49b14b9bc8a39772daddfcd5f70a008\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-27T08:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:15Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:15 crc kubenswrapper[4985]: I0127 08:54:15.034902 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5z8px" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7997cb84-9997-4cf4-8794-2eb145a5c324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://520da40d4496e0fd47e4e091e9eba349e1bd3bf2f97898f6aac53a1b1b8925f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkwj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5z8px\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:15Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:15 crc kubenswrapper[4985]: I0127 08:54:15.053115 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rfnvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea1cd1b8-a185-461d-9302-aa03be205225\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67af065c12addbef44849906b964718074de1f0d7a0b87a028bf989ec28f82ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3459a9dccba77fc189ecac1d0186520dfbd4ce6ea40d9e5b2fdcf2ddabd7875d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3459a9dccba77fc189ecac1d0186520dfbd4ce6ea40d9e5b2fdcf2ddabd7875d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d313b9f0d95603fdc152e96acf5d788d7792df3413c3b8bc25aad20bb702c5d\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d313b9f0d95603fdc152e96acf5d788d7792df3413c3b8bc25aad20bb702c5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4035a5eff34b30f1fab897a8d8b6026437a1ab1f521f062241a79d070b94bdc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4035a5eff34b30f1fab897a8d8b6026437a1ab1f521f062241a79d070b94bdc1\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6da775953e78b30b1f606e26608dbedae8e08fb12eba87f65356bc439079f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6da775953e78b30b1f606e26608dbedae8e08fb12eba87f65356bc439079f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b32797fe9e64f0e4f0332fa40261fb0f
eb75b6dc783ff78b0aea5ccb0d4a9ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b32797fe9e64f0e4f0332fa40261fb0feb75b6dc783ff78b0aea5ccb0d4a9ee5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcaa036060edc039a0c9621176b80823cb9913176b1a1e6b06aa033a8e479b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcaa036060edc039a0c9621176b80823cb9913176b1a1e6b06aa033a8e479b96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-01-27T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rfnvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:15Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:15 crc kubenswrapper[4985]: I0127 08:54:15.096686 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:15 crc kubenswrapper[4985]: I0127 08:54:15.096743 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:15 crc kubenswrapper[4985]: I0127 08:54:15.096757 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:15 crc kubenswrapper[4985]: I0127 08:54:15.096783 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:15 crc kubenswrapper[4985]: I0127 08:54:15.096802 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:15Z","lastTransitionTime":"2026-01-27T08:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:15 crc kubenswrapper[4985]: I0127 08:54:15.198965 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:15 crc kubenswrapper[4985]: I0127 08:54:15.199016 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:15 crc kubenswrapper[4985]: I0127 08:54:15.199029 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:15 crc kubenswrapper[4985]: I0127 08:54:15.199048 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:15 crc kubenswrapper[4985]: I0127 08:54:15.199058 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:15Z","lastTransitionTime":"2026-01-27T08:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:15 crc kubenswrapper[4985]: I0127 08:54:15.302578 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:15 crc kubenswrapper[4985]: I0127 08:54:15.302661 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:15 crc kubenswrapper[4985]: I0127 08:54:15.302684 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:15 crc kubenswrapper[4985]: I0127 08:54:15.302731 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:15 crc kubenswrapper[4985]: I0127 08:54:15.302756 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:15Z","lastTransitionTime":"2026-01-27T08:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:15 crc kubenswrapper[4985]: I0127 08:54:15.406177 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:15 crc kubenswrapper[4985]: I0127 08:54:15.406236 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:15 crc kubenswrapper[4985]: I0127 08:54:15.406249 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:15 crc kubenswrapper[4985]: I0127 08:54:15.406272 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:15 crc kubenswrapper[4985]: I0127 08:54:15.406286 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:15Z","lastTransitionTime":"2026-01-27T08:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:15 crc kubenswrapper[4985]: I0127 08:54:15.417815 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 15:29:28.911176746 +0000 UTC Jan 27 08:54:15 crc kubenswrapper[4985]: I0127 08:54:15.509548 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:15 crc kubenswrapper[4985]: I0127 08:54:15.509609 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:15 crc kubenswrapper[4985]: I0127 08:54:15.509626 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:15 crc kubenswrapper[4985]: I0127 08:54:15.509654 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:15 crc kubenswrapper[4985]: I0127 08:54:15.509673 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:15Z","lastTransitionTime":"2026-01-27T08:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:15 crc kubenswrapper[4985]: I0127 08:54:15.613235 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:15 crc kubenswrapper[4985]: I0127 08:54:15.613307 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:15 crc kubenswrapper[4985]: I0127 08:54:15.613330 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:15 crc kubenswrapper[4985]: I0127 08:54:15.613362 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:15 crc kubenswrapper[4985]: I0127 08:54:15.613395 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:15Z","lastTransitionTime":"2026-01-27T08:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:15 crc kubenswrapper[4985]: I0127 08:54:15.717481 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:15 crc kubenswrapper[4985]: I0127 08:54:15.717567 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:15 crc kubenswrapper[4985]: I0127 08:54:15.717583 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:15 crc kubenswrapper[4985]: I0127 08:54:15.717609 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:15 crc kubenswrapper[4985]: I0127 08:54:15.717626 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:15Z","lastTransitionTime":"2026-01-27T08:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:15 crc kubenswrapper[4985]: I0127 08:54:15.768237 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s74hq"] Jan 27 08:54:15 crc kubenswrapper[4985]: I0127 08:54:15.768801 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s74hq" Jan 27 08:54:15 crc kubenswrapper[4985]: I0127 08:54:15.771848 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 27 08:54:15 crc kubenswrapper[4985]: I0127 08:54:15.772946 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 27 08:54:15 crc kubenswrapper[4985]: I0127 08:54:15.792684 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a4e626ecb5f5c1f869a0271e66541b76d4ab763322e2d626c4d86af64bccad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\"
:[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:15Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:15 crc kubenswrapper[4985]: I0127 08:54:15.804598 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hg5xp\" (UniqueName: \"kubernetes.io/projected/ad423d26-ea00-4a86-8eed-bba6433ce382-kube-api-access-hg5xp\") pod \"ovnkube-control-plane-749d76644c-s74hq\" (UID: \"ad423d26-ea00-4a86-8eed-bba6433ce382\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s74hq" Jan 27 08:54:15 crc kubenswrapper[4985]: I0127 08:54:15.804695 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ad423d26-ea00-4a86-8eed-bba6433ce382-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-s74hq\" (UID: \"ad423d26-ea00-4a86-8eed-bba6433ce382\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s74hq" Jan 27 08:54:15 crc kubenswrapper[4985]: I0127 08:54:15.804734 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ad423d26-ea00-4a86-8eed-bba6433ce382-env-overrides\") pod 
\"ovnkube-control-plane-749d76644c-s74hq\" (UID: \"ad423d26-ea00-4a86-8eed-bba6433ce382\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s74hq" Jan 27 08:54:15 crc kubenswrapper[4985]: I0127 08:54:15.804808 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ad423d26-ea00-4a86-8eed-bba6433ce382-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-s74hq\" (UID: \"ad423d26-ea00-4a86-8eed-bba6433ce382\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s74hq" Jan 27 08:54:15 crc kubenswrapper[4985]: I0127 08:54:15.816645 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:15Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:15 crc kubenswrapper[4985]: I0127 08:54:15.820759 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:15 crc kubenswrapper[4985]: I0127 08:54:15.820794 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:15 crc kubenswrapper[4985]: I0127 08:54:15.820806 4985 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:15 crc kubenswrapper[4985]: I0127 08:54:15.820826 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:15 crc kubenswrapper[4985]: I0127 08:54:15.820842 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:15Z","lastTransitionTime":"2026-01-27T08:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:15 crc kubenswrapper[4985]: I0127 08:54:15.835925 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cqdrf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ddda14a-730e-4c1f-afea-07c95221ba04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c411afefb82d30542592e4959d2495036c0408459ba
2a16aff75223ad0d29287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernet
es.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-258cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cqdrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:15Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:15 crc kubenswrapper[4985]: I0127 08:54:15.852152 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:15Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:15 crc kubenswrapper[4985]: I0127 08:54:15.876474 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://740b7fd6437a51492236c929096639016d34d684d2742404a428355b3d9ee51e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e53f9d50e5099f9c55a0bf8a63c31bf7687c7b9df334251189af1c4c02df5dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b176f2daca0d26fc9e3d30a8547b5526c90221ce2932c01c876988e8606b4dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dbb8096184a88c573060b14b7b5cac9a5abdc5af97ad7efdc50ba216e77522c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f164f612d1435a7f2a6677cb810bc836a44faa77952909e4290e31806ae631ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bfc598056438a398f1ea20e047af950aa776a7e43bdb694728c1c138dff6440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1516da9c2886f2df2975067fa6c3d9406f6ee8cf3adb921b844d3c0a6b3cc40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1516da9c2886f2df2975067fa6c3d9406f6ee8cf3adb921b844d3c0a6b3cc40\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T08:54:13Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 08:54:13.671481 6426 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 08:54:13.671555 6426 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 08:54:13.671576 6426 handler.go:190] Sending *v1.Namespace event handler 1 for 
removal\\\\nI0127 08:54:13.671583 6426 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 08:54:13.671598 6426 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 08:54:13.671610 6426 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 08:54:13.671623 6426 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 08:54:13.671622 6426 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 08:54:13.671695 6426 factory.go:656] Stopping watch factory\\\\nI0127 08:54:13.671733 6426 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 08:54:13.671748 6426 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 08:54:13.671762 6426 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 08:54:13.671770 6426 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 08:54:13.671779 6426 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 08:54:13.671787 6426 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-kqdf4_openshift-ovn-kubernetes(c6239c91-d93d-4db8-ac4b-d44ddbc7c100)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a1f1e90a882fa697c344cb87db77d2f0b02b33920f6555ca549ad51b1fb5c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c1c716f751dd0e139554e4bef4fc33694f4afa5482605f651326882af8d99c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c1c716f751dd0e139
554e4bef4fc33694f4afa5482605f651326882af8d99c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kqdf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:15Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:15 crc kubenswrapper[4985]: I0127 08:54:15.894277 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9155a9e-2bdf-4f6f-baa7-0d7c6baac37d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f776820a00f7ca6b6867a188ee633debccb2bc22aaff8c659e452c2667933e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba179c4dde0de76f4a8dbfb8ceb98d2e0dbead5e955bd71171914fd913894412\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dd58dd369dc5654aa8ab2d34c77a69607b51e97ae36947b88caf6a2de7c5fbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd67389a909eaadfb147d17b98a2147bb96aea606ba994568389cb5be9ee6f93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b491f71d942363b5d2bab13a5219b0eb9b7f7617c0978bfe012946aa241a13c1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T08:53:55Z\\\"
,\\\"message\\\":\\\" 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 08:53:55.866201 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 08:53:55.866247 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 08:53:55.866252 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 08:53:55.866257 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0127 08:53:55.866955 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-231041040/tls.crt::/tmp/serving-cert-231041040/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769504019\\\\\\\\\\\\\\\" (2026-01-27 08:53:39 +0000 UTC to 2026-02-26 08:53:40 +0000 UTC (now=2026-01-27 08:53:55.866923826 +0000 UTC))\\\\\\\"\\\\nI0127 08:53:55.867136 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769504030\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769504030\\\\\\\\\\\\\\\" (2026-01-27 07:53:50 +0000 UTC to 2027-01-27 07:53:50 +0000 UTC (now=2026-01-27 08:53:55.867111136 +0000 UTC))\\\\\\\"\\\\nI0127 08:53:55.867161 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 08:53:55.867183 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0127 
08:53:55.867215 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-231041040/tls.crt::/tmp/serving-cert-231041040/tls.key\\\\\\\"\\\\nI0127 08:53:55.867343 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0127 08:53:55.868835 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b09da7bae5a0f15ab2c5463373193603d1412170828b99490007cee7b067df63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b
335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:15Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:15 crc kubenswrapper[4985]: I0127 08:54:15.906363 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hg5xp\" (UniqueName: \"kubernetes.io/projected/ad423d26-ea00-4a86-8eed-bba6433ce382-kube-api-access-hg5xp\") pod \"ovnkube-control-plane-749d76644c-s74hq\" (UID: \"ad423d26-ea00-4a86-8eed-bba6433ce382\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s74hq" Jan 27 08:54:15 crc kubenswrapper[4985]: I0127 08:54:15.906438 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ad423d26-ea00-4a86-8eed-bba6433ce382-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-s74hq\" (UID: \"ad423d26-ea00-4a86-8eed-bba6433ce382\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s74hq" Jan 27 08:54:15 crc 
kubenswrapper[4985]: I0127 08:54:15.906465 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ad423d26-ea00-4a86-8eed-bba6433ce382-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-s74hq\" (UID: \"ad423d26-ea00-4a86-8eed-bba6433ce382\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s74hq" Jan 27 08:54:15 crc kubenswrapper[4985]: I0127 08:54:15.906489 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ad423d26-ea00-4a86-8eed-bba6433ce382-env-overrides\") pod \"ovnkube-control-plane-749d76644c-s74hq\" (UID: \"ad423d26-ea00-4a86-8eed-bba6433ce382\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s74hq" Jan 27 08:54:15 crc kubenswrapper[4985]: I0127 08:54:15.907219 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ad423d26-ea00-4a86-8eed-bba6433ce382-env-overrides\") pod \"ovnkube-control-plane-749d76644c-s74hq\" (UID: \"ad423d26-ea00-4a86-8eed-bba6433ce382\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s74hq" Jan 27 08:54:15 crc kubenswrapper[4985]: I0127 08:54:15.907897 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ad423d26-ea00-4a86-8eed-bba6433ce382-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-s74hq\" (UID: \"ad423d26-ea00-4a86-8eed-bba6433ce382\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s74hq" Jan 27 08:54:15 crc kubenswrapper[4985]: I0127 08:54:15.911532 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b46c56d522e6b6c9c18b7bd9fc104d6657644d0b41fdcad15ac9fb802ca5fd0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9506e391bd8c293440c55dc71a239b25d6d6b3431ad22c5ca699baa4d09c325b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:15Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:15 crc kubenswrapper[4985]: I0127 08:54:15.913931 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ad423d26-ea00-4a86-8eed-bba6433ce382-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-s74hq\" (UID: \"ad423d26-ea00-4a86-8eed-bba6433ce382\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s74hq" Jan 27 08:54:15 crc kubenswrapper[4985]: I0127 08:54:15.924254 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:15 crc kubenswrapper[4985]: I0127 08:54:15.924333 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:15 crc kubenswrapper[4985]: I0127 08:54:15.924356 4985 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 27 08:54:15 crc kubenswrapper[4985]: I0127 08:54:15.924385 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:15 crc kubenswrapper[4985]: I0127 08:54:15.924405 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:15Z","lastTransitionTime":"2026-01-27T08:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:15 crc kubenswrapper[4985]: I0127 08:54:15.927051 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:15Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:15 crc kubenswrapper[4985]: I0127 08:54:15.927754 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hg5xp\" (UniqueName: \"kubernetes.io/projected/ad423d26-ea00-4a86-8eed-bba6433ce382-kube-api-access-hg5xp\") pod \"ovnkube-control-plane-749d76644c-s74hq\" (UID: \"ad423d26-ea00-4a86-8eed-bba6433ce382\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s74hq" Jan 27 08:54:15 crc kubenswrapper[4985]: I0127 08:54:15.941847 4985 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed8d2575835b3ab3d3197bbfbc027e2de0e2aefe10c971838bf4b72f804c4582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:15Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:15 crc kubenswrapper[4985]: I0127 08:54:15.955277 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c066dd2f-48d4-4f4f-935d-0e772678e610\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1beb432862ae348b405b1a51f1a187fd469f9df0605fa9e8a39b1c9a34ae8e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vtb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d6574971ded4a24364f08396c4f0523ddfd74f76d0bb386072ffb86c812d7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vtb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lp9n5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:15Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:15 crc kubenswrapper[4985]: I0127 08:54:15.969899 4985 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-dlccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ba17902-809a-4efc-9a8c-6f9b611c2af9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00cd6621bcfc77aeb9ecc4d6894b33cd3ef1f975b3c593e0eff812ef51978f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:15Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:15 crc kubenswrapper[4985]: I0127 08:54:15.988261 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46f88416-c54a-4ab5-a4d3-07f71bae9c33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d59e5d6d903bfaaedef392fdd94df2da1e4d965ddeb6b25d64b225725b39dd0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c207bc405194647411332c492e34b0c95d3321cf626a57a4a750fbcea34f54ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8ce06c7792357df5655ff0be13f13c8b06bb1a2eb27c4bbbff11b94a62f29e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7554f25de6e4649eafd52356c5c255d0a2dc73b0dd6c9a9bcea3ee383bef17db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:15Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:16 crc kubenswrapper[4985]: I0127 08:54:15.999959 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5z8px" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7997cb84-9997-4cf4-8794-2eb145a5c324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://520da40d4496e0fd47e4e091e9eba349e1bd3bf2f97898f6aac53a1b1b8925f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkwj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5z8px\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:15Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:16 crc kubenswrapper[4985]: I0127 08:54:16.018035 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rfnvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea1cd1b8-a185-461d-9302-aa03be205225\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67af065c12addbef44849906b964718074de1f0d7a0b87a028bf989ec28f82ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3459a9dccba77fc189ecac1d0186520dfbd4ce6ea40d9e5b2fdcf2ddabd7875d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3459a9dccba77fc189ecac1d0186520dfbd4ce6ea40d9e5b2fdcf2ddabd7875d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d313b9f0d95603fdc152e96acf5d788d7792df3413c3b8bc25aad20bb702c5d\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d313b9f0d95603fdc152e96acf5d788d7792df3413c3b8bc25aad20bb702c5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4035a5eff34b30f1fab897a8d8b6026437a1ab1f521f062241a79d070b94bdc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4035a5eff34b30f1fab897a8d8b6026437a1ab1f521f062241a79d070b94bdc1\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6da775953e78b30b1f606e26608dbedae8e08fb12eba87f65356bc439079f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6da775953e78b30b1f606e26608dbedae8e08fb12eba87f65356bc439079f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b32797fe9e64f0e4f0332fa40261fb0f
eb75b6dc783ff78b0aea5ccb0d4a9ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b32797fe9e64f0e4f0332fa40261fb0feb75b6dc783ff78b0aea5ccb0d4a9ee5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcaa036060edc039a0c9621176b80823cb9913176b1a1e6b06aa033a8e479b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcaa036060edc039a0c9621176b80823cb9913176b1a1e6b06aa033a8e479b96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-01-27T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rfnvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:16Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:16 crc kubenswrapper[4985]: I0127 08:54:16.027506 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:16 crc kubenswrapper[4985]: I0127 08:54:16.027563 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:16 crc kubenswrapper[4985]: I0127 08:54:16.027575 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:16 crc kubenswrapper[4985]: I0127 08:54:16.027591 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:16 crc kubenswrapper[4985]: I0127 08:54:16.027602 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:16Z","lastTransitionTime":"2026-01-27T08:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:16 crc kubenswrapper[4985]: I0127 08:54:16.034767 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s74hq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad423d26-ea00-4a86-8eed-bba6433ce382\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hg5xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hg5xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-s74hq\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:16Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:16 crc kubenswrapper[4985]: I0127 08:54:16.057917 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7e08bcf-6937-4593-9736-170df821dd88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1dab42ffc04840ad2205b50c568ece39850188ccd9f97d8186c7f0b86e06805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\
"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7f7c69cc8bc134cfbf3552f10eb7abd8b5df115fef6ce86a2658259a9635bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0450f271c01ac528419de0165dd9b88b8d5913062a7cd5affce7050842f91a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b33bda4f575f8f14a3439f00a
39eda2ee61f7b38500f5372c17d014df1098535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c502ce0c2374fb1b79cd8dc7e7c9eba4a67f998fee012827524d775eccbe4de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d423f866a41d6c8a926fbabe4f0a69519e10a9448502c6363b35cb550ba904d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9
0092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d423f866a41d6c8a926fbabe4f0a69519e10a9448502c6363b35cb550ba904d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://917f905b67f494d8d44f5a00e53ae72cb9c2b307f7e7da17af7fd20db4ed8704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://917f905b67f494d8d44f5a00e53ae72cb9c2b307f7e7da17af7fd20db4ed8704\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8692ba1b6e917a57164d549dba0fc26ad49b14b9bc8a39772daddfcd5f70a008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"t
erminated\\\":{\\\"containerID\\\":\\\"cri-o://8692ba1b6e917a57164d549dba0fc26ad49b14b9bc8a39772daddfcd5f70a008\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:16Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:16 crc kubenswrapper[4985]: I0127 08:54:16.083248 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s74hq" Jan 27 08:54:16 crc kubenswrapper[4985]: I0127 08:54:16.131203 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:16 crc kubenswrapper[4985]: I0127 08:54:16.131287 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:16 crc kubenswrapper[4985]: I0127 08:54:16.131310 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:16 crc kubenswrapper[4985]: I0127 08:54:16.131340 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:16 crc kubenswrapper[4985]: I0127 08:54:16.131361 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:16Z","lastTransitionTime":"2026-01-27T08:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:16 crc kubenswrapper[4985]: I0127 08:54:16.235271 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:16 crc kubenswrapper[4985]: I0127 08:54:16.235764 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:16 crc kubenswrapper[4985]: I0127 08:54:16.235780 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:16 crc kubenswrapper[4985]: I0127 08:54:16.235806 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:16 crc kubenswrapper[4985]: I0127 08:54:16.235821 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:16Z","lastTransitionTime":"2026-01-27T08:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:16 crc kubenswrapper[4985]: I0127 08:54:16.338363 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:16 crc kubenswrapper[4985]: I0127 08:54:16.338424 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:16 crc kubenswrapper[4985]: I0127 08:54:16.338437 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:16 crc kubenswrapper[4985]: I0127 08:54:16.338454 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:16 crc kubenswrapper[4985]: I0127 08:54:16.338465 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:16Z","lastTransitionTime":"2026-01-27T08:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:16 crc kubenswrapper[4985]: I0127 08:54:16.418481 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 16:46:33.238270296 +0000 UTC Jan 27 08:54:16 crc kubenswrapper[4985]: I0127 08:54:16.441861 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:16 crc kubenswrapper[4985]: I0127 08:54:16.441917 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:16 crc kubenswrapper[4985]: I0127 08:54:16.441929 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:16 crc kubenswrapper[4985]: I0127 08:54:16.441954 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:16 crc kubenswrapper[4985]: I0127 08:54:16.441966 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:16Z","lastTransitionTime":"2026-01-27T08:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:16 crc kubenswrapper[4985]: I0127 08:54:16.451107 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 08:54:16 crc kubenswrapper[4985]: I0127 08:54:16.451112 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 08:54:16 crc kubenswrapper[4985]: I0127 08:54:16.451405 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 08:54:16 crc kubenswrapper[4985]: E0127 08:54:16.451665 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 08:54:16 crc kubenswrapper[4985]: E0127 08:54:16.451864 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 08:54:16 crc kubenswrapper[4985]: E0127 08:54:16.452006 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 08:54:16 crc kubenswrapper[4985]: I0127 08:54:16.464658 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a4e626ecb5f5c1f869a0271e66541b76d4ab763322e2d626c4d86af64bccad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\
":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:16Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:16 crc kubenswrapper[4985]: I0127 08:54:16.479026 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:16Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:16 crc kubenswrapper[4985]: I0127 08:54:16.493726 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cqdrf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ddda14a-730e-4c1f-afea-07c95221ba04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c411afefb82d30542592e4959d2495036c0408459ba2a16aff75223ad0d29287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-258cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cqdrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:16Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:16 crc kubenswrapper[4985]: I0127 08:54:16.514430 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9155a9e-2bdf-4f6f-baa7-0d7c6baac37d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f776820a00f7ca6b6867a188ee633debccb2bc22aaff8c659e452c2667933e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba179c4dde0de76f4a8dbfb8ceb98d2e0dbead5e955bd71171914fd913894412\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f89
45c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dd58dd369dc5654aa8ab2d34c77a69607b51e97ae36947b88caf6a2de7c5fbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd67389a909eaadfb147d17b98a2147bb96aea606ba994568389cb5be9ee6f93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b491f71d942363b5d2bab13a5219b0eb9b7f7617c0978bfe012946aa241a13c1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T08:53:
55Z\\\",\\\"message\\\":\\\" 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 08:53:55.866201 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 08:53:55.866247 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 08:53:55.866252 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 08:53:55.866257 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0127 08:53:55.866955 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-231041040/tls.crt::/tmp/serving-cert-231041040/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769504019\\\\\\\\\\\\\\\" (2026-01-27 08:53:39 +0000 UTC to 2026-02-26 08:53:40 +0000 UTC (now=2026-01-27 08:53:55.866923826 +0000 UTC))\\\\\\\"\\\\nI0127 08:53:55.867136 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769504030\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769504030\\\\\\\\\\\\\\\" (2026-01-27 07:53:50 +0000 UTC to 2027-01-27 07:53:50 +0000 UTC (now=2026-01-27 08:53:55.867111136 +0000 UTC))\\\\\\\"\\\\nI0127 08:53:55.867161 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 08:53:55.867183 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0127 
08:53:55.867215 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-231041040/tls.crt::/tmp/serving-cert-231041040/tls.key\\\\\\\"\\\\nI0127 08:53:55.867343 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0127 08:53:55.868835 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b09da7bae5a0f15ab2c5463373193603d1412170828b99490007cee7b067df63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b
335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:16Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:16 crc kubenswrapper[4985]: I0127 08:54:16.533936 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:16Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:16 crc kubenswrapper[4985]: I0127 08:54:16.545162 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:16 crc kubenswrapper[4985]: I0127 08:54:16.545232 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 08:54:16 crc kubenswrapper[4985]: I0127 08:54:16.545263 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:16 crc kubenswrapper[4985]: I0127 08:54:16.545291 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:16 crc kubenswrapper[4985]: I0127 08:54:16.545310 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:16Z","lastTransitionTime":"2026-01-27T08:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:16 crc kubenswrapper[4985]: I0127 08:54:16.568940 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://740b7fd6437a51492236c929096639016d34d684d2742404a428355b3d9ee51e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e53f9d50e5099f9c55a0bf8a63c31bf7687c7b9df334251189af1c4c02df5dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b176f2daca0d26fc9e3d30a8547b5526c90221ce2932c01c876988e8606b4dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dbb8096184a88c573060b14b7b5cac9a5abdc5af97ad7efdc50ba216e77522c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f164f612d1435a7f2a6677cb810bc836a44faa77952909e4290e31806ae631ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bfc598056438a398f1ea20e047af950aa776a7e43bdb694728c1c138dff6440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1516da9c2886f2df2975067fa6c3d9406f6ee8cf3adb921b844d3c0a6b3cc40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1516da9c2886f2df2975067fa6c3d9406f6ee8cf3adb921b844d3c0a6b3cc40\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T08:54:13Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 08:54:13.671481 6426 
handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 08:54:13.671555 6426 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 08:54:13.671576 6426 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 08:54:13.671583 6426 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 08:54:13.671598 6426 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 08:54:13.671610 6426 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 08:54:13.671623 6426 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 08:54:13.671622 6426 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 08:54:13.671695 6426 factory.go:656] Stopping watch factory\\\\nI0127 08:54:13.671733 6426 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 08:54:13.671748 6426 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 08:54:13.671762 6426 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 08:54:13.671770 6426 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 08:54:13.671779 6426 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 08:54:13.671787 6426 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-kqdf4_openshift-ovn-kubernetes(c6239c91-d93d-4db8-ac4b-d44ddbc7c100)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a1f1e90a882fa697c344cb87db77d2f0b02b33920f6555ca549ad51b1fb5c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c1c716f751dd0e139554e4bef4fc33694f4afa5482605f651326882af8d99c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c1c716f751dd0e139
554e4bef4fc33694f4afa5482605f651326882af8d99c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kqdf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:16Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:16 crc kubenswrapper[4985]: I0127 08:54:16.593785 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46f88416-c54a-4ab5-a4d3-07f71bae9c33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d59e5d6d903bfaaedef392fdd94df2da1e4d965ddeb6b25d64b225725b39dd0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c207bc405194647411332c492e34b0c95d3321cf626a57a4a750fbcea34f54ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8ce06c7792357df5655ff0be13f13c8b06bb1a2eb27c4bbbff11b94a62f29e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7554f25de6e4649eafd52356c5c255d0a2dc73b0dd6c9a9bcea3ee383bef17db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:16Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:16 crc kubenswrapper[4985]: I0127 08:54:16.618805 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b46c56d522e6b6c9c18b7bd9fc104d6657644d0b41fdcad15ac9fb802ca5fd0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9506e391bd8c293440c55dc71a239b25d6d6b3431ad22c5ca699baa4d09c325b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:16Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:16 crc kubenswrapper[4985]: I0127 08:54:16.638390 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:16Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:16 crc kubenswrapper[4985]: I0127 08:54:16.647953 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:16 crc kubenswrapper[4985]: I0127 08:54:16.648016 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:16 crc 
kubenswrapper[4985]: I0127 08:54:16.648034 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:16 crc kubenswrapper[4985]: I0127 08:54:16.648059 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:16 crc kubenswrapper[4985]: I0127 08:54:16.648073 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:16Z","lastTransitionTime":"2026-01-27T08:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:16 crc kubenswrapper[4985]: I0127 08:54:16.650428 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed8d2575835b3ab3d3197bbfbc027e2de0e2aefe10c971838bf4b72f804c4582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T08:54:16Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:16 crc kubenswrapper[4985]: I0127 08:54:16.661967 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c066dd2f-48d4-4f4f-935d-0e772678e610\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1beb432862ae348b405b1a51f1a187fd469f9df0605fa9e8a39b1c9a34ae8e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vtb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d6574971ded4a24364f08396c4f0523ddfd74f76d0bb386072ffb86c812d7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vtb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lp9n5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:16Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:16 crc kubenswrapper[4985]: I0127 08:54:16.673651 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlccz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ba17902-809a-4efc-9a8c-6f9b611c2af9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00cd6621bcfc77aeb9ecc4d6894b33cd3ef1f975b3c593e0eff812ef51978f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:16Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:16 crc kubenswrapper[4985]: I0127 08:54:16.693804 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7e08bcf-6937-4593-9736-170df821dd88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1dab42ffc04840ad2205b50c568ece39850188ccd9f97d8186c7f0b86e06805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7f7c69cc8bc134cfbf3552f10eb7abd8b5df115fef6ce86a2658259a9635bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0450f271c01ac528419de0165dd9b88b8d5913062a7cd5affce7050842f91a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08
:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b33bda4f575f8f14a3439f00a39eda2ee61f7b38500f5372c17d014df1098535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c502ce0c2374fb1b79cd8dc7e7c9eba4a67f998fee012827524d775eccbe4de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d423f866a41d6c8a926fbabe4f0a69519e10a9448502c6363b35cb550ba904d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d423f866a41d6c8a926fbabe4f0a69519e10a9448502c6363b35cb550ba904d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://917f905b67f494d8d44f5a00e53ae72cb9c2b307f7e7da17af7fd20db4ed8704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://917f905b67f494d8d44f5a00e53ae72cb9c2b307f7e7da17af7fd20db4ed8704\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8692ba1b6e917a57164d549dba0fc26ad49b14b9bc8a39772daddfcd5f70a008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8692ba1b6e917a57164d549dba0fc26ad49b14b9bc8a39772daddfcd5f70a008\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:16Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:16 crc kubenswrapper[4985]: I0127 08:54:16.707605 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5z8px" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7997cb84-9997-4cf4-8794-2eb145a5c324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://520da40d4496e0fd47e4e091e9eba349e1bd3bf2f97898f6aac53a1b1b8925f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkwj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5z8px\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:16Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:16 crc kubenswrapper[4985]: I0127 08:54:16.726093 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rfnvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea1cd1b8-a185-461d-9302-aa03be205225\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67af065c12addbef44849906b964718074de1f0d7a0b87a028bf989ec28f82ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3459a9dccba77fc189ecac1d0186520dfbd4ce6ea40d9e5b2fdcf2ddabd7875d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3459a9dccba77fc189ecac1d0186520dfbd4ce6ea40d9e5b2fdcf2ddabd7875d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d313b9f0d95603fdc152e96acf5d788d7792df3413c3b8bc25aad20bb702c5d\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d313b9f0d95603fdc152e96acf5d788d7792df3413c3b8bc25aad20bb702c5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4035a5eff34b30f1fab897a8d8b6026437a1ab1f521f062241a79d070b94bdc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4035a5eff34b30f1fab897a8d8b6026437a1ab1f521f062241a79d070b94bdc1\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6da775953e78b30b1f606e26608dbedae8e08fb12eba87f65356bc439079f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6da775953e78b30b1f606e26608dbedae8e08fb12eba87f65356bc439079f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b32797fe9e64f0e4f0332fa40261fb0f
eb75b6dc783ff78b0aea5ccb0d4a9ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b32797fe9e64f0e4f0332fa40261fb0feb75b6dc783ff78b0aea5ccb0d4a9ee5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcaa036060edc039a0c9621176b80823cb9913176b1a1e6b06aa033a8e479b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcaa036060edc039a0c9621176b80823cb9913176b1a1e6b06aa033a8e479b96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-01-27T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rfnvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:16Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:16 crc kubenswrapper[4985]: I0127 08:54:16.744081 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s74hq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad423d26-ea00-4a86-8eed-bba6433ce382\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hg5xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hg5xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-s74hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:16Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:16 crc kubenswrapper[4985]: I0127 08:54:16.751359 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:16 crc kubenswrapper[4985]: I0127 08:54:16.751423 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:16 crc kubenswrapper[4985]: I0127 08:54:16.751437 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:16 crc kubenswrapper[4985]: I0127 08:54:16.751461 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:16 crc kubenswrapper[4985]: I0127 08:54:16.751474 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:16Z","lastTransitionTime":"2026-01-27T08:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:16 crc kubenswrapper[4985]: I0127 08:54:16.796669 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s74hq" event={"ID":"ad423d26-ea00-4a86-8eed-bba6433ce382","Type":"ContainerStarted","Data":"42a6371f6e7f2b3811af6ec717f15eff6a85c8c39caf011e1173a4fcaf20f29a"} Jan 27 08:54:16 crc kubenswrapper[4985]: I0127 08:54:16.796751 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s74hq" event={"ID":"ad423d26-ea00-4a86-8eed-bba6433ce382","Type":"ContainerStarted","Data":"031493bfd9eba63a4627b6a0ec45bc556e8a6cae213a84f7b158e2bede2da5a3"} Jan 27 08:54:16 crc kubenswrapper[4985]: I0127 08:54:16.796762 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s74hq" event={"ID":"ad423d26-ea00-4a86-8eed-bba6433ce382","Type":"ContainerStarted","Data":"1cc6396c0654d750f8df88bdca0a06028275bdc015883f72aa2ea637784e46ab"} Jan 27 08:54:16 crc kubenswrapper[4985]: I0127 08:54:16.812725 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a4e626ecb5f5c1f869a0271e66541b76d4ab763322e2d626c4d86af64bccad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:16Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:16 crc kubenswrapper[4985]: I0127 08:54:16.829148 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:16Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:16 crc kubenswrapper[4985]: I0127 08:54:16.844712 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cqdrf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ddda14a-730e-4c1f-afea-07c95221ba04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c411afefb82d30542592e4959d2495036c0408459ba2a16aff75223ad0d29287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-258cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cqdrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:16Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:16 crc kubenswrapper[4985]: I0127 08:54:16.853933 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:16 crc 
kubenswrapper[4985]: I0127 08:54:16.853971 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:16 crc kubenswrapper[4985]: I0127 08:54:16.853982 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:16 crc kubenswrapper[4985]: I0127 08:54:16.853996 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:16 crc kubenswrapper[4985]: I0127 08:54:16.854006 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:16Z","lastTransitionTime":"2026-01-27T08:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:16 crc kubenswrapper[4985]: I0127 08:54:16.865437 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9155a9e-2bdf-4f6f-baa7-0d7c6baac37d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f776820a00f7ca6b6867a188ee633debccb2bc22aaff8c659e452c2667933e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba179c4dde0de76f4a8dbfb8ceb98d2e0dbead5e955bd71171914fd913894412\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dd58dd369dc5654aa8ab2d34c77a69607b51e97ae36947b88caf6a2de7c5fbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd67389a909eaadfb147d17b98a2147bb96aea606ba994568389cb5be9ee6f93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b491f71d942363b5d2bab13a5219b0eb9b7f7617c0978bfe012946aa241a13c1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T08:53:55Z\\\"
,\\\"message\\\":\\\" 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 08:53:55.866201 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 08:53:55.866247 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 08:53:55.866252 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 08:53:55.866257 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0127 08:53:55.866955 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-231041040/tls.crt::/tmp/serving-cert-231041040/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769504019\\\\\\\\\\\\\\\" (2026-01-27 08:53:39 +0000 UTC to 2026-02-26 08:53:40 +0000 UTC (now=2026-01-27 08:53:55.866923826 +0000 UTC))\\\\\\\"\\\\nI0127 08:53:55.867136 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769504030\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769504030\\\\\\\\\\\\\\\" (2026-01-27 07:53:50 +0000 UTC to 2027-01-27 07:53:50 +0000 UTC (now=2026-01-27 08:53:55.867111136 +0000 UTC))\\\\\\\"\\\\nI0127 08:53:55.867161 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 08:53:55.867183 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0127 
08:53:55.867215 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-231041040/tls.crt::/tmp/serving-cert-231041040/tls.key\\\\\\\"\\\\nI0127 08:53:55.867343 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0127 08:53:55.868835 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b09da7bae5a0f15ab2c5463373193603d1412170828b99490007cee7b067df63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b
335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:16Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:16 crc kubenswrapper[4985]: I0127 08:54:16.878084 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:16Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:16 crc kubenswrapper[4985]: I0127 08:54:16.895795 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://740b7fd6437a51492236c929096639016d34d684d2742404a428355b3d9ee51e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e53f9d50e5099f9c55a0bf8a63c31bf7687c7b9df334251189af1c4c02df5dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b176f2daca0d26fc9e3d30a8547b5526c90221ce2932c01c876988e8606b4dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dbb8096184a88c573060b14b7b5cac9a5abdc5af97ad7efdc50ba216e77522c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f164f612d1435a7f2a6677cb810bc836a44faa77952909e4290e31806ae631ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bfc598056438a398f1ea20e047af950aa776a7e43bdb694728c1c138dff6440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1516da9c2886f2df2975067fa6c3d9406f6ee8cf3adb921b844d3c0a6b3cc40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1516da9c2886f2df2975067fa6c3d9406f6ee8cf3adb921b844d3c0a6b3cc40\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T08:54:13Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 08:54:13.671481 6426 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 08:54:13.671555 6426 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 08:54:13.671576 6426 handler.go:190] Sending *v1.Namespace event handler 1 for 
removal\\\\nI0127 08:54:13.671583 6426 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 08:54:13.671598 6426 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 08:54:13.671610 6426 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 08:54:13.671623 6426 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 08:54:13.671622 6426 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 08:54:13.671695 6426 factory.go:656] Stopping watch factory\\\\nI0127 08:54:13.671733 6426 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 08:54:13.671748 6426 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 08:54:13.671762 6426 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 08:54:13.671770 6426 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 08:54:13.671779 6426 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 08:54:13.671787 6426 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-kqdf4_openshift-ovn-kubernetes(c6239c91-d93d-4db8-ac4b-d44ddbc7c100)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a1f1e90a882fa697c344cb87db77d2f0b02b33920f6555ca549ad51b1fb5c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c1c716f751dd0e139554e4bef4fc33694f4afa5482605f651326882af8d99c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c1c716f751dd0e139
554e4bef4fc33694f4afa5482605f651326882af8d99c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kqdf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:16Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:16 crc kubenswrapper[4985]: I0127 08:54:16.910242 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c066dd2f-48d4-4f4f-935d-0e772678e610\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1beb432862ae348b405b1a51f1a187fd469f9df0605fa9e8a39b1c9a34ae8e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vtb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d6574971ded4a24364f08396c4f0523ddfd74f7
6d0bb386072ffb86c812d7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vtb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lp9n5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:16Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:16 crc kubenswrapper[4985]: I0127 08:54:16.928196 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlccz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ba17902-809a-4efc-9a8c-6f9b611c2af9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00cd6621bcfc77aeb9ecc4d6894b33cd3ef1f975b3c593e0eff812ef51978f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:16Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:16 crc kubenswrapper[4985]: I0127 08:54:16.943957 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46f88416-c54a-4ab5-a4d3-07f71bae9c33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d59e5d6d903bfaaedef392fdd94df2da1e4d965ddeb6b25d64b225725b39dd0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c207bc405194647411332c492e34b0c95d3321cf626a57a4a750fbcea34f54ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8ce06c7792357df5655ff0be13f13c8b06bb1a2eb27c4bbbff11b94a62f29e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7554f25de6e4649eafd52356c5c255d0a2dc73b0dd6c9a9bcea3ee383bef17db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:16Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:16 crc kubenswrapper[4985]: I0127 08:54:16.956658 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:16 crc kubenswrapper[4985]: I0127 08:54:16.956724 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 08:54:16 crc kubenswrapper[4985]: I0127 08:54:16.956744 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:16 crc kubenswrapper[4985]: I0127 08:54:16.956773 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:16 crc kubenswrapper[4985]: I0127 08:54:16.956793 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:16Z","lastTransitionTime":"2026-01-27T08:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:16 crc kubenswrapper[4985]: I0127 08:54:16.962141 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b46c56d522e6b6c9c18b7bd9fc104d6657644d0b41fdcad15ac9fb802ca5fd0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9506e391bd8c293440c55dc71a239b25d6d6b3431ad22c5ca699baa4d09c325b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:16Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:16 crc kubenswrapper[4985]: I0127 08:54:16.979741 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:16Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:16 crc kubenswrapper[4985]: I0127 08:54:16.992958 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed8d2575835b3ab3d3197bbfbc027e2de0e2aefe10c971838bf4b72f804c4582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T08:54:16Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:17 crc kubenswrapper[4985]: I0127 08:54:17.008876 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s74hq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad423d26-ea00-4a86-8eed-bba6433ce382\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://031493bfd9eba63a4627b6a0ec45bc556e8a6cae213a84f7b158e2bede2da5a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hg5xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42a6371f6e7f2b3811af6ec717f15eff6a85c8c39caf011e1173a4fcaf20f29a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hg5xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-s74hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:17Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:17 crc kubenswrapper[4985]: I0127 08:54:17.039961 4985 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7e08bcf-6937-4593-9736-170df821dd88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1dab42ffc04840ad2205b50c568ece39850188ccd9f97d8186c7f0b86e06805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7f7c69cc8bc134cfbf3552f10eb7abd8b5df115fef6ce86a2658259a9635bf9\\\",
\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0450f271c01ac528419de0165dd9b88b8d5913062a7cd5affce7050842f91a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b33bda4f575f8f14a3439f00a39eda2ee61f7b38500f5372c17d014df1098535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c502ce0c2374fb1b79cd8dc7e7c9eba4a67f998fee012827524d775eccbe4de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d423f866a41d6c8a926fbabe4f0a69519e10a9448502c6363b35cb550ba904d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d423f866a41d6c8a926fbabe4f0a69519e10a9448502c
6363b35cb550ba904d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://917f905b67f494d8d44f5a00e53ae72cb9c2b307f7e7da17af7fd20db4ed8704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://917f905b67f494d8d44f5a00e53ae72cb9c2b307f7e7da17af7fd20db4ed8704\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8692ba1b6e917a57164d549dba0fc26ad49b14b9bc8a39772daddfcd5f70a008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8692ba1b6e917a57164d549dba0fc26ad49b14b9bc8a39772daddfcd5f70a008\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:17Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:17 crc kubenswrapper[4985]: I0127 08:54:17.053234 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5z8px" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7997cb84-9997-4cf4-8794-2eb145a5c324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://520da40d4496e0fd47e4e091e9eba349e1bd3bf2f97898f6aac53a1b1b8925f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkwj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5z8px\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:17Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:17 crc kubenswrapper[4985]: I0127 08:54:17.059198 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:17 crc kubenswrapper[4985]: I0127 08:54:17.059231 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:17 crc kubenswrapper[4985]: I0127 08:54:17.059243 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:17 crc kubenswrapper[4985]: I0127 08:54:17.059260 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Jan 27 08:54:17 crc kubenswrapper[4985]: I0127 08:54:17.059273 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:17Z","lastTransitionTime":"2026-01-27T08:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:17 crc kubenswrapper[4985]: I0127 08:54:17.069046 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rfnvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea1cd1b8-a185-461d-9302-aa03be205225\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67af065c12addbef44849906b964718074de1f0d7a0b87a028bf989ec28f82ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3459a9dccba77fc189ecac1d0186520dfbd4ce6ea40d9e5b2fdcf2ddabd7875d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3459a9dccba77fc189ecac1d0186520dfbd4ce6ea40d9e5b2fdcf2ddabd7875d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d313b9f0d95603fdc152e
96acf5d788d7792df3413c3b8bc25aad20bb702c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d313b9f0d95603fdc152e96acf5d788d7792df3413c3b8bc25aad20bb702c5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4035a5eff34b30f1fab897a8d8b6026437a1ab1f521f062241a79d070b94bdc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4035a5eff34b30f1fab897
a8d8b6026437a1ab1f521f062241a79d070b94bdc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6da775953e78b30b1f606e26608dbedae8e08fb12eba87f65356bc439079f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6da775953e78b30b1f606e26608dbedae8e08fb12eba87f65356bc439079f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\
\"containerID\\\":\\\"cri-o://b32797fe9e64f0e4f0332fa40261fb0feb75b6dc783ff78b0aea5ccb0d4a9ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b32797fe9e64f0e4f0332fa40261fb0feb75b6dc783ff78b0aea5ccb0d4a9ee5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcaa036060edc039a0c9621176b80823cb9913176b1a1e6b06aa033a8e479b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcaa036060edc039a0c9621176b80823cb9913176b1a1e6b06aa033a8e479b96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08
:54:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rfnvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:17Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:17 crc kubenswrapper[4985]: I0127 08:54:17.161834 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:17 crc kubenswrapper[4985]: I0127 08:54:17.161924 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:17 crc kubenswrapper[4985]: I0127 08:54:17.161949 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:17 crc kubenswrapper[4985]: I0127 08:54:17.161982 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:17 crc kubenswrapper[4985]: I0127 08:54:17.162006 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:17Z","lastTransitionTime":"2026-01-27T08:54:17Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:17 crc kubenswrapper[4985]: I0127 08:54:17.265926 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:17 crc kubenswrapper[4985]: I0127 08:54:17.266021 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:17 crc kubenswrapper[4985]: I0127 08:54:17.266048 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:17 crc kubenswrapper[4985]: I0127 08:54:17.266084 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:17 crc kubenswrapper[4985]: I0127 08:54:17.266108 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:17Z","lastTransitionTime":"2026-01-27T08:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:17 crc kubenswrapper[4985]: I0127 08:54:17.286188 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-cscdv"] Jan 27 08:54:17 crc kubenswrapper[4985]: I0127 08:54:17.287002 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-cscdv" Jan 27 08:54:17 crc kubenswrapper[4985]: E0127 08:54:17.287103 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cscdv" podUID="5c870945-eecc-4954-a91b-d02cef8f98e2" Jan 27 08:54:17 crc kubenswrapper[4985]: I0127 08:54:17.307932 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed8d2575835b3ab3d3197bbfbc027e2de0e2aefe10c971838bf4b72f804c4582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27
T08:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:17Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:17 crc kubenswrapper[4985]: I0127 08:54:17.323179 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5c870945-eecc-4954-a91b-d02cef8f98e2-metrics-certs\") pod \"network-metrics-daemon-cscdv\" (UID: \"5c870945-eecc-4954-a91b-d02cef8f98e2\") " pod="openshift-multus/network-metrics-daemon-cscdv" Jan 27 08:54:17 crc kubenswrapper[4985]: I0127 08:54:17.323291 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k24x6\" (UniqueName: \"kubernetes.io/projected/5c870945-eecc-4954-a91b-d02cef8f98e2-kube-api-access-k24x6\") pod \"network-metrics-daemon-cscdv\" (UID: \"5c870945-eecc-4954-a91b-d02cef8f98e2\") " pod="openshift-multus/network-metrics-daemon-cscdv" Jan 27 08:54:17 crc kubenswrapper[4985]: I0127 08:54:17.326215 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c066dd2f-48d4-4f4f-935d-0e772678e610\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1beb432862ae348b405b1a51f1a187fd469f9df0605fa9e8a39b1c9a34ae8e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vtb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d6574971ded4a24364f08396c4f0523ddfd74f7
6d0bb386072ffb86c812d7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vtb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lp9n5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:17Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:17 crc kubenswrapper[4985]: I0127 08:54:17.344771 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlccz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ba17902-809a-4efc-9a8c-6f9b611c2af9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00cd6621bcfc77aeb9ecc4d6894b33cd3ef1f975b3c593e0eff812ef51978f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:17Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:17 crc kubenswrapper[4985]: I0127 08:54:17.365954 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46f88416-c54a-4ab5-a4d3-07f71bae9c33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d59e5d6d903bfaaedef392fdd94df2da1e4d965ddeb6b25d64b225725b39dd0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c207bc405194647411332c492e34b0c95d3321cf626a57a4a750fbcea34f54ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8ce06c7792357df5655ff0be13f13c8b06bb1a2eb27c4bbbff11b94a62f29e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7554f25de6e4649eafd52356c5c255d0a2dc73b0dd6c9a9bcea3ee383bef17db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:17Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:17 crc kubenswrapper[4985]: I0127 08:54:17.369115 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:17 crc kubenswrapper[4985]: I0127 08:54:17.369201 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 08:54:17 crc kubenswrapper[4985]: I0127 08:54:17.369222 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:17 crc kubenswrapper[4985]: I0127 08:54:17.369248 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:17 crc kubenswrapper[4985]: I0127 08:54:17.369268 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:17Z","lastTransitionTime":"2026-01-27T08:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:17 crc kubenswrapper[4985]: I0127 08:54:17.386019 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b46c56d522e6b6c9c18b7bd9fc104d6657644d0b41fdcad15ac9fb802ca5fd0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9506e391bd8c293440c55dc71a239b25d6d6b3431ad22c5ca699baa4d09c325b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:17Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:17 crc kubenswrapper[4985]: I0127 08:54:17.400977 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:17Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:17 crc kubenswrapper[4985]: I0127 08:54:17.418406 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rfnvj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea1cd1b8-a185-461d-9302-aa03be205225\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67af065c12addbef44849906b964718074de1f0d7a0b87a028bf989ec28f82ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3459a9dccba77fc189ecac1d0186520dfbd4ce6ea40d9e5b2fdcf2ddabd7875d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3459a9dccba77fc189ecac1d0186520dfbd4ce6ea40d9e5b2fdcf2ddabd7875d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d313b9f0d95603fdc152e96acf5d788d7792df3413c3b8bc25aad20bb702c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d313b9f0d95603fdc152e96acf5d788d7792df3413c3b8bc25aad20bb702c5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:04Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4035a5eff34b30f1fab897a8d8b6026437a1ab1f521f062241a79d070b94bdc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4035a5eff34b30f1fab897a8d8b6026437a1ab1f521f062241a79d070b94bdc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6da7
75953e78b30b1f606e26608dbedae8e08fb12eba87f65356bc439079f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6da775953e78b30b1f606e26608dbedae8e08fb12eba87f65356bc439079f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b32797fe9e64f0e4f0332fa40261fb0feb75b6dc783ff78b0aea5ccb0d4a9ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b32797fe9e64f0e4f0332fa40261fb0feb75b6dc783ff78b0aea5ccb0d4a9ee5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:08Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcaa036060edc039a0c9621176b80823cb9913176b1a1e6b06aa033a8e479b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcaa036060edc039a0c9621176b80823cb9913176b1a1e6b06aa033a8e479b96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rfnvj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:17Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:17 crc kubenswrapper[4985]: I0127 08:54:17.418747 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 14:47:23.739684219 +0000 UTC Jan 27 08:54:17 crc kubenswrapper[4985]: I0127 08:54:17.424641 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5c870945-eecc-4954-a91b-d02cef8f98e2-metrics-certs\") pod \"network-metrics-daemon-cscdv\" (UID: \"5c870945-eecc-4954-a91b-d02cef8f98e2\") " pod="openshift-multus/network-metrics-daemon-cscdv" Jan 27 08:54:17 crc kubenswrapper[4985]: I0127 08:54:17.424688 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k24x6\" (UniqueName: \"kubernetes.io/projected/5c870945-eecc-4954-a91b-d02cef8f98e2-kube-api-access-k24x6\") pod \"network-metrics-daemon-cscdv\" (UID: \"5c870945-eecc-4954-a91b-d02cef8f98e2\") " pod="openshift-multus/network-metrics-daemon-cscdv" Jan 27 08:54:17 crc kubenswrapper[4985]: E0127 08:54:17.424888 4985 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 08:54:17 crc kubenswrapper[4985]: E0127 08:54:17.425049 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5c870945-eecc-4954-a91b-d02cef8f98e2-metrics-certs podName:5c870945-eecc-4954-a91b-d02cef8f98e2 nodeName:}" failed. No retries permitted until 2026-01-27 08:54:17.925014996 +0000 UTC m=+42.216109877 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5c870945-eecc-4954-a91b-d02cef8f98e2-metrics-certs") pod "network-metrics-daemon-cscdv" (UID: "5c870945-eecc-4954-a91b-d02cef8f98e2") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 08:54:17 crc kubenswrapper[4985]: I0127 08:54:17.435445 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s74hq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad423d26-ea00-4a86-8eed-bba6433ce382\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://031493bfd9eba63a4627b6a0ec45bc556e8a6cae213a84f7b158e2bede2da5a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\
\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hg5xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42a6371f6e7f2b3811af6ec717f15eff6a85c8c39caf011e1173a4fcaf20f29a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hg5xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-s74hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-01-27T08:54:17Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:17 crc kubenswrapper[4985]: I0127 08:54:17.448207 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k24x6\" (UniqueName: \"kubernetes.io/projected/5c870945-eecc-4954-a91b-d02cef8f98e2-kube-api-access-k24x6\") pod \"network-metrics-daemon-cscdv\" (UID: \"5c870945-eecc-4954-a91b-d02cef8f98e2\") " pod="openshift-multus/network-metrics-daemon-cscdv" Jan 27 08:54:17 crc kubenswrapper[4985]: I0127 08:54:17.469759 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7e08bcf-6937-4593-9736-170df821dd88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1dab42ffc04840ad2205b50c568ece39850188ccd9f97d8186c7f0b86e06805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7f7c69cc8bc134cfbf3552f10eb7abd8b5df115fef6ce86a2658259a9635bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0450f271c01ac528419de0165dd9b88b8d5913062a7cd5affce7050842f91a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/
etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b33bda4f575f8f14a3439f00a39eda2ee61f7b38500f5372c17d014df1098535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c502ce0c2374fb1b79cd8dc7e7c9eba4a67f998fee012827524d775eccbe4de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\
\":\\\"cri-o://d423f866a41d6c8a926fbabe4f0a69519e10a9448502c6363b35cb550ba904d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d423f866a41d6c8a926fbabe4f0a69519e10a9448502c6363b35cb550ba904d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://917f905b67f494d8d44f5a00e53ae72cb9c2b307f7e7da17af7fd20db4ed8704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://917f905b67f494d8d44f5a00e53ae72cb9c2b307f7e7da17af7fd20db4ed8704\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8692ba1b6e917a57164d549dba0fc26ad49b14b9bc8a39772daddfcd5f70a008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb6
8e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8692ba1b6e917a57164d549dba0fc26ad49b14b9bc8a39772daddfcd5f70a008\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:17Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:17 crc kubenswrapper[4985]: I0127 08:54:17.472583 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:17 crc kubenswrapper[4985]: I0127 08:54:17.472640 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:17 crc kubenswrapper[4985]: I0127 08:54:17.472653 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:17 crc kubenswrapper[4985]: I0127 08:54:17.472672 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:17 crc kubenswrapper[4985]: 
I0127 08:54:17.472687 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:17Z","lastTransitionTime":"2026-01-27T08:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:17 crc kubenswrapper[4985]: I0127 08:54:17.487130 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5z8px" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7997cb84-9997-4cf4-8794-2eb145a5c324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://520da40d4496e0fd47e4e091e9eba349e1bd3bf2f97898f6aac53a1b1b8925f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703
f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkwj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5z8px\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:17Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:17 crc kubenswrapper[4985]: I0127 08:54:17.503359 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cqdrf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ddda14a-730e-4c1f-afea-07c95221ba04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c411afefb82d30542592e4959d2495036c0408459ba2a16aff75223ad0d29287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-258cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cqdrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:17Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:17 crc kubenswrapper[4985]: I0127 08:54:17.518474 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a4e626ecb5f5c1f869a0271e66541b76d4ab763322e2d626c4d86af64bccad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:17Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:17 crc kubenswrapper[4985]: I0127 08:54:17.538651 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:17Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:17 crc kubenswrapper[4985]: I0127 08:54:17.551046 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cscdv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c870945-eecc-4954-a91b-d02cef8f98e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k24x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k24x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cscdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:17Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:17 crc 
kubenswrapper[4985]: I0127 08:54:17.570244 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9155a9e-2bdf-4f6f-baa7-0d7c6baac37d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f776820a00f7ca6b6867a188ee633debccb2bc22aaff8c659e452c2667933e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba179c4dde0de7
6f4a8dbfb8ceb98d2e0dbead5e955bd71171914fd913894412\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dd58dd369dc5654aa8ab2d34c77a69607b51e97ae36947b88caf6a2de7c5fbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd67389a909eaadfb147d17b98a2147bb96aea606ba994568389cb5be9ee6f93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://b491f71d942363b5d2bab13a5219b0eb9b7f7617c0978bfe012946aa241a13c1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"message\\\":\\\" 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 08:53:55.866201 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 08:53:55.866247 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 08:53:55.866252 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 08:53:55.866257 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0127 08:53:55.866955 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-231041040/tls.crt::/tmp/serving-cert-231041040/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769504019\\\\\\\\\\\\\\\" (2026-01-27 08:53:39 +0000 UTC to 2026-02-26 08:53:40 +0000 UTC (now=2026-01-27 08:53:55.866923826 +0000 UTC))\\\\\\\"\\\\nI0127 08:53:55.867136 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769504030\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769504030\\\\\\\\\\\\\\\" (2026-01-27 07:53:50 +0000 UTC to 2027-01-27 07:53:50 +0000 UTC (now=2026-01-27 08:53:55.867111136 +0000 UTC))\\\\\\\"\\\\nI0127 08:53:55.867161 1 
secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 08:53:55.867183 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0127 08:53:55.867215 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-231041040/tls.crt::/tmp/serving-cert-231041040/tls.key\\\\\\\"\\\\nI0127 08:53:55.867343 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0127 08:53:55.868835 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b09da7bae5a0f15ab2c5463373193603d1412170828b99490007cee7b067df63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc3
5825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:17Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:17 crc kubenswrapper[4985]: I0127 08:54:17.575302 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:17 crc kubenswrapper[4985]: I0127 08:54:17.575378 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:17 crc kubenswrapper[4985]: I0127 08:54:17.575393 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:17 crc kubenswrapper[4985]: I0127 08:54:17.575413 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:17 crc kubenswrapper[4985]: I0127 
08:54:17.575427 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:17Z","lastTransitionTime":"2026-01-27T08:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:17 crc kubenswrapper[4985]: I0127 08:54:17.590207 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:17Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:17 crc kubenswrapper[4985]: I0127 08:54:17.613630 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://740b7fd6437a51492236c929096639016d34d684d2742404a428355b3d9ee51e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e53f9d50e5099f9c55a0bf8a63c31bf7687c7b9df334251189af1c4c02df5dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b176f2daca0d26fc9e3d30a8547b5526c90221ce2932c01c876988e8606b4dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dbb8096184a88c573060b14b7b5cac9a5abdc5af97ad7efdc50ba216e77522c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f164f612d1435a7f2a6677cb810bc836a44faa77952909e4290e31806ae631ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bfc598056438a398f1ea20e047af950aa776a7e43bdb694728c1c138dff6440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1516da9c2886f2df2975067fa6c3d9406f6ee8cf3adb921b844d3c0a6b3cc40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1516da9c2886f2df2975067fa6c3d9406f6ee8cf3adb921b844d3c0a6b3cc40\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T08:54:13Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 08:54:13.671481 6426 
handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 08:54:13.671555 6426 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 08:54:13.671576 6426 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 08:54:13.671583 6426 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 08:54:13.671598 6426 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 08:54:13.671610 6426 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 08:54:13.671623 6426 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 08:54:13.671622 6426 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 08:54:13.671695 6426 factory.go:656] Stopping watch factory\\\\nI0127 08:54:13.671733 6426 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 08:54:13.671748 6426 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 08:54:13.671762 6426 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 08:54:13.671770 6426 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 08:54:13.671779 6426 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 08:54:13.671787 6426 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-kqdf4_openshift-ovn-kubernetes(c6239c91-d93d-4db8-ac4b-d44ddbc7c100)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a1f1e90a882fa697c344cb87db77d2f0b02b33920f6555ca549ad51b1fb5c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c1c716f751dd0e139554e4bef4fc33694f4afa5482605f651326882af8d99c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c1c716f751dd0e139
554e4bef4fc33694f4afa5482605f651326882af8d99c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kqdf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:17Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:17 crc kubenswrapper[4985]: I0127 08:54:17.677857 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:17 crc kubenswrapper[4985]: I0127 08:54:17.677897 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:17 crc kubenswrapper[4985]: I0127 08:54:17.677906 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:17 crc kubenswrapper[4985]: I0127 08:54:17.677926 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:17 crc kubenswrapper[4985]: I0127 08:54:17.677936 4985 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:17Z","lastTransitionTime":"2026-01-27T08:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:17 crc kubenswrapper[4985]: I0127 08:54:17.781161 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:17 crc kubenswrapper[4985]: I0127 08:54:17.781218 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:17 crc kubenswrapper[4985]: I0127 08:54:17.781231 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:17 crc kubenswrapper[4985]: I0127 08:54:17.781251 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:17 crc kubenswrapper[4985]: I0127 08:54:17.781264 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:17Z","lastTransitionTime":"2026-01-27T08:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:17 crc kubenswrapper[4985]: I0127 08:54:17.884433 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:17 crc kubenswrapper[4985]: I0127 08:54:17.884878 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:17 crc kubenswrapper[4985]: I0127 08:54:17.884978 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:17 crc kubenswrapper[4985]: I0127 08:54:17.885092 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:17 crc kubenswrapper[4985]: I0127 08:54:17.885193 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:17Z","lastTransitionTime":"2026-01-27T08:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:17 crc kubenswrapper[4985]: I0127 08:54:17.931307 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5c870945-eecc-4954-a91b-d02cef8f98e2-metrics-certs\") pod \"network-metrics-daemon-cscdv\" (UID: \"5c870945-eecc-4954-a91b-d02cef8f98e2\") " pod="openshift-multus/network-metrics-daemon-cscdv" Jan 27 08:54:17 crc kubenswrapper[4985]: E0127 08:54:17.931655 4985 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 08:54:17 crc kubenswrapper[4985]: E0127 08:54:17.931893 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5c870945-eecc-4954-a91b-d02cef8f98e2-metrics-certs podName:5c870945-eecc-4954-a91b-d02cef8f98e2 nodeName:}" failed. No retries permitted until 2026-01-27 08:54:18.931867657 +0000 UTC m=+43.222962678 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5c870945-eecc-4954-a91b-d02cef8f98e2-metrics-certs") pod "network-metrics-daemon-cscdv" (UID: "5c870945-eecc-4954-a91b-d02cef8f98e2") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 08:54:17 crc kubenswrapper[4985]: I0127 08:54:17.988416 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:17 crc kubenswrapper[4985]: I0127 08:54:17.989131 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:17 crc kubenswrapper[4985]: I0127 08:54:17.989194 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:17 crc kubenswrapper[4985]: I0127 08:54:17.989283 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:17 crc kubenswrapper[4985]: I0127 08:54:17.989372 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:17Z","lastTransitionTime":"2026-01-27T08:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:18 crc kubenswrapper[4985]: I0127 08:54:18.092484 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:18 crc kubenswrapper[4985]: I0127 08:54:18.092569 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:18 crc kubenswrapper[4985]: I0127 08:54:18.092586 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:18 crc kubenswrapper[4985]: I0127 08:54:18.092608 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:18 crc kubenswrapper[4985]: I0127 08:54:18.092622 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:18Z","lastTransitionTime":"2026-01-27T08:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:18 crc kubenswrapper[4985]: I0127 08:54:18.195804 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:18 crc kubenswrapper[4985]: I0127 08:54:18.195865 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:18 crc kubenswrapper[4985]: I0127 08:54:18.195877 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:18 crc kubenswrapper[4985]: I0127 08:54:18.195898 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:18 crc kubenswrapper[4985]: I0127 08:54:18.195912 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:18Z","lastTransitionTime":"2026-01-27T08:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:18 crc kubenswrapper[4985]: I0127 08:54:18.298868 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:18 crc kubenswrapper[4985]: I0127 08:54:18.298930 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:18 crc kubenswrapper[4985]: I0127 08:54:18.298947 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:18 crc kubenswrapper[4985]: I0127 08:54:18.298970 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:18 crc kubenswrapper[4985]: I0127 08:54:18.299016 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:18Z","lastTransitionTime":"2026-01-27T08:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:18 crc kubenswrapper[4985]: I0127 08:54:18.401265 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:18 crc kubenswrapper[4985]: I0127 08:54:18.401315 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:18 crc kubenswrapper[4985]: I0127 08:54:18.401324 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:18 crc kubenswrapper[4985]: I0127 08:54:18.401342 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:18 crc kubenswrapper[4985]: I0127 08:54:18.401352 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:18Z","lastTransitionTime":"2026-01-27T08:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:18 crc kubenswrapper[4985]: I0127 08:54:18.419904 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 03:00:51.295904554 +0000 UTC Jan 27 08:54:18 crc kubenswrapper[4985]: I0127 08:54:18.451883 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 08:54:18 crc kubenswrapper[4985]: I0127 08:54:18.451982 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 08:54:18 crc kubenswrapper[4985]: E0127 08:54:18.452065 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 08:54:18 crc kubenswrapper[4985]: I0127 08:54:18.452127 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 08:54:18 crc kubenswrapper[4985]: E0127 08:54:18.452195 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 08:54:18 crc kubenswrapper[4985]: I0127 08:54:18.452003 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cscdv" Jan 27 08:54:18 crc kubenswrapper[4985]: E0127 08:54:18.452357 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 08:54:18 crc kubenswrapper[4985]: E0127 08:54:18.452439 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cscdv" podUID="5c870945-eecc-4954-a91b-d02cef8f98e2" Jan 27 08:54:18 crc kubenswrapper[4985]: I0127 08:54:18.504476 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:18 crc kubenswrapper[4985]: I0127 08:54:18.504545 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:18 crc kubenswrapper[4985]: I0127 08:54:18.504558 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:18 crc kubenswrapper[4985]: I0127 08:54:18.504577 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:18 crc kubenswrapper[4985]: I0127 08:54:18.504590 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:18Z","lastTransitionTime":"2026-01-27T08:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:18 crc kubenswrapper[4985]: I0127 08:54:18.607327 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:18 crc kubenswrapper[4985]: I0127 08:54:18.607391 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:18 crc kubenswrapper[4985]: I0127 08:54:18.607404 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:18 crc kubenswrapper[4985]: I0127 08:54:18.607428 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:18 crc kubenswrapper[4985]: I0127 08:54:18.607443 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:18Z","lastTransitionTime":"2026-01-27T08:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:18 crc kubenswrapper[4985]: I0127 08:54:18.710727 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:18 crc kubenswrapper[4985]: I0127 08:54:18.710786 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:18 crc kubenswrapper[4985]: I0127 08:54:18.710798 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:18 crc kubenswrapper[4985]: I0127 08:54:18.710817 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:18 crc kubenswrapper[4985]: I0127 08:54:18.710828 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:18Z","lastTransitionTime":"2026-01-27T08:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:18 crc kubenswrapper[4985]: I0127 08:54:18.812902 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:18 crc kubenswrapper[4985]: I0127 08:54:18.812970 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:18 crc kubenswrapper[4985]: I0127 08:54:18.813029 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:18 crc kubenswrapper[4985]: I0127 08:54:18.813057 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:18 crc kubenswrapper[4985]: I0127 08:54:18.813075 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:18Z","lastTransitionTime":"2026-01-27T08:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:18 crc kubenswrapper[4985]: I0127 08:54:18.917156 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:18 crc kubenswrapper[4985]: I0127 08:54:18.917229 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:18 crc kubenswrapper[4985]: I0127 08:54:18.917253 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:18 crc kubenswrapper[4985]: I0127 08:54:18.917284 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:18 crc kubenswrapper[4985]: I0127 08:54:18.917308 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:18Z","lastTransitionTime":"2026-01-27T08:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:18 crc kubenswrapper[4985]: I0127 08:54:18.944399 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5c870945-eecc-4954-a91b-d02cef8f98e2-metrics-certs\") pod \"network-metrics-daemon-cscdv\" (UID: \"5c870945-eecc-4954-a91b-d02cef8f98e2\") " pod="openshift-multus/network-metrics-daemon-cscdv" Jan 27 08:54:18 crc kubenswrapper[4985]: E0127 08:54:18.944720 4985 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 08:54:18 crc kubenswrapper[4985]: E0127 08:54:18.944864 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5c870945-eecc-4954-a91b-d02cef8f98e2-metrics-certs podName:5c870945-eecc-4954-a91b-d02cef8f98e2 nodeName:}" failed. No retries permitted until 2026-01-27 08:54:20.944831295 +0000 UTC m=+45.235926166 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5c870945-eecc-4954-a91b-d02cef8f98e2-metrics-certs") pod "network-metrics-daemon-cscdv" (UID: "5c870945-eecc-4954-a91b-d02cef8f98e2") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 08:54:19 crc kubenswrapper[4985]: I0127 08:54:19.020287 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:19 crc kubenswrapper[4985]: I0127 08:54:19.020360 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:19 crc kubenswrapper[4985]: I0127 08:54:19.020377 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:19 crc kubenswrapper[4985]: I0127 08:54:19.020399 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:19 crc kubenswrapper[4985]: I0127 08:54:19.020416 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:19Z","lastTransitionTime":"2026-01-27T08:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:19 crc kubenswrapper[4985]: I0127 08:54:19.123911 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:19 crc kubenswrapper[4985]: I0127 08:54:19.123972 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:19 crc kubenswrapper[4985]: I0127 08:54:19.123982 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:19 crc kubenswrapper[4985]: I0127 08:54:19.124000 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:19 crc kubenswrapper[4985]: I0127 08:54:19.124012 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:19Z","lastTransitionTime":"2026-01-27T08:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:19 crc kubenswrapper[4985]: I0127 08:54:19.226677 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:19 crc kubenswrapper[4985]: I0127 08:54:19.226750 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:19 crc kubenswrapper[4985]: I0127 08:54:19.226762 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:19 crc kubenswrapper[4985]: I0127 08:54:19.226781 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:19 crc kubenswrapper[4985]: I0127 08:54:19.226795 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:19Z","lastTransitionTime":"2026-01-27T08:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:19 crc kubenswrapper[4985]: I0127 08:54:19.328948 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:19 crc kubenswrapper[4985]: I0127 08:54:19.328989 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:19 crc kubenswrapper[4985]: I0127 08:54:19.328998 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:19 crc kubenswrapper[4985]: I0127 08:54:19.329013 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:19 crc kubenswrapper[4985]: I0127 08:54:19.329023 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:19Z","lastTransitionTime":"2026-01-27T08:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:19 crc kubenswrapper[4985]: I0127 08:54:19.420775 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 15:49:10.498361492 +0000 UTC Jan 27 08:54:19 crc kubenswrapper[4985]: I0127 08:54:19.431213 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:19 crc kubenswrapper[4985]: I0127 08:54:19.431270 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:19 crc kubenswrapper[4985]: I0127 08:54:19.431282 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:19 crc kubenswrapper[4985]: I0127 08:54:19.431303 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:19 crc kubenswrapper[4985]: I0127 08:54:19.431346 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:19Z","lastTransitionTime":"2026-01-27T08:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:19 crc kubenswrapper[4985]: I0127 08:54:19.534673 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:19 crc kubenswrapper[4985]: I0127 08:54:19.534727 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:19 crc kubenswrapper[4985]: I0127 08:54:19.534742 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:19 crc kubenswrapper[4985]: I0127 08:54:19.534765 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:19 crc kubenswrapper[4985]: I0127 08:54:19.534780 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:19Z","lastTransitionTime":"2026-01-27T08:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:19 crc kubenswrapper[4985]: I0127 08:54:19.638393 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:19 crc kubenswrapper[4985]: I0127 08:54:19.638910 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:19 crc kubenswrapper[4985]: I0127 08:54:19.639009 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:19 crc kubenswrapper[4985]: I0127 08:54:19.639109 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:19 crc kubenswrapper[4985]: I0127 08:54:19.639301 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:19Z","lastTransitionTime":"2026-01-27T08:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:19 crc kubenswrapper[4985]: I0127 08:54:19.742726 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:19 crc kubenswrapper[4985]: I0127 08:54:19.742809 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:19 crc kubenswrapper[4985]: I0127 08:54:19.742837 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:19 crc kubenswrapper[4985]: I0127 08:54:19.742867 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:19 crc kubenswrapper[4985]: I0127 08:54:19.742886 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:19Z","lastTransitionTime":"2026-01-27T08:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:19 crc kubenswrapper[4985]: I0127 08:54:19.846583 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:19 crc kubenswrapper[4985]: I0127 08:54:19.846648 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:19 crc kubenswrapper[4985]: I0127 08:54:19.846661 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:19 crc kubenswrapper[4985]: I0127 08:54:19.846682 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:19 crc kubenswrapper[4985]: I0127 08:54:19.846695 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:19Z","lastTransitionTime":"2026-01-27T08:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:19 crc kubenswrapper[4985]: I0127 08:54:19.951771 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:19 crc kubenswrapper[4985]: I0127 08:54:19.951859 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:19 crc kubenswrapper[4985]: I0127 08:54:19.951876 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:19 crc kubenswrapper[4985]: I0127 08:54:19.951905 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:19 crc kubenswrapper[4985]: I0127 08:54:19.951930 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:19Z","lastTransitionTime":"2026-01-27T08:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:20 crc kubenswrapper[4985]: I0127 08:54:20.055145 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:20 crc kubenswrapper[4985]: I0127 08:54:20.055201 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:20 crc kubenswrapper[4985]: I0127 08:54:20.055216 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:20 crc kubenswrapper[4985]: I0127 08:54:20.055242 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:20 crc kubenswrapper[4985]: I0127 08:54:20.055257 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:20Z","lastTransitionTime":"2026-01-27T08:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:20 crc kubenswrapper[4985]: I0127 08:54:20.158308 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:20 crc kubenswrapper[4985]: I0127 08:54:20.158361 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:20 crc kubenswrapper[4985]: I0127 08:54:20.158371 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:20 crc kubenswrapper[4985]: I0127 08:54:20.158387 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:20 crc kubenswrapper[4985]: I0127 08:54:20.158400 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:20Z","lastTransitionTime":"2026-01-27T08:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:20 crc kubenswrapper[4985]: I0127 08:54:20.262158 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:20 crc kubenswrapper[4985]: I0127 08:54:20.262234 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:20 crc kubenswrapper[4985]: I0127 08:54:20.262252 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:20 crc kubenswrapper[4985]: I0127 08:54:20.262282 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:20 crc kubenswrapper[4985]: I0127 08:54:20.262359 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:20Z","lastTransitionTime":"2026-01-27T08:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:20 crc kubenswrapper[4985]: I0127 08:54:20.365100 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:20 crc kubenswrapper[4985]: I0127 08:54:20.365147 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:20 crc kubenswrapper[4985]: I0127 08:54:20.365155 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:20 crc kubenswrapper[4985]: I0127 08:54:20.365172 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:20 crc kubenswrapper[4985]: I0127 08:54:20.365183 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:20Z","lastTransitionTime":"2026-01-27T08:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:20 crc kubenswrapper[4985]: I0127 08:54:20.421652 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 09:18:10.894297409 +0000 UTC Jan 27 08:54:20 crc kubenswrapper[4985]: I0127 08:54:20.451061 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 08:54:20 crc kubenswrapper[4985]: I0127 08:54:20.451096 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cscdv" Jan 27 08:54:20 crc kubenswrapper[4985]: I0127 08:54:20.451056 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 08:54:20 crc kubenswrapper[4985]: I0127 08:54:20.451113 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 08:54:20 crc kubenswrapper[4985]: E0127 08:54:20.451232 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 08:54:20 crc kubenswrapper[4985]: E0127 08:54:20.451406 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 08:54:20 crc kubenswrapper[4985]: E0127 08:54:20.451580 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cscdv" podUID="5c870945-eecc-4954-a91b-d02cef8f98e2" Jan 27 08:54:20 crc kubenswrapper[4985]: E0127 08:54:20.451650 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 08:54:20 crc kubenswrapper[4985]: I0127 08:54:20.467656 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:20 crc kubenswrapper[4985]: I0127 08:54:20.467704 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:20 crc kubenswrapper[4985]: I0127 08:54:20.467714 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:20 crc kubenswrapper[4985]: I0127 08:54:20.467733 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:20 crc kubenswrapper[4985]: I0127 08:54:20.467744 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:20Z","lastTransitionTime":"2026-01-27T08:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:20 crc kubenswrapper[4985]: I0127 08:54:20.571259 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:20 crc kubenswrapper[4985]: I0127 08:54:20.571333 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:20 crc kubenswrapper[4985]: I0127 08:54:20.571353 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:20 crc kubenswrapper[4985]: I0127 08:54:20.571380 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:20 crc kubenswrapper[4985]: I0127 08:54:20.571434 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:20Z","lastTransitionTime":"2026-01-27T08:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:20 crc kubenswrapper[4985]: I0127 08:54:20.673905 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:20 crc kubenswrapper[4985]: I0127 08:54:20.673965 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:20 crc kubenswrapper[4985]: I0127 08:54:20.673983 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:20 crc kubenswrapper[4985]: I0127 08:54:20.674010 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:20 crc kubenswrapper[4985]: I0127 08:54:20.674031 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:20Z","lastTransitionTime":"2026-01-27T08:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:20 crc kubenswrapper[4985]: I0127 08:54:20.777999 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:20 crc kubenswrapper[4985]: I0127 08:54:20.778114 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:20 crc kubenswrapper[4985]: I0127 08:54:20.778129 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:20 crc kubenswrapper[4985]: I0127 08:54:20.778158 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:20 crc kubenswrapper[4985]: I0127 08:54:20.778185 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:20Z","lastTransitionTime":"2026-01-27T08:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:20 crc kubenswrapper[4985]: I0127 08:54:20.881392 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:20 crc kubenswrapper[4985]: I0127 08:54:20.881462 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:20 crc kubenswrapper[4985]: I0127 08:54:20.881480 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:20 crc kubenswrapper[4985]: I0127 08:54:20.881505 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:20 crc kubenswrapper[4985]: I0127 08:54:20.881562 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:20Z","lastTransitionTime":"2026-01-27T08:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:20 crc kubenswrapper[4985]: I0127 08:54:20.968597 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5c870945-eecc-4954-a91b-d02cef8f98e2-metrics-certs\") pod \"network-metrics-daemon-cscdv\" (UID: \"5c870945-eecc-4954-a91b-d02cef8f98e2\") " pod="openshift-multus/network-metrics-daemon-cscdv" Jan 27 08:54:20 crc kubenswrapper[4985]: E0127 08:54:20.968832 4985 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 08:54:20 crc kubenswrapper[4985]: E0127 08:54:20.968916 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5c870945-eecc-4954-a91b-d02cef8f98e2-metrics-certs podName:5c870945-eecc-4954-a91b-d02cef8f98e2 nodeName:}" failed. No retries permitted until 2026-01-27 08:54:24.968888043 +0000 UTC m=+49.259982894 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5c870945-eecc-4954-a91b-d02cef8f98e2-metrics-certs") pod "network-metrics-daemon-cscdv" (UID: "5c870945-eecc-4954-a91b-d02cef8f98e2") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 08:54:20 crc kubenswrapper[4985]: I0127 08:54:20.983790 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:20 crc kubenswrapper[4985]: I0127 08:54:20.983832 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:20 crc kubenswrapper[4985]: I0127 08:54:20.983840 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:20 crc kubenswrapper[4985]: I0127 08:54:20.983856 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:20 crc kubenswrapper[4985]: I0127 08:54:20.983868 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:20Z","lastTransitionTime":"2026-01-27T08:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:21 crc kubenswrapper[4985]: I0127 08:54:21.087435 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:21 crc kubenswrapper[4985]: I0127 08:54:21.087498 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:21 crc kubenswrapper[4985]: I0127 08:54:21.087533 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:21 crc kubenswrapper[4985]: I0127 08:54:21.087556 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:21 crc kubenswrapper[4985]: I0127 08:54:21.087621 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:21Z","lastTransitionTime":"2026-01-27T08:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:21 crc kubenswrapper[4985]: I0127 08:54:21.191117 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:21 crc kubenswrapper[4985]: I0127 08:54:21.191174 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:21 crc kubenswrapper[4985]: I0127 08:54:21.191186 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:21 crc kubenswrapper[4985]: I0127 08:54:21.191205 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:21 crc kubenswrapper[4985]: I0127 08:54:21.191218 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:21Z","lastTransitionTime":"2026-01-27T08:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:21 crc kubenswrapper[4985]: I0127 08:54:21.294529 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:21 crc kubenswrapper[4985]: I0127 08:54:21.294578 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:21 crc kubenswrapper[4985]: I0127 08:54:21.294588 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:21 crc kubenswrapper[4985]: I0127 08:54:21.294606 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:21 crc kubenswrapper[4985]: I0127 08:54:21.294617 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:21Z","lastTransitionTime":"2026-01-27T08:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:21 crc kubenswrapper[4985]: I0127 08:54:21.397239 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:21 crc kubenswrapper[4985]: I0127 08:54:21.397295 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:21 crc kubenswrapper[4985]: I0127 08:54:21.397306 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:21 crc kubenswrapper[4985]: I0127 08:54:21.397323 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:21 crc kubenswrapper[4985]: I0127 08:54:21.397334 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:21Z","lastTransitionTime":"2026-01-27T08:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:21 crc kubenswrapper[4985]: I0127 08:54:21.422804 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 00:23:32.66235507 +0000 UTC Jan 27 08:54:21 crc kubenswrapper[4985]: I0127 08:54:21.500952 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:21 crc kubenswrapper[4985]: I0127 08:54:21.501012 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:21 crc kubenswrapper[4985]: I0127 08:54:21.501024 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:21 crc kubenswrapper[4985]: I0127 08:54:21.501047 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:21 crc kubenswrapper[4985]: I0127 08:54:21.501061 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:21Z","lastTransitionTime":"2026-01-27T08:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:21 crc kubenswrapper[4985]: I0127 08:54:21.603361 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:21 crc kubenswrapper[4985]: I0127 08:54:21.603433 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:21 crc kubenswrapper[4985]: I0127 08:54:21.603447 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:21 crc kubenswrapper[4985]: I0127 08:54:21.603475 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:21 crc kubenswrapper[4985]: I0127 08:54:21.603491 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:21Z","lastTransitionTime":"2026-01-27T08:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:21 crc kubenswrapper[4985]: I0127 08:54:21.706638 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:21 crc kubenswrapper[4985]: I0127 08:54:21.706698 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:21 crc kubenswrapper[4985]: I0127 08:54:21.706712 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:21 crc kubenswrapper[4985]: I0127 08:54:21.706733 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:21 crc kubenswrapper[4985]: I0127 08:54:21.706747 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:21Z","lastTransitionTime":"2026-01-27T08:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:21 crc kubenswrapper[4985]: I0127 08:54:21.809836 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:21 crc kubenswrapper[4985]: I0127 08:54:21.809930 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:21 crc kubenswrapper[4985]: I0127 08:54:21.809949 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:21 crc kubenswrapper[4985]: I0127 08:54:21.809981 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:21 crc kubenswrapper[4985]: I0127 08:54:21.810001 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:21Z","lastTransitionTime":"2026-01-27T08:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:21 crc kubenswrapper[4985]: I0127 08:54:21.914061 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:21 crc kubenswrapper[4985]: I0127 08:54:21.914123 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:21 crc kubenswrapper[4985]: I0127 08:54:21.914135 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:21 crc kubenswrapper[4985]: I0127 08:54:21.914156 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:21 crc kubenswrapper[4985]: I0127 08:54:21.914169 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:21Z","lastTransitionTime":"2026-01-27T08:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:22 crc kubenswrapper[4985]: I0127 08:54:22.018014 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:22 crc kubenswrapper[4985]: I0127 08:54:22.018106 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:22 crc kubenswrapper[4985]: I0127 08:54:22.018122 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:22 crc kubenswrapper[4985]: I0127 08:54:22.018179 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:22 crc kubenswrapper[4985]: I0127 08:54:22.018197 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:22Z","lastTransitionTime":"2026-01-27T08:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:22 crc kubenswrapper[4985]: I0127 08:54:22.121893 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:22 crc kubenswrapper[4985]: I0127 08:54:22.121968 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:22 crc kubenswrapper[4985]: I0127 08:54:22.121987 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:22 crc kubenswrapper[4985]: I0127 08:54:22.122015 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:22 crc kubenswrapper[4985]: I0127 08:54:22.122035 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:22Z","lastTransitionTime":"2026-01-27T08:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:22 crc kubenswrapper[4985]: I0127 08:54:22.225109 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:22 crc kubenswrapper[4985]: I0127 08:54:22.225157 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:22 crc kubenswrapper[4985]: I0127 08:54:22.225166 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:22 crc kubenswrapper[4985]: I0127 08:54:22.225181 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:22 crc kubenswrapper[4985]: I0127 08:54:22.225190 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:22Z","lastTransitionTime":"2026-01-27T08:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:22 crc kubenswrapper[4985]: I0127 08:54:22.328285 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:22 crc kubenswrapper[4985]: I0127 08:54:22.328373 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:22 crc kubenswrapper[4985]: I0127 08:54:22.328398 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:22 crc kubenswrapper[4985]: I0127 08:54:22.328427 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:22 crc kubenswrapper[4985]: I0127 08:54:22.328447 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:22Z","lastTransitionTime":"2026-01-27T08:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:22 crc kubenswrapper[4985]: I0127 08:54:22.423134 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 20:58:03.903594371 +0000 UTC Jan 27 08:54:22 crc kubenswrapper[4985]: I0127 08:54:22.431854 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:22 crc kubenswrapper[4985]: I0127 08:54:22.431909 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:22 crc kubenswrapper[4985]: I0127 08:54:22.431923 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:22 crc kubenswrapper[4985]: I0127 08:54:22.431944 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:22 crc kubenswrapper[4985]: I0127 08:54:22.431958 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:22Z","lastTransitionTime":"2026-01-27T08:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:22 crc kubenswrapper[4985]: I0127 08:54:22.451560 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cscdv" Jan 27 08:54:22 crc kubenswrapper[4985]: I0127 08:54:22.451674 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 08:54:22 crc kubenswrapper[4985]: I0127 08:54:22.451568 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 08:54:22 crc kubenswrapper[4985]: E0127 08:54:22.451762 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cscdv" podUID="5c870945-eecc-4954-a91b-d02cef8f98e2" Jan 27 08:54:22 crc kubenswrapper[4985]: I0127 08:54:22.451677 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 08:54:22 crc kubenswrapper[4985]: E0127 08:54:22.451865 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 08:54:22 crc kubenswrapper[4985]: E0127 08:54:22.451958 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 08:54:22 crc kubenswrapper[4985]: E0127 08:54:22.452125 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 08:54:22 crc kubenswrapper[4985]: I0127 08:54:22.535399 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:22 crc kubenswrapper[4985]: I0127 08:54:22.535465 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:22 crc kubenswrapper[4985]: I0127 08:54:22.535479 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:22 crc kubenswrapper[4985]: I0127 08:54:22.535502 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:22 crc kubenswrapper[4985]: I0127 08:54:22.535549 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:22Z","lastTransitionTime":"2026-01-27T08:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:22 crc kubenswrapper[4985]: I0127 08:54:22.638387 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:22 crc kubenswrapper[4985]: I0127 08:54:22.638439 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:22 crc kubenswrapper[4985]: I0127 08:54:22.638452 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:22 crc kubenswrapper[4985]: I0127 08:54:22.638472 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:22 crc kubenswrapper[4985]: I0127 08:54:22.638486 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:22Z","lastTransitionTime":"2026-01-27T08:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:22 crc kubenswrapper[4985]: I0127 08:54:22.741379 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:22 crc kubenswrapper[4985]: I0127 08:54:22.741454 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:22 crc kubenswrapper[4985]: I0127 08:54:22.741473 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:22 crc kubenswrapper[4985]: I0127 08:54:22.741563 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:22 crc kubenswrapper[4985]: I0127 08:54:22.741633 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:22Z","lastTransitionTime":"2026-01-27T08:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:22 crc kubenswrapper[4985]: I0127 08:54:22.843751 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:22 crc kubenswrapper[4985]: I0127 08:54:22.843812 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:22 crc kubenswrapper[4985]: I0127 08:54:22.843824 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:22 crc kubenswrapper[4985]: I0127 08:54:22.843846 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:22 crc kubenswrapper[4985]: I0127 08:54:22.843863 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:22Z","lastTransitionTime":"2026-01-27T08:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:22 crc kubenswrapper[4985]: I0127 08:54:22.848303 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:22 crc kubenswrapper[4985]: I0127 08:54:22.848346 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:22 crc kubenswrapper[4985]: I0127 08:54:22.848358 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:22 crc kubenswrapper[4985]: I0127 08:54:22.848379 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:22 crc kubenswrapper[4985]: I0127 08:54:22.848392 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:22Z","lastTransitionTime":"2026-01-27T08:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:22 crc kubenswrapper[4985]: E0127 08:54:22.860396 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"095ded87-0bbb-47a5-b76f-f5bb300a00ab\\\",\\\"systemUUID\\\":\\\"66a0621c-9cbd-4c42-8f6a-941d6ebd53fb\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:22Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:22 crc kubenswrapper[4985]: I0127 08:54:22.863823 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:22 crc kubenswrapper[4985]: I0127 08:54:22.863902 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:22 crc kubenswrapper[4985]: I0127 08:54:22.863913 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:22 crc kubenswrapper[4985]: I0127 08:54:22.863936 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:22 crc kubenswrapper[4985]: I0127 08:54:22.863947 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:22Z","lastTransitionTime":"2026-01-27T08:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:22 crc kubenswrapper[4985]: E0127 08:54:22.876120 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"095ded87-0bbb-47a5-b76f-f5bb300a00ab\\\",\\\"systemUUID\\\":\\\"66a0621c-9cbd-4c42-8f6a-941d6ebd53fb\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:22Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:22 crc kubenswrapper[4985]: I0127 08:54:22.879877 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:22 crc kubenswrapper[4985]: I0127 08:54:22.879916 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:22 crc kubenswrapper[4985]: I0127 08:54:22.879932 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:22 crc kubenswrapper[4985]: I0127 08:54:22.879955 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:22 crc kubenswrapper[4985]: I0127 08:54:22.879971 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:22Z","lastTransitionTime":"2026-01-27T08:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:22 crc kubenswrapper[4985]: E0127 08:54:22.893298 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"095ded87-0bbb-47a5-b76f-f5bb300a00ab\\\",\\\"systemUUID\\\":\\\"66a0621c-9cbd-4c42-8f6a-941d6ebd53fb\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:22Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:22 crc kubenswrapper[4985]: I0127 08:54:22.896839 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:22 crc kubenswrapper[4985]: I0127 08:54:22.896882 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:22 crc kubenswrapper[4985]: I0127 08:54:22.896890 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:22 crc kubenswrapper[4985]: I0127 08:54:22.896907 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:22 crc kubenswrapper[4985]: I0127 08:54:22.896917 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:22Z","lastTransitionTime":"2026-01-27T08:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:22 crc kubenswrapper[4985]: E0127 08:54:22.909800 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"095ded87-0bbb-47a5-b76f-f5bb300a00ab\\\",\\\"systemUUID\\\":\\\"66a0621c-9cbd-4c42-8f6a-941d6ebd53fb\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:22Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:22 crc kubenswrapper[4985]: I0127 08:54:22.913557 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:22 crc kubenswrapper[4985]: I0127 08:54:22.913618 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:22 crc kubenswrapper[4985]: I0127 08:54:22.913628 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:22 crc kubenswrapper[4985]: I0127 08:54:22.913642 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:22 crc kubenswrapper[4985]: I0127 08:54:22.913677 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:22Z","lastTransitionTime":"2026-01-27T08:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:22 crc kubenswrapper[4985]: E0127 08:54:22.930897 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"095ded87-0bbb-47a5-b76f-f5bb300a00ab\\\",\\\"systemUUID\\\":\\\"66a0621c-9cbd-4c42-8f6a-941d6ebd53fb\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:22Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:22 crc kubenswrapper[4985]: E0127 08:54:22.931075 4985 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 08:54:22 crc kubenswrapper[4985]: I0127 08:54:22.947304 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:22 crc kubenswrapper[4985]: I0127 08:54:22.947393 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:22 crc kubenswrapper[4985]: I0127 08:54:22.947411 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:22 crc kubenswrapper[4985]: I0127 08:54:22.947432 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:22 crc kubenswrapper[4985]: I0127 08:54:22.947447 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:22Z","lastTransitionTime":"2026-01-27T08:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:23 crc kubenswrapper[4985]: I0127 08:54:23.049460 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:23 crc kubenswrapper[4985]: I0127 08:54:23.049556 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:23 crc kubenswrapper[4985]: I0127 08:54:23.049573 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:23 crc kubenswrapper[4985]: I0127 08:54:23.049594 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:23 crc kubenswrapper[4985]: I0127 08:54:23.049607 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:23Z","lastTransitionTime":"2026-01-27T08:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:23 crc kubenswrapper[4985]: I0127 08:54:23.152328 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:23 crc kubenswrapper[4985]: I0127 08:54:23.152394 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:23 crc kubenswrapper[4985]: I0127 08:54:23.152407 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:23 crc kubenswrapper[4985]: I0127 08:54:23.152429 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:23 crc kubenswrapper[4985]: I0127 08:54:23.152444 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:23Z","lastTransitionTime":"2026-01-27T08:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:23 crc kubenswrapper[4985]: I0127 08:54:23.255408 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:23 crc kubenswrapper[4985]: I0127 08:54:23.255457 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:23 crc kubenswrapper[4985]: I0127 08:54:23.255467 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:23 crc kubenswrapper[4985]: I0127 08:54:23.255483 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:23 crc kubenswrapper[4985]: I0127 08:54:23.255497 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:23Z","lastTransitionTime":"2026-01-27T08:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:23 crc kubenswrapper[4985]: I0127 08:54:23.359035 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:23 crc kubenswrapper[4985]: I0127 08:54:23.359107 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:23 crc kubenswrapper[4985]: I0127 08:54:23.359124 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:23 crc kubenswrapper[4985]: I0127 08:54:23.359151 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:23 crc kubenswrapper[4985]: I0127 08:54:23.359170 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:23Z","lastTransitionTime":"2026-01-27T08:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:23 crc kubenswrapper[4985]: I0127 08:54:23.423851 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 03:37:08.99049716 +0000 UTC Jan 27 08:54:23 crc kubenswrapper[4985]: I0127 08:54:23.461743 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:23 crc kubenswrapper[4985]: I0127 08:54:23.461799 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:23 crc kubenswrapper[4985]: I0127 08:54:23.461813 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:23 crc kubenswrapper[4985]: I0127 08:54:23.461833 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:23 crc kubenswrapper[4985]: I0127 08:54:23.461847 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:23Z","lastTransitionTime":"2026-01-27T08:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:23 crc kubenswrapper[4985]: I0127 08:54:23.564762 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:23 crc kubenswrapper[4985]: I0127 08:54:23.564839 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:23 crc kubenswrapper[4985]: I0127 08:54:23.564854 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:23 crc kubenswrapper[4985]: I0127 08:54:23.564879 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:23 crc kubenswrapper[4985]: I0127 08:54:23.564896 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:23Z","lastTransitionTime":"2026-01-27T08:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:23 crc kubenswrapper[4985]: I0127 08:54:23.668126 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:23 crc kubenswrapper[4985]: I0127 08:54:23.668183 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:23 crc kubenswrapper[4985]: I0127 08:54:23.668217 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:23 crc kubenswrapper[4985]: I0127 08:54:23.668240 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:23 crc kubenswrapper[4985]: I0127 08:54:23.668255 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:23Z","lastTransitionTime":"2026-01-27T08:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:23 crc kubenswrapper[4985]: I0127 08:54:23.771351 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:23 crc kubenswrapper[4985]: I0127 08:54:23.771400 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:23 crc kubenswrapper[4985]: I0127 08:54:23.771412 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:23 crc kubenswrapper[4985]: I0127 08:54:23.771453 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:23 crc kubenswrapper[4985]: I0127 08:54:23.771465 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:23Z","lastTransitionTime":"2026-01-27T08:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:23 crc kubenswrapper[4985]: I0127 08:54:23.875022 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:23 crc kubenswrapper[4985]: I0127 08:54:23.875112 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:23 crc kubenswrapper[4985]: I0127 08:54:23.875133 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:23 crc kubenswrapper[4985]: I0127 08:54:23.875159 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:23 crc kubenswrapper[4985]: I0127 08:54:23.875181 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:23Z","lastTransitionTime":"2026-01-27T08:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:23 crc kubenswrapper[4985]: I0127 08:54:23.978977 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:23 crc kubenswrapper[4985]: I0127 08:54:23.979042 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:23 crc kubenswrapper[4985]: I0127 08:54:23.979060 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:23 crc kubenswrapper[4985]: I0127 08:54:23.979081 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:23 crc kubenswrapper[4985]: I0127 08:54:23.979097 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:23Z","lastTransitionTime":"2026-01-27T08:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:24 crc kubenswrapper[4985]: I0127 08:54:24.082283 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:24 crc kubenswrapper[4985]: I0127 08:54:24.082346 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:24 crc kubenswrapper[4985]: I0127 08:54:24.082359 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:24 crc kubenswrapper[4985]: I0127 08:54:24.082381 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:24 crc kubenswrapper[4985]: I0127 08:54:24.082396 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:24Z","lastTransitionTime":"2026-01-27T08:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:24 crc kubenswrapper[4985]: I0127 08:54:24.185186 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:24 crc kubenswrapper[4985]: I0127 08:54:24.185251 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:24 crc kubenswrapper[4985]: I0127 08:54:24.185262 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:24 crc kubenswrapper[4985]: I0127 08:54:24.185281 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:24 crc kubenswrapper[4985]: I0127 08:54:24.185298 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:24Z","lastTransitionTime":"2026-01-27T08:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:24 crc kubenswrapper[4985]: I0127 08:54:24.288562 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:24 crc kubenswrapper[4985]: I0127 08:54:24.288617 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:24 crc kubenswrapper[4985]: I0127 08:54:24.288626 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:24 crc kubenswrapper[4985]: I0127 08:54:24.288644 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:24 crc kubenswrapper[4985]: I0127 08:54:24.288655 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:24Z","lastTransitionTime":"2026-01-27T08:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:24 crc kubenswrapper[4985]: I0127 08:54:24.391332 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:24 crc kubenswrapper[4985]: I0127 08:54:24.391452 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:24 crc kubenswrapper[4985]: I0127 08:54:24.391465 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:24 crc kubenswrapper[4985]: I0127 08:54:24.391480 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:24 crc kubenswrapper[4985]: I0127 08:54:24.391501 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:24Z","lastTransitionTime":"2026-01-27T08:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:24 crc kubenswrapper[4985]: I0127 08:54:24.425031 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 19:19:35.243257071 +0000 UTC Jan 27 08:54:24 crc kubenswrapper[4985]: I0127 08:54:24.451532 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 08:54:24 crc kubenswrapper[4985]: I0127 08:54:24.451604 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 08:54:24 crc kubenswrapper[4985]: I0127 08:54:24.451660 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-cscdv" Jan 27 08:54:24 crc kubenswrapper[4985]: E0127 08:54:24.451745 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 08:54:24 crc kubenswrapper[4985]: I0127 08:54:24.451803 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 08:54:24 crc kubenswrapper[4985]: E0127 08:54:24.451944 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cscdv" podUID="5c870945-eecc-4954-a91b-d02cef8f98e2" Jan 27 08:54:24 crc kubenswrapper[4985]: E0127 08:54:24.452001 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 08:54:24 crc kubenswrapper[4985]: E0127 08:54:24.452086 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 08:54:24 crc kubenswrapper[4985]: I0127 08:54:24.494797 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:24 crc kubenswrapper[4985]: I0127 08:54:24.494850 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:24 crc kubenswrapper[4985]: I0127 08:54:24.494864 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:24 crc kubenswrapper[4985]: I0127 08:54:24.494882 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:24 crc kubenswrapper[4985]: I0127 08:54:24.494895 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:24Z","lastTransitionTime":"2026-01-27T08:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:24 crc kubenswrapper[4985]: I0127 08:54:24.597938 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:24 crc kubenswrapper[4985]: I0127 08:54:24.598007 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:24 crc kubenswrapper[4985]: I0127 08:54:24.598021 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:24 crc kubenswrapper[4985]: I0127 08:54:24.598044 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:24 crc kubenswrapper[4985]: I0127 08:54:24.598059 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:24Z","lastTransitionTime":"2026-01-27T08:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:24 crc kubenswrapper[4985]: I0127 08:54:24.700328 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:24 crc kubenswrapper[4985]: I0127 08:54:24.700390 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:24 crc kubenswrapper[4985]: I0127 08:54:24.700403 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:24 crc kubenswrapper[4985]: I0127 08:54:24.700426 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:24 crc kubenswrapper[4985]: I0127 08:54:24.700441 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:24Z","lastTransitionTime":"2026-01-27T08:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:24 crc kubenswrapper[4985]: I0127 08:54:24.803738 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:24 crc kubenswrapper[4985]: I0127 08:54:24.803785 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:24 crc kubenswrapper[4985]: I0127 08:54:24.803794 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:24 crc kubenswrapper[4985]: I0127 08:54:24.803809 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:24 crc kubenswrapper[4985]: I0127 08:54:24.803819 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:24Z","lastTransitionTime":"2026-01-27T08:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:24 crc kubenswrapper[4985]: I0127 08:54:24.906930 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:24 crc kubenswrapper[4985]: I0127 08:54:24.906980 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:24 crc kubenswrapper[4985]: I0127 08:54:24.906991 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:24 crc kubenswrapper[4985]: I0127 08:54:24.907009 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:24 crc kubenswrapper[4985]: I0127 08:54:24.907021 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:24Z","lastTransitionTime":"2026-01-27T08:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:25 crc kubenswrapper[4985]: I0127 08:54:25.009534 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:25 crc kubenswrapper[4985]: I0127 08:54:25.009603 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:25 crc kubenswrapper[4985]: I0127 08:54:25.009622 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:25 crc kubenswrapper[4985]: I0127 08:54:25.009647 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:25 crc kubenswrapper[4985]: I0127 08:54:25.009669 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:25Z","lastTransitionTime":"2026-01-27T08:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:25 crc kubenswrapper[4985]: I0127 08:54:25.016200 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5c870945-eecc-4954-a91b-d02cef8f98e2-metrics-certs\") pod \"network-metrics-daemon-cscdv\" (UID: \"5c870945-eecc-4954-a91b-d02cef8f98e2\") " pod="openshift-multus/network-metrics-daemon-cscdv" Jan 27 08:54:25 crc kubenswrapper[4985]: E0127 08:54:25.016433 4985 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 08:54:25 crc kubenswrapper[4985]: E0127 08:54:25.016548 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5c870945-eecc-4954-a91b-d02cef8f98e2-metrics-certs podName:5c870945-eecc-4954-a91b-d02cef8f98e2 nodeName:}" failed. No retries permitted until 2026-01-27 08:54:33.016487935 +0000 UTC m=+57.307582796 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5c870945-eecc-4954-a91b-d02cef8f98e2-metrics-certs") pod "network-metrics-daemon-cscdv" (UID: "5c870945-eecc-4954-a91b-d02cef8f98e2") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 08:54:25 crc kubenswrapper[4985]: I0127 08:54:25.111853 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:25 crc kubenswrapper[4985]: I0127 08:54:25.111933 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:25 crc kubenswrapper[4985]: I0127 08:54:25.111945 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:25 crc kubenswrapper[4985]: I0127 08:54:25.111964 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:25 crc kubenswrapper[4985]: I0127 08:54:25.111977 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:25Z","lastTransitionTime":"2026-01-27T08:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:25 crc kubenswrapper[4985]: I0127 08:54:25.215297 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:25 crc kubenswrapper[4985]: I0127 08:54:25.215393 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:25 crc kubenswrapper[4985]: I0127 08:54:25.215418 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:25 crc kubenswrapper[4985]: I0127 08:54:25.215463 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:25 crc kubenswrapper[4985]: I0127 08:54:25.215497 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:25Z","lastTransitionTime":"2026-01-27T08:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:25 crc kubenswrapper[4985]: I0127 08:54:25.318077 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:25 crc kubenswrapper[4985]: I0127 08:54:25.318118 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:25 crc kubenswrapper[4985]: I0127 08:54:25.318129 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:25 crc kubenswrapper[4985]: I0127 08:54:25.318151 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:25 crc kubenswrapper[4985]: I0127 08:54:25.318165 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:25Z","lastTransitionTime":"2026-01-27T08:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:25 crc kubenswrapper[4985]: I0127 08:54:25.422244 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:25 crc kubenswrapper[4985]: I0127 08:54:25.422318 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:25 crc kubenswrapper[4985]: I0127 08:54:25.422328 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:25 crc kubenswrapper[4985]: I0127 08:54:25.422350 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:25 crc kubenswrapper[4985]: I0127 08:54:25.422361 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:25Z","lastTransitionTime":"2026-01-27T08:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:25 crc kubenswrapper[4985]: I0127 08:54:25.425465 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 13:27:01.405402189 +0000 UTC Jan 27 08:54:25 crc kubenswrapper[4985]: I0127 08:54:25.524920 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:25 crc kubenswrapper[4985]: I0127 08:54:25.524979 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:25 crc kubenswrapper[4985]: I0127 08:54:25.525005 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:25 crc kubenswrapper[4985]: I0127 08:54:25.525037 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:25 crc kubenswrapper[4985]: I0127 08:54:25.525056 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:25Z","lastTransitionTime":"2026-01-27T08:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:25 crc kubenswrapper[4985]: I0127 08:54:25.628208 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:25 crc kubenswrapper[4985]: I0127 08:54:25.628255 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:25 crc kubenswrapper[4985]: I0127 08:54:25.628265 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:25 crc kubenswrapper[4985]: I0127 08:54:25.628282 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:25 crc kubenswrapper[4985]: I0127 08:54:25.628292 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:25Z","lastTransitionTime":"2026-01-27T08:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:25 crc kubenswrapper[4985]: I0127 08:54:25.730964 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:25 crc kubenswrapper[4985]: I0127 08:54:25.731003 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:25 crc kubenswrapper[4985]: I0127 08:54:25.731012 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:25 crc kubenswrapper[4985]: I0127 08:54:25.731028 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:25 crc kubenswrapper[4985]: I0127 08:54:25.731039 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:25Z","lastTransitionTime":"2026-01-27T08:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:25 crc kubenswrapper[4985]: I0127 08:54:25.833325 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:25 crc kubenswrapper[4985]: I0127 08:54:25.833376 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:25 crc kubenswrapper[4985]: I0127 08:54:25.833385 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:25 crc kubenswrapper[4985]: I0127 08:54:25.833402 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:25 crc kubenswrapper[4985]: I0127 08:54:25.833413 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:25Z","lastTransitionTime":"2026-01-27T08:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:25 crc kubenswrapper[4985]: I0127 08:54:25.936988 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:25 crc kubenswrapper[4985]: I0127 08:54:25.937042 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:25 crc kubenswrapper[4985]: I0127 08:54:25.937053 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:25 crc kubenswrapper[4985]: I0127 08:54:25.937074 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:25 crc kubenswrapper[4985]: I0127 08:54:25.937087 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:25Z","lastTransitionTime":"2026-01-27T08:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:26 crc kubenswrapper[4985]: I0127 08:54:26.040239 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:26 crc kubenswrapper[4985]: I0127 08:54:26.040292 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:26 crc kubenswrapper[4985]: I0127 08:54:26.040305 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:26 crc kubenswrapper[4985]: I0127 08:54:26.040324 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:26 crc kubenswrapper[4985]: I0127 08:54:26.040341 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:26Z","lastTransitionTime":"2026-01-27T08:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:26 crc kubenswrapper[4985]: I0127 08:54:26.143599 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:26 crc kubenswrapper[4985]: I0127 08:54:26.143643 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:26 crc kubenswrapper[4985]: I0127 08:54:26.143652 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:26 crc kubenswrapper[4985]: I0127 08:54:26.143667 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:26 crc kubenswrapper[4985]: I0127 08:54:26.143681 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:26Z","lastTransitionTime":"2026-01-27T08:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:26 crc kubenswrapper[4985]: I0127 08:54:26.246488 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:26 crc kubenswrapper[4985]: I0127 08:54:26.246592 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:26 crc kubenswrapper[4985]: I0127 08:54:26.246631 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:26 crc kubenswrapper[4985]: I0127 08:54:26.246659 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:26 crc kubenswrapper[4985]: I0127 08:54:26.246674 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:26Z","lastTransitionTime":"2026-01-27T08:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:26 crc kubenswrapper[4985]: I0127 08:54:26.351389 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:26 crc kubenswrapper[4985]: I0127 08:54:26.351475 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:26 crc kubenswrapper[4985]: I0127 08:54:26.351487 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:26 crc kubenswrapper[4985]: I0127 08:54:26.351547 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:26 crc kubenswrapper[4985]: I0127 08:54:26.351564 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:26Z","lastTransitionTime":"2026-01-27T08:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:26 crc kubenswrapper[4985]: I0127 08:54:26.425875 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 14:42:11.986634568 +0000 UTC Jan 27 08:54:26 crc kubenswrapper[4985]: I0127 08:54:26.451717 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 08:54:26 crc kubenswrapper[4985]: I0127 08:54:26.451787 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 08:54:26 crc kubenswrapper[4985]: I0127 08:54:26.451731 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-cscdv" Jan 27 08:54:26 crc kubenswrapper[4985]: I0127 08:54:26.452031 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 08:54:26 crc kubenswrapper[4985]: E0127 08:54:26.452190 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 08:54:26 crc kubenswrapper[4985]: I0127 08:54:26.452330 4985 scope.go:117] "RemoveContainer" containerID="b1516da9c2886f2df2975067fa6c3d9406f6ee8cf3adb921b844d3c0a6b3cc40" Jan 27 08:54:26 crc kubenswrapper[4985]: E0127 08:54:26.452610 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 08:54:26 crc kubenswrapper[4985]: E0127 08:54:26.452883 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 08:54:26 crc kubenswrapper[4985]: E0127 08:54:26.453019 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cscdv" podUID="5c870945-eecc-4954-a91b-d02cef8f98e2" Jan 27 08:54:26 crc kubenswrapper[4985]: I0127 08:54:26.454895 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:26 crc kubenswrapper[4985]: I0127 08:54:26.454957 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:26 crc kubenswrapper[4985]: I0127 08:54:26.454985 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:26 crc kubenswrapper[4985]: I0127 08:54:26.455018 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:26 crc kubenswrapper[4985]: I0127 08:54:26.455041 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:26Z","lastTransitionTime":"2026-01-27T08:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:26 crc kubenswrapper[4985]: I0127 08:54:26.479755 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9155a9e-2bdf-4f6f-baa7-0d7c6baac37d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f776820a00f7ca6b6867a188ee633debccb2bc22aaff8c659e452c2667933e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba179c4dde0de76f4a8dbfb8ceb98d2e0dbead5e955bd71171914fd913894412\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dd58dd369dc5654aa8ab2d34c77a69607b51e97ae36947b88caf6a2de7c5fbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd67389a909eaadfb147d17b98a2147bb96aea606ba994568389cb5be9ee6f93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b491f71d942363b5d2bab13a5219b0eb9b7f7617c0978bfe012946aa241a13c1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"message\\\":\\\" 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 08:53:55.866201 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 08:53:55.866247 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 08:53:55.866252 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 08:53:55.866257 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0127 08:53:55.866955 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-231041040/tls.crt::/tmp/serving-cert-231041040/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769504019\\\\\\\\\\\\\\\" (2026-01-27 08:53:39 +0000 UTC to 2026-02-26 08:53:40 +0000 UTC (now=2026-01-27 08:53:55.866923826 +0000 UTC))\\\\\\\"\\\\nI0127 08:53:55.867136 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769504030\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769504030\\\\\\\\\\\\\\\" (2026-01-27 07:53:50 +0000 UTC to 2027-01-27 07:53:50 +0000 UTC (now=2026-01-27 
08:53:55.867111136 +0000 UTC))\\\\\\\"\\\\nI0127 08:53:55.867161 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 08:53:55.867183 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0127 08:53:55.867215 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-231041040/tls.crt::/tmp/serving-cert-231041040/tls.key\\\\\\\"\\\\nI0127 08:53:55.867343 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0127 08:53:55.868835 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b09da7bae5a0f15ab2c5463373193603d1412170828b99490007cee7b067df63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739\\\",\\\"image\
\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:26Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:26 crc kubenswrapper[4985]: I0127 08:54:26.504639 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:26Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:26 crc kubenswrapper[4985]: I0127 08:54:26.531082 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://740b7fd6437a51492236c929096639016d34d684d2742404a428355b3d9ee51e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e53f9d50e5099f9c55a0bf8a63c31bf7687c7b9df334251189af1c4c02df5dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b176f2daca0d26fc9e3d30a8547b5526c90221ce2932c01c876988e8606b4dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dbb8096184a88c573060b14b7b5cac9a5abdc5af97ad7efdc50ba216e77522c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f164f612d1435a7f2a6677cb810bc836a44faa77952909e4290e31806ae631ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bfc598056438a398f1ea20e047af950aa776a7e43bdb694728c1c138dff6440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1516da9c2886f2df2975067fa6c3d9406f6ee8cf3adb921b844d3c0a6b3cc40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1516da9c2886f2df2975067fa6c3d9406f6ee8cf3adb921b844d3c0a6b3cc40\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T08:54:13Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 08:54:13.671481 6426 
handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 08:54:13.671555 6426 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 08:54:13.671576 6426 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 08:54:13.671583 6426 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 08:54:13.671598 6426 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 08:54:13.671610 6426 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 08:54:13.671623 6426 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 08:54:13.671622 6426 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 08:54:13.671695 6426 factory.go:656] Stopping watch factory\\\\nI0127 08:54:13.671733 6426 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 08:54:13.671748 6426 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 08:54:13.671762 6426 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 08:54:13.671770 6426 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 08:54:13.671779 6426 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 08:54:13.671787 6426 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-kqdf4_openshift-ovn-kubernetes(c6239c91-d93d-4db8-ac4b-d44ddbc7c100)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a1f1e90a882fa697c344cb87db77d2f0b02b33920f6555ca549ad51b1fb5c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c1c716f751dd0e139554e4bef4fc33694f4afa5482605f651326882af8d99c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c1c716f751dd0e139
554e4bef4fc33694f4afa5482605f651326882af8d99c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kqdf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:26Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:26 crc kubenswrapper[4985]: I0127 08:54:26.552814 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cscdv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c870945-eecc-4954-a91b-d02cef8f98e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:17Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k24x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k24x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cscdv\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:26Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:26 crc kubenswrapper[4985]: I0127 08:54:26.557333 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:26 crc kubenswrapper[4985]: I0127 08:54:26.557384 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:26 crc kubenswrapper[4985]: I0127 08:54:26.557400 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:26 crc kubenswrapper[4985]: I0127 08:54:26.557421 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:26 crc kubenswrapper[4985]: I0127 08:54:26.557435 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:26Z","lastTransitionTime":"2026-01-27T08:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:26 crc kubenswrapper[4985]: I0127 08:54:26.568331 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46f88416-c54a-4ab5-a4d3-07f71bae9c33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d59e5d6d903bfaaedef392fdd94df2da1e4d965ddeb6b25d64b225725b39dd0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c207bc40519
4647411332c492e34b0c95d3321cf626a57a4a750fbcea34f54ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8ce06c7792357df5655ff0be13f13c8b06bb1a2eb27c4bbbff11b94a62f29e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7554f25de6e4649eafd52356c5c255d0a2dc73b0dd6c9a9bcea3ee383bef17db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:26Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:26 crc kubenswrapper[4985]: I0127 08:54:26.588792 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b46c56d522e6b6c9c18b7bd9fc104d6657644d0b41fdcad15ac9fb802ca5fd0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9506e391bd8c293440c55dc71a239b25d6d6b3431ad22c5ca699baa4d09c325b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:26Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:26 crc kubenswrapper[4985]: I0127 08:54:26.604132 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:26Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:26 crc kubenswrapper[4985]: I0127 08:54:26.618290 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed8d2575835b3ab3d3197bbfbc027e2de0e2aefe10c971838bf4b72f804c4582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T08:54:26Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:26 crc kubenswrapper[4985]: I0127 08:54:26.633671 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c066dd2f-48d4-4f4f-935d-0e772678e610\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1beb432862ae348b405b1a51f1a187fd469f9df0605fa9e8a39b1c9a34ae8e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vtb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d6574971ded4a24364f08396c4f0523ddfd74f76d0bb386072ffb86c812d7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vtb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lp9n5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:26Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:26 crc kubenswrapper[4985]: I0127 08:54:26.647292 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlccz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ba17902-809a-4efc-9a8c-6f9b611c2af9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00cd6621bcfc77aeb9ecc4d6894b33cd3ef1f975b3c593e0eff812ef51978f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:26Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:26 crc kubenswrapper[4985]: I0127 08:54:26.663853 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:26 crc kubenswrapper[4985]: I0127 08:54:26.664002 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:26 crc kubenswrapper[4985]: I0127 08:54:26.664018 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:26 crc kubenswrapper[4985]: I0127 08:54:26.664402 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:26 crc kubenswrapper[4985]: I0127 08:54:26.664590 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:26Z","lastTransitionTime":"2026-01-27T08:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:26 crc kubenswrapper[4985]: I0127 08:54:26.673374 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7e08bcf-6937-4593-9736-170df821dd88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1dab42ffc04840ad2205b50c568ece39850188ccd9f97d8186c7f0b86e06805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7f7c69cc8bc134cfbf3552f10eb7abd8b5df115fef6ce86a2658259a9635bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0450f271c01ac528419de0165dd9b88b8d5913062a7cd5affce7050842f91a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b33bda4f575f8f14a3439f00a39eda2ee61f7b38500f5372c17d014df1098535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c502ce0c2374fb1b79cd8dc7e7c9eba4a67f998fee012827524d775eccbe4de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d423f866a41d6c8a926fbabe4f0a69519e10a9448502c6363b35cb550ba904d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d423f866a41d6c8a926fbabe4f0a69519e10a9448502c6363b35cb550ba904d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://917f905b67f494d8d44f5a00e53ae72cb9c2b307f7e7da17af7fd20db4ed8704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://917f905b67f494d8d44f5a00e53ae72cb9c2b307f7e7da17af7fd20db4ed8704\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8692ba1b6e917a57164d549dba0fc26ad49b14b9bc8a39772daddfcd5f70a008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8692ba1b6e917a57164d549dba0fc26ad49b14b9bc8a39772daddfcd5f70a008\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-27T08:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:26Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:26 crc kubenswrapper[4985]: I0127 08:54:26.689821 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5z8px" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7997cb84-9997-4cf4-8794-2eb145a5c324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://520da40d4496e0fd47e4e091e9eba349e1bd3bf2f97898f6aac53a1b1b8925f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkwj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5z8px\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:26Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:26 crc kubenswrapper[4985]: I0127 08:54:26.710500 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rfnvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea1cd1b8-a185-461d-9302-aa03be205225\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67af065c12addbef44849906b964718074de1f0d7a0b87a028bf989ec28f82ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3459a9dccba77fc189ecac1d0186520dfbd4ce6ea40d9e5b2fdcf2ddabd7875d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3459a9dccba77fc189ecac1d0186520dfbd4ce6ea40d9e5b2fdcf2ddabd7875d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d313b9f0d95603fdc152e96acf5d788d7792df3413c3b8bc25aad20bb702c5d\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d313b9f0d95603fdc152e96acf5d788d7792df3413c3b8bc25aad20bb702c5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4035a5eff34b30f1fab897a8d8b6026437a1ab1f521f062241a79d070b94bdc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4035a5eff34b30f1fab897a8d8b6026437a1ab1f521f062241a79d070b94bdc1\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6da775953e78b30b1f606e26608dbedae8e08fb12eba87f65356bc439079f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6da775953e78b30b1f606e26608dbedae8e08fb12eba87f65356bc439079f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b32797fe9e64f0e4f0332fa40261fb0f
eb75b6dc783ff78b0aea5ccb0d4a9ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b32797fe9e64f0e4f0332fa40261fb0feb75b6dc783ff78b0aea5ccb0d4a9ee5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcaa036060edc039a0c9621176b80823cb9913176b1a1e6b06aa033a8e479b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcaa036060edc039a0c9621176b80823cb9913176b1a1e6b06aa033a8e479b96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-01-27T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rfnvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:26Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:26 crc kubenswrapper[4985]: I0127 08:54:26.724022 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s74hq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad423d26-ea00-4a86-8eed-bba6433ce382\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://031493bfd9eba63a4627b6a0ec45bc556e8a6cae213a84f7b158e2bede2da5a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hg5xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42a6371f6e7f2b3811af6ec717f15eff6a85c
8c39caf011e1173a4fcaf20f29a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hg5xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-s74hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:26Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:26 crc kubenswrapper[4985]: I0127 08:54:26.736538 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a4e626ecb5f5c1f869a0271e66541b76d4ab763322e2d626c4d86af64bccad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:26Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:26 crc kubenswrapper[4985]: I0127 08:54:26.747338 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:26Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:26 crc kubenswrapper[4985]: I0127 08:54:26.768295 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:26 crc kubenswrapper[4985]: I0127 08:54:26.768352 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:26 crc kubenswrapper[4985]: I0127 08:54:26.768363 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:26 crc kubenswrapper[4985]: I0127 08:54:26.768381 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:26 crc kubenswrapper[4985]: I0127 08:54:26.768395 4985 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:26Z","lastTransitionTime":"2026-01-27T08:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:26 crc kubenswrapper[4985]: I0127 08:54:26.770135 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cqdrf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ddda14a-730e-4c1f-afea-07c95221ba04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c411afefb82d30542592e4959d2495036c0408459ba2a16aff75223ad0d29287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-258cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:
54:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cqdrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:26Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:26 crc kubenswrapper[4985]: I0127 08:54:26.839500 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kqdf4_c6239c91-d93d-4db8-ac4b-d44ddbc7c100/ovnkube-controller/1.log" Jan 27 08:54:26 crc kubenswrapper[4985]: I0127 08:54:26.843146 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" event={"ID":"c6239c91-d93d-4db8-ac4b-d44ddbc7c100","Type":"ContainerStarted","Data":"0db1aeddec065e7b000a8494a25dcd6d16757437a78af2c71dd554c15c74a826"} Jan 27 08:54:26 crc kubenswrapper[4985]: I0127 08:54:26.843684 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" Jan 27 08:54:26 crc kubenswrapper[4985]: I0127 08:54:26.858365 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a4e626ecb5f5c1f869a0271e66541b76d4ab763322e2d626c4d86af64bccad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:26Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:26 crc kubenswrapper[4985]: I0127 08:54:26.871212 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:26 crc kubenswrapper[4985]: I0127 08:54:26.871253 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:26 crc kubenswrapper[4985]: I0127 08:54:26.871261 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:26 crc kubenswrapper[4985]: I0127 08:54:26.871282 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:26 crc kubenswrapper[4985]: I0127 08:54:26.871293 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:26Z","lastTransitionTime":"2026-01-27T08:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:26 crc kubenswrapper[4985]: I0127 08:54:26.872316 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:26Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:26 crc kubenswrapper[4985]: I0127 08:54:26.886616 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cqdrf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ddda14a-730e-4c1f-afea-07c95221ba04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c411afefb82d30542592e4959d2495036c0408459ba2a16aff75223ad0d29287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-258cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cqdrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:26Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:26 crc kubenswrapper[4985]: I0127 08:54:26.900306 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9155a9e-2bdf-4f6f-baa7-0d7c6baac37d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f776820a00f7ca6b6867a188ee633debccb2bc22aaff8c659e452c2667933e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba179c4dde0de76f4a8dbfb8ceb98d2e0dbead5e955bd71171914fd913894412\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f89
45c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dd58dd369dc5654aa8ab2d34c77a69607b51e97ae36947b88caf6a2de7c5fbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd67389a909eaadfb147d17b98a2147bb96aea606ba994568389cb5be9ee6f93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b491f71d942363b5d2bab13a5219b0eb9b7f7617c0978bfe012946aa241a13c1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T08:53:
55Z\\\",\\\"message\\\":\\\" 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 08:53:55.866201 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 08:53:55.866247 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 08:53:55.866252 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 08:53:55.866257 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0127 08:53:55.866955 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-231041040/tls.crt::/tmp/serving-cert-231041040/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769504019\\\\\\\\\\\\\\\" (2026-01-27 08:53:39 +0000 UTC to 2026-02-26 08:53:40 +0000 UTC (now=2026-01-27 08:53:55.866923826 +0000 UTC))\\\\\\\"\\\\nI0127 08:53:55.867136 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769504030\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769504030\\\\\\\\\\\\\\\" (2026-01-27 07:53:50 +0000 UTC to 2027-01-27 07:53:50 +0000 UTC (now=2026-01-27 08:53:55.867111136 +0000 UTC))\\\\\\\"\\\\nI0127 08:53:55.867161 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 08:53:55.867183 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0127 
08:53:55.867215 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-231041040/tls.crt::/tmp/serving-cert-231041040/tls.key\\\\\\\"\\\\nI0127 08:53:55.867343 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0127 08:53:55.868835 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b09da7bae5a0f15ab2c5463373193603d1412170828b99490007cee7b067df63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b
335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:26Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:26 crc kubenswrapper[4985]: I0127 08:54:26.912763 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:26Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:26 crc kubenswrapper[4985]: I0127 08:54:26.935376 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://740b7fd6437a51492236c929096639016d34d684d2742404a428355b3d9ee51e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e53f9d50e5099f9c55a0bf8a63c31bf7687c7b9df334251189af1c4c02df5dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b176f2daca0d26fc9e3d30a8547b5526c90221ce2932c01c876988e8606b4dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dbb8096184a88c573060b14b7b5cac9a5abdc5af97ad7efdc50ba216e77522c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f164f612d1435a7f2a6677cb810bc836a44faa77952909e4290e31806ae631ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bfc598056438a398f1ea20e047af950aa776a7e43bdb694728c1c138dff6440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0db1aeddec065e7b000a8494a25dcd6d16757437a78af2c71dd554c15c74a826\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1516da9c2886f2df2975067fa6c3d9406f6ee8cf3adb921b844d3c0a6b3cc40\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T08:54:13Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 08:54:13.671481 6426 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 08:54:13.671555 6426 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 08:54:13.671576 6426 handler.go:190] Sending *v1.Namespace event handler 1 for 
removal\\\\nI0127 08:54:13.671583 6426 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 08:54:13.671598 6426 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 08:54:13.671610 6426 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 08:54:13.671623 6426 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 08:54:13.671622 6426 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 08:54:13.671695 6426 factory.go:656] Stopping watch factory\\\\nI0127 08:54:13.671733 6426 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 08:54:13.671748 6426 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 08:54:13.671762 6426 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 08:54:13.671770 6426 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 08:54:13.671779 6426 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 08:54:13.671787 6426 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a1f1e90a882fa697c344cb87db77d2f0b02b33920f6555ca549ad51b1fb5c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c1c716f751dd0e139554e4bef4fc33694f4afa5482605f651326882af8d99c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c1c716f751dd0e139554e4bef4fc33694f4afa5482605f651326882af8d99c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kqdf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:26Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:26 crc kubenswrapper[4985]: I0127 08:54:26.952451 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cscdv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c870945-eecc-4954-a91b-d02cef8f98e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k24x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k24x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cscdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:26Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:26 crc 
kubenswrapper[4985]: I0127 08:54:26.965692 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c066dd2f-48d4-4f4f-935d-0e772678e610\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1beb432862ae348b405b1a51f1a187fd469f9df0605fa9e8a39b1c9a34ae8e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vtb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d6574971ded4a24364f08396c4f0523ddfd74f76d0bb386072ffb86c812d7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vtb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lp9n5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:26Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:26 crc kubenswrapper[4985]: I0127 08:54:26.973461 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:26 crc kubenswrapper[4985]: I0127 08:54:26.973528 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Jan 27 08:54:26 crc kubenswrapper[4985]: I0127 08:54:26.973541 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:26 crc kubenswrapper[4985]: I0127 08:54:26.973562 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:26 crc kubenswrapper[4985]: I0127 08:54:26.973576 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:26Z","lastTransitionTime":"2026-01-27T08:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:26 crc kubenswrapper[4985]: I0127 08:54:26.976829 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlccz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ba17902-809a-4efc-9a8c-6f9b611c2af9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00cd6621bcfc77aeb9ecc4d6894b33cd3ef1f975b3c593e0eff812ef51978f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:26Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:26 crc kubenswrapper[4985]: I0127 08:54:26.990346 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46f88416-c54a-4ab5-a4d3-07f71bae9c33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d59e5d6d903bfaaedef392fdd94df2da1e4d965ddeb6b25d64b225725b39dd0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c207bc405194647411332c492e34b0c95d3321cf626a57a4a750fbcea34f54ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8ce06c7792357df5655ff0be13f13c8b06bb1a2eb27c4bbbff11b94a62f29e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7554f25de6e4649eafd52356c5c255d0a2dc73b0dd6c9a9bcea3ee383bef17db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:26Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:27 crc kubenswrapper[4985]: I0127 08:54:27.015751 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b46c56d522e6b6c9c18b7bd9fc104d6657644d0b41fdcad15ac9fb802ca5fd0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9506e391bd8c293440c55dc71a239b25d6d6b3431ad22c5ca699baa4d09c325b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:27Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:27 crc kubenswrapper[4985]: I0127 08:54:27.029779 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:27Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:27 crc kubenswrapper[4985]: I0127 08:54:27.043066 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed8d2575835b3ab3d3197bbfbc027e2de0e2aefe10c971838bf4b72f804c4582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T08:54:27Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:27 crc kubenswrapper[4985]: I0127 08:54:27.057492 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s74hq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad423d26-ea00-4a86-8eed-bba6433ce382\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://031493bfd9eba63a4627b6a0ec45bc556e8a6cae213a84f7b158e2bede2da5a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hg5xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42a6371f6e7f2b3811af6ec717f15eff6a85c8c39caf011e1173a4fcaf20f29a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hg5xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-s74hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:27Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:27 crc kubenswrapper[4985]: I0127 08:54:27.076776 4985 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:27 crc kubenswrapper[4985]: I0127 08:54:27.076841 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:27 crc kubenswrapper[4985]: I0127 08:54:27.076856 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:27 crc kubenswrapper[4985]: I0127 08:54:27.076878 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:27 crc kubenswrapper[4985]: I0127 08:54:27.076894 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:27Z","lastTransitionTime":"2026-01-27T08:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:27 crc kubenswrapper[4985]: I0127 08:54:27.081904 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7e08bcf-6937-4593-9736-170df821dd88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1dab42ffc04840ad2205b50c568ece39850188ccd9f97d8186c7f0b86e06805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7f7c69cc8bc134cfbf3552f10eb7abd8b5df115fef6ce86a2658259a9635bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0450f271c01ac528419de0165dd9b88b8d5913062a7cd5affce7050842f91a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b33bda4f575f8f14a3439f00a39eda2ee61f7b38500f5372c17d014df1098535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c502ce0c2374fb1b79cd8dc7e7c9eba4a67f998fee012827524d775eccbe4de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d423f866a41d6c8a926fbabe4f0a69519e10a9448502c6363b35cb550ba904d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d423f866a41d6c8a926fbabe4f0a69519e10a9448502c6363b35cb550ba904d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://917f905b67f494d8d44f5a00e53ae72cb9c2b307f7e7da17af7fd20db4ed8704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://917f905b67f494d8d44f5a00e53ae72cb9c2b307f7e7da17af7fd20db4ed8704\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8692ba1b6e917a57164d549dba0fc26ad49b14b9bc8a39772daddfcd5f70a008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8692ba1b6e917a57164d549dba0fc26ad49b14b9bc8a39772daddfcd5f70a008\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-27T08:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:27Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:27 crc kubenswrapper[4985]: I0127 08:54:27.095065 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5z8px" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7997cb84-9997-4cf4-8794-2eb145a5c324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://520da40d4496e0fd47e4e091e9eba349e1bd3bf2f97898f6aac53a1b1b8925f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkwj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5z8px\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:27Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:27 crc kubenswrapper[4985]: I0127 08:54:27.125832 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rfnvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea1cd1b8-a185-461d-9302-aa03be205225\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67af065c12addbef44849906b964718074de1f0d7a0b87a028bf989ec28f82ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3459a9dccba77fc189ecac1d0186520dfbd4ce6ea40d9e5b2fdcf2ddabd7875d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3459a9dccba77fc189ecac1d0186520dfbd4ce6ea40d9e5b2fdcf2ddabd7875d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d313b9f0d95603fdc152e96acf5d788d7792df3413c3b8bc25aad20bb702c5d\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d313b9f0d95603fdc152e96acf5d788d7792df3413c3b8bc25aad20bb702c5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4035a5eff34b30f1fab897a8d8b6026437a1ab1f521f062241a79d070b94bdc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4035a5eff34b30f1fab897a8d8b6026437a1ab1f521f062241a79d070b94bdc1\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6da775953e78b30b1f606e26608dbedae8e08fb12eba87f65356bc439079f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6da775953e78b30b1f606e26608dbedae8e08fb12eba87f65356bc439079f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b32797fe9e64f0e4f0332fa40261fb0f
eb75b6dc783ff78b0aea5ccb0d4a9ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b32797fe9e64f0e4f0332fa40261fb0feb75b6dc783ff78b0aea5ccb0d4a9ee5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcaa036060edc039a0c9621176b80823cb9913176b1a1e6b06aa033a8e479b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcaa036060edc039a0c9621176b80823cb9913176b1a1e6b06aa033a8e479b96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-01-27T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rfnvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:27Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:27 crc kubenswrapper[4985]: I0127 08:54:27.180307 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:27 crc kubenswrapper[4985]: I0127 08:54:27.180378 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:27 crc kubenswrapper[4985]: I0127 08:54:27.180392 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:27 crc kubenswrapper[4985]: I0127 08:54:27.180417 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:27 crc kubenswrapper[4985]: I0127 08:54:27.180432 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:27Z","lastTransitionTime":"2026-01-27T08:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:27 crc kubenswrapper[4985]: I0127 08:54:27.283393 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:27 crc kubenswrapper[4985]: I0127 08:54:27.283467 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:27 crc kubenswrapper[4985]: I0127 08:54:27.283481 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:27 crc kubenswrapper[4985]: I0127 08:54:27.283503 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:27 crc kubenswrapper[4985]: I0127 08:54:27.283551 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:27Z","lastTransitionTime":"2026-01-27T08:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:27 crc kubenswrapper[4985]: I0127 08:54:27.386437 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:27 crc kubenswrapper[4985]: I0127 08:54:27.386492 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:27 crc kubenswrapper[4985]: I0127 08:54:27.386503 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:27 crc kubenswrapper[4985]: I0127 08:54:27.386538 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:27 crc kubenswrapper[4985]: I0127 08:54:27.386551 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:27Z","lastTransitionTime":"2026-01-27T08:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:27 crc kubenswrapper[4985]: I0127 08:54:27.426571 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 18:49:42.908324755 +0000 UTC Jan 27 08:54:27 crc kubenswrapper[4985]: I0127 08:54:27.489184 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:27 crc kubenswrapper[4985]: I0127 08:54:27.489230 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:27 crc kubenswrapper[4985]: I0127 08:54:27.489244 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:27 crc kubenswrapper[4985]: I0127 08:54:27.489264 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:27 crc kubenswrapper[4985]: I0127 08:54:27.489278 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:27Z","lastTransitionTime":"2026-01-27T08:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:27 crc kubenswrapper[4985]: I0127 08:54:27.592498 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:27 crc kubenswrapper[4985]: I0127 08:54:27.592577 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:27 crc kubenswrapper[4985]: I0127 08:54:27.592590 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:27 crc kubenswrapper[4985]: I0127 08:54:27.592609 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:27 crc kubenswrapper[4985]: I0127 08:54:27.592623 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:27Z","lastTransitionTime":"2026-01-27T08:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:27 crc kubenswrapper[4985]: I0127 08:54:27.695545 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:27 crc kubenswrapper[4985]: I0127 08:54:27.695622 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:27 crc kubenswrapper[4985]: I0127 08:54:27.695640 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:27 crc kubenswrapper[4985]: I0127 08:54:27.695663 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:27 crc kubenswrapper[4985]: I0127 08:54:27.695678 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:27Z","lastTransitionTime":"2026-01-27T08:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:27 crc kubenswrapper[4985]: I0127 08:54:27.799206 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:27 crc kubenswrapper[4985]: I0127 08:54:27.799274 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:27 crc kubenswrapper[4985]: I0127 08:54:27.799292 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:27 crc kubenswrapper[4985]: I0127 08:54:27.799320 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:27 crc kubenswrapper[4985]: I0127 08:54:27.799340 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:27Z","lastTransitionTime":"2026-01-27T08:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:27 crc kubenswrapper[4985]: I0127 08:54:27.850175 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kqdf4_c6239c91-d93d-4db8-ac4b-d44ddbc7c100/ovnkube-controller/2.log" Jan 27 08:54:27 crc kubenswrapper[4985]: I0127 08:54:27.851210 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kqdf4_c6239c91-d93d-4db8-ac4b-d44ddbc7c100/ovnkube-controller/1.log" Jan 27 08:54:27 crc kubenswrapper[4985]: I0127 08:54:27.855911 4985 generic.go:334] "Generic (PLEG): container finished" podID="c6239c91-d93d-4db8-ac4b-d44ddbc7c100" containerID="0db1aeddec065e7b000a8494a25dcd6d16757437a78af2c71dd554c15c74a826" exitCode=1 Jan 27 08:54:27 crc kubenswrapper[4985]: I0127 08:54:27.855973 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" event={"ID":"c6239c91-d93d-4db8-ac4b-d44ddbc7c100","Type":"ContainerDied","Data":"0db1aeddec065e7b000a8494a25dcd6d16757437a78af2c71dd554c15c74a826"} Jan 27 08:54:27 crc kubenswrapper[4985]: I0127 08:54:27.856023 4985 scope.go:117] "RemoveContainer" containerID="b1516da9c2886f2df2975067fa6c3d9406f6ee8cf3adb921b844d3c0a6b3cc40" Jan 27 08:54:27 crc kubenswrapper[4985]: I0127 08:54:27.857647 4985 scope.go:117] "RemoveContainer" containerID="0db1aeddec065e7b000a8494a25dcd6d16757437a78af2c71dd554c15c74a826" Jan 27 08:54:27 crc kubenswrapper[4985]: E0127 08:54:27.858039 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-kqdf4_openshift-ovn-kubernetes(c6239c91-d93d-4db8-ac4b-d44ddbc7c100)\"" pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" podUID="c6239c91-d93d-4db8-ac4b-d44ddbc7c100" Jan 27 08:54:27 crc kubenswrapper[4985]: I0127 08:54:27.890328 4985 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7e08bcf-6937-4593-9736-170df821dd88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1dab42ffc04840ad2205b50c568ece39850188ccd9f97d8186c7f0b86e06805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7f7c69cc8bc134cfbf3552f10eb7abd8b5df115fef6ce86a2658
259a9635bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0450f271c01ac528419de0165dd9b88b8d5913062a7cd5affce7050842f91a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b33bda4f575f8f14a3439f00a39eda2ee61f7b38500f5372c17d014df1098535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c502ce0c2374fb1b79cd8dc7e7c9eba4a67f998fee012827524d775eccbe4de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d423f866a41d6c8a926fbabe4f0a69519e10a9448502c6363b35cb550ba904d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d423f866a41d6c8a926fbabe4f0a6
9519e10a9448502c6363b35cb550ba904d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://917f905b67f494d8d44f5a00e53ae72cb9c2b307f7e7da17af7fd20db4ed8704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://917f905b67f494d8d44f5a00e53ae72cb9c2b307f7e7da17af7fd20db4ed8704\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8692ba1b6e917a57164d549dba0fc26ad49b14b9bc8a39772daddfcd5f70a008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8692ba1b6e917a57164d549dba0fc26ad49b14b9bc8a39772daddfcd5f70a008\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}
,{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:27Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:27 crc kubenswrapper[4985]: I0127 08:54:27.902119 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:27 crc kubenswrapper[4985]: I0127 08:54:27.902160 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:27 crc kubenswrapper[4985]: I0127 08:54:27.902171 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:27 crc kubenswrapper[4985]: I0127 08:54:27.902187 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:27 crc kubenswrapper[4985]: I0127 08:54:27.902199 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:27Z","lastTransitionTime":"2026-01-27T08:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:27 crc kubenswrapper[4985]: I0127 08:54:27.907028 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5z8px" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7997cb84-9997-4cf4-8794-2eb145a5c324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://520da40d4496e0fd47e4e091e9eba349e1bd3bf2f97898f6aac53a1b1b8925f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkwj4\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5z8px\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:27Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:27 crc kubenswrapper[4985]: I0127 08:54:27.926839 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rfnvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea1cd1b8-a185-461d-9302-aa03be205225\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67af065c12addbef44849906b964718074de1f0d7a0b87a028bf989ec28f82ee\\\",\\\"image\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3459a9dccba77fc189ecac1d0186520dfbd4ce6ea40d9e5b2fdcf2ddabd7875d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3459a9dccba77fc189ecac1d0186520dfbd4ce6ea40d9e5b2fdcf2ddabd7875d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\
",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d313b9f0d95603fdc152e96acf5d788d7792df3413c3b8bc25aad20bb702c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d313b9f0d95603fdc152e96acf5d788d7792df3413c3b8bc25aad20bb702c5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4035a5eff34b30f1fab897a8d8b6026437a1ab1f521f062241a79d070b94bdc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-
cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4035a5eff34b30f1fab897a8d8b6026437a1ab1f521f062241a79d070b94bdc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6da775953e78b30b1f606e26608dbedae8e08fb12eba87f65356bc439079f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6da775953e78b30b1f606e26608dbedae8e08fb12eba87f65356bc439079f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":
\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b32797fe9e64f0e4f0332fa40261fb0feb75b6dc783ff78b0aea5ccb0d4a9ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b32797fe9e64f0e4f0332fa40261fb0feb75b6dc783ff78b0aea5ccb0d4a9ee5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcaa036060edc039a0c9621176b80823cb9913176b1a1e6b06aa033a8e479b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"term
inated\\\":{\\\"containerID\\\":\\\"cri-o://bcaa036060edc039a0c9621176b80823cb9913176b1a1e6b06aa033a8e479b96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rfnvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:27Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:27 crc kubenswrapper[4985]: I0127 08:54:27.945785 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s74hq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad423d26-ea00-4a86-8eed-bba6433ce382\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://031493bfd9eba63a4627b6a0ec45bc556e8a6cae213a84f7b158e2bede2da5a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hg5xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42a6371f6e7f2b3811af6ec717f15eff6a85c
8c39caf011e1173a4fcaf20f29a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hg5xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-s74hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:27Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:27 crc kubenswrapper[4985]: I0127 08:54:27.963277 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a4e626ecb5f5c1f869a0271e66541b76d4ab763322e2d626c4d86af64bccad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:27Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:27 crc kubenswrapper[4985]: I0127 08:54:27.979679 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:27Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:27 crc kubenswrapper[4985]: I0127 08:54:27.998477 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cqdrf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ddda14a-730e-4c1f-afea-07c95221ba04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c411afefb82d30542592e4959d2495036c0408459ba2a16aff75223ad0d29287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-258cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cqdrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:27Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:28 crc kubenswrapper[4985]: I0127 08:54:28.004612 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:28 crc 
kubenswrapper[4985]: I0127 08:54:28.004656 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:28 crc kubenswrapper[4985]: I0127 08:54:28.004667 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:28 crc kubenswrapper[4985]: I0127 08:54:28.004685 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:28 crc kubenswrapper[4985]: I0127 08:54:28.004700 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:28Z","lastTransitionTime":"2026-01-27T08:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:28 crc kubenswrapper[4985]: I0127 08:54:28.016624 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9155a9e-2bdf-4f6f-baa7-0d7c6baac37d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f776820a00f7ca6b6867a188ee633debccb2bc22aaff8c659e452c2667933e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba179c4dde0de76f4a8dbfb8ceb98d2e0dbead5e955bd71171914fd913894412\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dd58dd369dc5654aa8ab2d34c77a69607b51e97ae36947b88caf6a2de7c5fbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd67389a909eaadfb147d17b98a2147bb96aea606ba994568389cb5be9ee6f93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b491f71d942363b5d2bab13a5219b0eb9b7f7617c0978bfe012946aa241a13c1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T08:53:55Z\\\"
,\\\"message\\\":\\\" 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 08:53:55.866201 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 08:53:55.866247 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 08:53:55.866252 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 08:53:55.866257 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0127 08:53:55.866955 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-231041040/tls.crt::/tmp/serving-cert-231041040/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769504019\\\\\\\\\\\\\\\" (2026-01-27 08:53:39 +0000 UTC to 2026-02-26 08:53:40 +0000 UTC (now=2026-01-27 08:53:55.866923826 +0000 UTC))\\\\\\\"\\\\nI0127 08:53:55.867136 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769504030\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769504030\\\\\\\\\\\\\\\" (2026-01-27 07:53:50 +0000 UTC to 2027-01-27 07:53:50 +0000 UTC (now=2026-01-27 08:53:55.867111136 +0000 UTC))\\\\\\\"\\\\nI0127 08:53:55.867161 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 08:53:55.867183 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0127 
08:53:55.867215 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-231041040/tls.crt::/tmp/serving-cert-231041040/tls.key\\\\\\\"\\\\nI0127 08:53:55.867343 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0127 08:53:55.868835 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b09da7bae5a0f15ab2c5463373193603d1412170828b99490007cee7b067df63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b
335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:28Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:28 crc kubenswrapper[4985]: I0127 08:54:28.032910 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:28Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:28 crc kubenswrapper[4985]: I0127 08:54:28.057621 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://740b7fd6437a51492236c929096639016d34d684d2742404a428355b3d9ee51e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e53f9d50e5099f9c55a0bf8a63c31bf7687c7b9df334251189af1c4c02df5dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b176f2daca0d26fc9e3d30a8547b5526c90221ce2932c01c876988e8606b4dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dbb8096184a88c573060b14b7b5cac9a5abdc5af97ad7efdc50ba216e77522c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f164f612d1435a7f2a6677cb810bc836a44faa77952909e4290e31806ae631ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bfc598056438a398f1ea20e047af950aa776a7e43bdb694728c1c138dff6440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0db1aeddec065e7b000a8494a25dcd6d16757437a78af2c71dd554c15c74a826\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1516da9c2886f2df2975067fa6c3d9406f6ee8cf3adb921b844d3c0a6b3cc40\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T08:54:13Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 08:54:13.671481 6426 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 08:54:13.671555 6426 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 08:54:13.671576 6426 handler.go:190] Sending *v1.Namespace event handler 1 for 
removal\\\\nI0127 08:54:13.671583 6426 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 08:54:13.671598 6426 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 08:54:13.671610 6426 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 08:54:13.671623 6426 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 08:54:13.671622 6426 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 08:54:13.671695 6426 factory.go:656] Stopping watch factory\\\\nI0127 08:54:13.671733 6426 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 08:54:13.671748 6426 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 08:54:13.671762 6426 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 08:54:13.671770 6426 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 08:54:13.671779 6426 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 08:54:13.671787 6426 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0db1aeddec065e7b000a8494a25dcd6d16757437a78af2c71dd554c15c74a826\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T08:54:27Z\\\",\\\"message\\\":\\\"oved *v1.Node event handler 7\\\\nI0127 08:54:27.563980 6617 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0127 08:54:27.564011 6617 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 08:54:27.564037 6617 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0127 08:54:27.564061 6617 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 08:54:27.564190 6617 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0127 08:54:27.564242 6617 handler.go:190] Sending *v1.EgressFirewall event handler 9 for 
removal\\\\nI0127 08:54:27.564306 6617 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0127 08:54:27.564311 6617 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 08:54:27.564322 6617 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 08:54:27.564351 6617 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 08:54:27.564376 6617 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 08:54:27.564383 6617 factory.go:656] Stopping watch factory\\\\nI0127 08:54:27.564403 6617 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 08:54:27.564427 6617 ovnkube.go:599] Stopped ovnkube\\\\nI0127 08:54:27.564493 6617 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 08:54:27.564598 6617 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\
"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a1f1e90a882fa697c344cb87db77d2f0b02b33920f6555ca549ad51b1fb5c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-
api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c1c716f751dd0e139554e4bef4fc33694f4afa5482605f651326882af8d99c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c1c716f751dd0e139554e4bef4fc33694f4afa5482605f651326882af8d99c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kqdf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:28Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:28 crc kubenswrapper[4985]: I0127 08:54:28.071362 4985 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-cscdv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c870945-eecc-4954-a91b-d02cef8f98e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k24x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k24x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cscdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:28Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:28 crc 
kubenswrapper[4985]: I0127 08:54:28.084881 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46f88416-c54a-4ab5-a4d3-07f71bae9c33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d59e5d6d903bfaaedef392fdd94df2da1e4d965ddeb6b25d64b225725b39dd0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c207bc405194647411332c492e34b0c95d3321cf626a57a4a750fbcea34f54ec\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8ce06c7792357df5655ff0be13f13c8b06bb1a2eb27c4bbbff11b94a62f29e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7554f25de6e4649eafd52356c5c255d0a2dc73b0dd6c9a9bcea3ee383bef17db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:28Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:28 crc kubenswrapper[4985]: I0127 08:54:28.103585 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b46c56d522e6b6c9c18b7bd9fc104d6657644d0b41fdcad15ac9fb802ca5fd0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9506e391bd8c293440c55dc71a239b25d6d6b3431ad22c5ca699baa4d09c325b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:28Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:28 crc kubenswrapper[4985]: I0127 08:54:28.107309 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:28 crc kubenswrapper[4985]: I0127 08:54:28.107368 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:28 crc kubenswrapper[4985]: I0127 08:54:28.107382 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:28 crc kubenswrapper[4985]: I0127 08:54:28.107415 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:28 crc kubenswrapper[4985]: I0127 08:54:28.107436 4985 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:28Z","lastTransitionTime":"2026-01-27T08:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:28 crc kubenswrapper[4985]: I0127 08:54:28.119047 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:28Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:28 crc kubenswrapper[4985]: I0127 08:54:28.135391 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed8d2575835b3ab3d3197bbfbc027e2de0e2aefe10c971838bf4b72f804c4582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T08:54:28Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:28 crc kubenswrapper[4985]: I0127 08:54:28.153991 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c066dd2f-48d4-4f4f-935d-0e772678e610\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1beb432862ae348b405b1a51f1a187fd469f9df0605fa9e8a39b1c9a34ae8e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vtb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d6574971ded4a24364f08396c4f0523ddfd74f76d0bb386072ffb86c812d7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vtb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lp9n5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:28Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:28 crc kubenswrapper[4985]: I0127 08:54:28.165046 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlccz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ba17902-809a-4efc-9a8c-6f9b611c2af9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00cd6621bcfc77aeb9ecc4d6894b33cd3ef1f975b3c593e0eff812ef51978f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:28Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:28 crc kubenswrapper[4985]: I0127 08:54:28.210574 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:28 crc kubenswrapper[4985]: I0127 08:54:28.210618 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:28 crc kubenswrapper[4985]: I0127 08:54:28.210631 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:28 crc kubenswrapper[4985]: I0127 08:54:28.210648 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:28 crc kubenswrapper[4985]: I0127 08:54:28.210662 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:28Z","lastTransitionTime":"2026-01-27T08:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:28 crc kubenswrapper[4985]: I0127 08:54:28.314164 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:28 crc kubenswrapper[4985]: I0127 08:54:28.314227 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:28 crc kubenswrapper[4985]: I0127 08:54:28.314243 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:28 crc kubenswrapper[4985]: I0127 08:54:28.314270 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:28 crc kubenswrapper[4985]: I0127 08:54:28.314286 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:28Z","lastTransitionTime":"2026-01-27T08:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:28 crc kubenswrapper[4985]: I0127 08:54:28.353660 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 08:54:28 crc kubenswrapper[4985]: I0127 08:54:28.353876 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 08:54:28 crc kubenswrapper[4985]: I0127 08:54:28.353908 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 08:54:28 crc kubenswrapper[4985]: E0127 08:54:28.353999 4985 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 08:54:28 crc kubenswrapper[4985]: E0127 08:54:28.354020 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 08:55:00.353948429 +0000 UTC m=+84.645043310 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 08:54:28 crc kubenswrapper[4985]: E0127 08:54:28.354090 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 08:55:00.354069443 +0000 UTC m=+84.645164534 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 08:54:28 crc kubenswrapper[4985]: E0127 08:54:28.354165 4985 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 08:54:28 crc kubenswrapper[4985]: E0127 08:54:28.354300 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 08:55:00.35427296 +0000 UTC m=+84.645367811 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 08:54:28 crc kubenswrapper[4985]: I0127 08:54:28.417946 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:28 crc kubenswrapper[4985]: I0127 08:54:28.417996 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:28 crc kubenswrapper[4985]: I0127 08:54:28.418014 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:28 crc kubenswrapper[4985]: I0127 08:54:28.418039 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:28 crc kubenswrapper[4985]: I0127 08:54:28.418058 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:28Z","lastTransitionTime":"2026-01-27T08:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:28 crc kubenswrapper[4985]: I0127 08:54:28.426972 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 04:28:33.021538285 +0000 UTC Jan 27 08:54:28 crc kubenswrapper[4985]: I0127 08:54:28.451457 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-cscdv" Jan 27 08:54:28 crc kubenswrapper[4985]: I0127 08:54:28.451480 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 08:54:28 crc kubenswrapper[4985]: I0127 08:54:28.451559 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 08:54:28 crc kubenswrapper[4985]: I0127 08:54:28.451484 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 08:54:28 crc kubenswrapper[4985]: E0127 08:54:28.451621 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cscdv" podUID="5c870945-eecc-4954-a91b-d02cef8f98e2" Jan 27 08:54:28 crc kubenswrapper[4985]: E0127 08:54:28.451800 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 08:54:28 crc kubenswrapper[4985]: E0127 08:54:28.451922 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 08:54:28 crc kubenswrapper[4985]: E0127 08:54:28.452068 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 08:54:28 crc kubenswrapper[4985]: I0127 08:54:28.454926 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 08:54:28 crc kubenswrapper[4985]: I0127 08:54:28.454966 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 08:54:28 crc kubenswrapper[4985]: E0127 08:54:28.455106 4985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 08:54:28 crc kubenswrapper[4985]: E0127 08:54:28.455127 4985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 08:54:28 crc kubenswrapper[4985]: E0127 
08:54:28.455108 4985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 08:54:28 crc kubenswrapper[4985]: E0127 08:54:28.455173 4985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 08:54:28 crc kubenswrapper[4985]: E0127 08:54:28.455194 4985 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 08:54:28 crc kubenswrapper[4985]: E0127 08:54:28.455138 4985 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 08:54:28 crc kubenswrapper[4985]: E0127 08:54:28.455262 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 08:55:00.455237593 +0000 UTC m=+84.746332464 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 08:54:28 crc kubenswrapper[4985]: E0127 08:54:28.455289 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 08:55:00.455277464 +0000 UTC m=+84.746372345 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 08:54:28 crc kubenswrapper[4985]: I0127 08:54:28.521337 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:28 crc kubenswrapper[4985]: I0127 08:54:28.521415 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:28 crc kubenswrapper[4985]: I0127 08:54:28.521438 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:28 crc kubenswrapper[4985]: I0127 08:54:28.521472 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:28 crc kubenswrapper[4985]: I0127 08:54:28.521496 4985 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:28Z","lastTransitionTime":"2026-01-27T08:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:28 crc kubenswrapper[4985]: I0127 08:54:28.631969 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:28 crc kubenswrapper[4985]: I0127 08:54:28.632065 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:28 crc kubenswrapper[4985]: I0127 08:54:28.632090 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:28 crc kubenswrapper[4985]: I0127 08:54:28.632123 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:28 crc kubenswrapper[4985]: I0127 08:54:28.632148 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:28Z","lastTransitionTime":"2026-01-27T08:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:28 crc kubenswrapper[4985]: I0127 08:54:28.735560 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:28 crc kubenswrapper[4985]: I0127 08:54:28.735649 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:28 crc kubenswrapper[4985]: I0127 08:54:28.735691 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:28 crc kubenswrapper[4985]: I0127 08:54:28.735726 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:28 crc kubenswrapper[4985]: I0127 08:54:28.735776 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:28Z","lastTransitionTime":"2026-01-27T08:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:28 crc kubenswrapper[4985]: I0127 08:54:28.838583 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:28 crc kubenswrapper[4985]: I0127 08:54:28.838654 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:28 crc kubenswrapper[4985]: I0127 08:54:28.838670 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:28 crc kubenswrapper[4985]: I0127 08:54:28.838717 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:28 crc kubenswrapper[4985]: I0127 08:54:28.838733 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:28Z","lastTransitionTime":"2026-01-27T08:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:28 crc kubenswrapper[4985]: I0127 08:54:28.862432 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kqdf4_c6239c91-d93d-4db8-ac4b-d44ddbc7c100/ovnkube-controller/2.log" Jan 27 08:54:28 crc kubenswrapper[4985]: I0127 08:54:28.866666 4985 scope.go:117] "RemoveContainer" containerID="0db1aeddec065e7b000a8494a25dcd6d16757437a78af2c71dd554c15c74a826" Jan 27 08:54:28 crc kubenswrapper[4985]: E0127 08:54:28.866856 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-kqdf4_openshift-ovn-kubernetes(c6239c91-d93d-4db8-ac4b-d44ddbc7c100)\"" pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" podUID="c6239c91-d93d-4db8-ac4b-d44ddbc7c100" Jan 27 08:54:28 crc kubenswrapper[4985]: I0127 08:54:28.883783 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a4e626ecb5f5c1f869a0271e66541b76d4ab763322e2d626c4d86af64bccad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:28Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:28 crc kubenswrapper[4985]: I0127 08:54:28.897546 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:28Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:28 crc kubenswrapper[4985]: I0127 08:54:28.915764 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cqdrf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ddda14a-730e-4c1f-afea-07c95221ba04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c411afefb82d30542592e4959d2495036c0408459ba2a16aff75223ad0d29287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-258cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cqdrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:28Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:28 crc kubenswrapper[4985]: I0127 08:54:28.931258 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:28Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:28 crc kubenswrapper[4985]: I0127 08:54:28.942371 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:28 crc kubenswrapper[4985]: I0127 08:54:28.942435 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:28 crc kubenswrapper[4985]: I0127 08:54:28.942446 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:28 crc kubenswrapper[4985]: I0127 08:54:28.942467 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:28 crc kubenswrapper[4985]: I0127 08:54:28.942483 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:28Z","lastTransitionTime":"2026-01-27T08:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:28 crc kubenswrapper[4985]: I0127 08:54:28.954344 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://740b7fd6437a51492236c929096639016d34d684d2742404a428355b3d9ee51e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e53f9d50e5099f9c55a0bf8a63c31bf7687c7b9df334251189af1c4c02df5dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b176f2daca0d26fc9e3d30a8547b5526c90221ce2932c01c876988e8606b4dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dbb8096184a88c573060b14b7b5cac9a5abdc5af97ad7efdc50ba216e77522c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f164f612d1435a7f2a6677cb810bc836a44faa77952909e4290e31806ae631ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bfc598056438a398f1ea20e047af950aa776a7e43bdb694728c1c138dff6440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0db1aeddec065e7b000a8494a25dcd6d16757437a78af2c71dd554c15c74a826\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0db1aeddec065e7b000a8494a25dcd6d16757437a78af2c71dd554c15c74a826\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T08:54:27Z\\\",\\\"message\\\":\\\"oved *v1.Node event handler 7\\\\nI0127 08:54:27.563980 6617 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0127 08:54:27.564011 6617 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 08:54:27.564037 6617 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0127 08:54:27.564061 6617 handler.go:208] Removed *v1.Pod event 
handler 3\\\\nI0127 08:54:27.564190 6617 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0127 08:54:27.564242 6617 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 08:54:27.564306 6617 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0127 08:54:27.564311 6617 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 08:54:27.564322 6617 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 08:54:27.564351 6617 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 08:54:27.564376 6617 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 08:54:27.564383 6617 factory.go:656] Stopping watch factory\\\\nI0127 08:54:27.564403 6617 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 08:54:27.564427 6617 ovnkube.go:599] Stopped ovnkube\\\\nI0127 08:54:27.564493 6617 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 08:54:27.564598 6617 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-kqdf4_openshift-ovn-kubernetes(c6239c91-d93d-4db8-ac4b-d44ddbc7c100)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a1f1e90a882fa697c344cb87db77d2f0b02b33920f6555ca549ad51b1fb5c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c1c716f751dd0e139554e4bef4fc33694f4afa5482605f651326882af8d99c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c1c716f751dd0e139
554e4bef4fc33694f4afa5482605f651326882af8d99c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kqdf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:28Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:28 crc kubenswrapper[4985]: I0127 08:54:28.970946 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cscdv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c870945-eecc-4954-a91b-d02cef8f98e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:17Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k24x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k24x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cscdv\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:28Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:28 crc kubenswrapper[4985]: I0127 08:54:28.991662 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9155a9e-2bdf-4f6f-baa7-0d7c6baac37d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f776820a00f7ca6b6867a188ee633debccb2bc22aaff8c659e452c2667933e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba179c4dde0de76f4a8dbfb8ceb98d2e0dbead5e955bd71171914fd913894412\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dd58dd369dc5654aa8ab2d34c77a69607b51e97ae36947b88caf6a2de7c5fbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd67389a909eaadfb147d17b98a2147bb96aea606ba994568389cb5be9ee6f93\\\",\\\"image\\\":\\
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b491f71d942363b5d2bab13a5219b0eb9b7f7617c0978bfe012946aa241a13c1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"message\\\":\\\" 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 08:53:55.866201 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 08:53:55.866247 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 08:53:55.866252 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 08:53:55.866257 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0127 08:53:55.866955 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-231041040/tls.crt::/tmp/serving-cert-231041040/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769504019\\\\\\\\\\\\\\\" (2026-01-27 08:53:39 +0000 UTC to 2026-02-26 08:53:40 +0000 UTC (now=2026-01-27 08:53:55.866923826 +0000 UTC))\\\\\\\"\\\\nI0127 08:53:55.867136 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" 
certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769504030\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769504030\\\\\\\\\\\\\\\" (2026-01-27 07:53:50 +0000 UTC to 2027-01-27 07:53:50 +0000 UTC (now=2026-01-27 08:53:55.867111136 +0000 UTC))\\\\\\\"\\\\nI0127 08:53:55.867161 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 08:53:55.867183 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0127 08:53:55.867215 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-231041040/tls.crt::/tmp/serving-cert-231041040/tls.key\\\\\\\"\\\\nI0127 08:53:55.867343 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0127 08:53:55.868835 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b09da7bae5a0f15ab2c5463373193603d1412170828b99490007cee7b067df63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:28Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:29 crc kubenswrapper[4985]: I0127 08:54:29.009364 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b46c56d522e6b6c9c18b7bd9fc104d6657644d0b41fdcad15ac9fb802ca5fd0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9506e391bd8c293440c55dc71a239b25d6d6b3431ad22c5ca699baa4d09c325b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:29Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:29 crc kubenswrapper[4985]: I0127 08:54:29.031668 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:29Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:29 crc kubenswrapper[4985]: I0127 08:54:29.045600 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:29 crc kubenswrapper[4985]: I0127 08:54:29.045715 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:29 crc 
kubenswrapper[4985]: I0127 08:54:29.045734 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:29 crc kubenswrapper[4985]: I0127 08:54:29.045759 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:29 crc kubenswrapper[4985]: I0127 08:54:29.045775 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:29Z","lastTransitionTime":"2026-01-27T08:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:29 crc kubenswrapper[4985]: I0127 08:54:29.050134 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed8d2575835b3ab3d3197bbfbc027e2de0e2aefe10c971838bf4b72f804c4582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T08:54:29Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:29 crc kubenswrapper[4985]: I0127 08:54:29.066411 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c066dd2f-48d4-4f4f-935d-0e772678e610\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1beb432862ae348b405b1a51f1a187fd469f9df0605fa9e8a39b1c9a34ae8e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vtb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d6574971ded4a24364f08396c4f0523ddfd74f76d0bb386072ffb86c812d7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vtb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lp9n5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:29Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:29 crc kubenswrapper[4985]: I0127 08:54:29.082211 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlccz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ba17902-809a-4efc-9a8c-6f9b611c2af9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00cd6621bcfc77aeb9ecc4d6894b33cd3ef1f975b3c593e0eff812ef51978f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:29Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:29 crc kubenswrapper[4985]: I0127 08:54:29.096449 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46f88416-c54a-4ab5-a4d3-07f71bae9c33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d59e5d6d903bfaaedef392fdd94df2da1e4d965ddeb6b25d64b225725b39dd0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c207bc405194647411332c492e34b0c95d3321cf626a57a4a750fbcea34f54ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8ce06c7792357df5655ff0be13f13c8b06bb1a2eb27c4bbbff11b94a62f29e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7554f25de6e4649eafd52356c5c255d0a2dc73b0dd6c9a9bcea3ee383bef17db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:29Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:29 crc kubenswrapper[4985]: I0127 08:54:29.113123 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5z8px" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7997cb84-9997-4cf4-8794-2eb145a5c324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://520da40d4496e0fd47e4e091e9eba349e1bd3bf2f97898f6aac53a1b1b8925f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkwj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5z8px\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:29Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:29 crc kubenswrapper[4985]: I0127 08:54:29.132429 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 08:54:29 crc kubenswrapper[4985]: I0127 08:54:29.137683 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rfnvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea1cd1b8-a185-461d-9302-aa03be205225\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67af065c12addbef44849906b964718074de1f0d7a0b87a028bf989ec28f82ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-
dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3459a9dccba77fc189ecac1d0186520dfbd4ce6ea40d9e5b2fdcf2ddabd7875d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3459a9dccba77fc189ecac1d0186520dfbd4ce6ea40d9e5b2fdcf2ddabd7875d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acces
s-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d313b9f0d95603fdc152e96acf5d788d7792df3413c3b8bc25aad20bb702c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d313b9f0d95603fdc152e96acf5d788d7792df3413c3b8bc25aad20bb702c5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4035a5eff34b30f1fab897a8d8b6026437a1ab1f521f062241a79d070b94bdc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4035a5eff34b30f1fab897a8d8b6026437a1ab1f521f062241a79d070b94bdc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6da775953e78b30b1f606e26608dbedae8e08fb12eba87f65356bc439079f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6da775953e78b30b1f606e26608dbedae8e08fb12eba87f65356bc439079f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.i
o/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b32797fe9e64f0e4f0332fa40261fb0feb75b6dc783ff78b0aea5ccb0d4a9ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b32797fe9e64f0e4f0332fa40261fb0feb75b6dc783ff78b0aea5ccb0d4a9ee5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcaa036060edc039a0c9621176b80823cb9913176b1a1e6b06aa033a8e479b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\
\\"cri-o://bcaa036060edc039a0c9621176b80823cb9913176b1a1e6b06aa033a8e479b96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rfnvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:29Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:29 crc kubenswrapper[4985]: I0127 08:54:29.147432 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 27 08:54:29 crc kubenswrapper[4985]: I0127 08:54:29.148115 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:29 crc kubenswrapper[4985]: I0127 08:54:29.148159 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:29 crc kubenswrapper[4985]: I0127 08:54:29.148173 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:29 crc kubenswrapper[4985]: I0127 08:54:29.148192 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Jan 27 08:54:29 crc kubenswrapper[4985]: I0127 08:54:29.148203 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:29Z","lastTransitionTime":"2026-01-27T08:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:29 crc kubenswrapper[4985]: I0127 08:54:29.153968 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s74hq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad423d26-ea00-4a86-8eed-bba6433ce382\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://031493bfd9eba63a4627b6a0ec45bc556e8a6cae213a84f7b158e2bede2da5a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hg5xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42a6371f6e7f2b3811af6ec717f15eff6a85c8c39caf011e1173a4fcaf20f29a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hg5xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-s74hq\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:29Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:29 crc kubenswrapper[4985]: I0127 08:54:29.182790 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7e08bcf-6937-4593-9736-170df821dd88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1dab42ffc04840ad2205b50c568ece39850188ccd9f97d8186c7f0b86e06805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kub
ernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7f7c69cc8bc134cfbf3552f10eb7abd8b5df115fef6ce86a2658259a9635bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0450f271c01ac528419de0165dd9b88b8d5913062a7cd5affce7050842f91a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://
b33bda4f575f8f14a3439f00a39eda2ee61f7b38500f5372c17d014df1098535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c502ce0c2374fb1b79cd8dc7e7c9eba4a67f998fee012827524d775eccbe4de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d423f866a41d6c8a926fbabe4f0a69519e10a9448502c6363b35cb550ba904d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c68774
41ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d423f866a41d6c8a926fbabe4f0a69519e10a9448502c6363b35cb550ba904d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://917f905b67f494d8d44f5a00e53ae72cb9c2b307f7e7da17af7fd20db4ed8704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://917f905b67f494d8d44f5a00e53ae72cb9c2b307f7e7da17af7fd20db4ed8704\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8692ba1b6e917a57164d549dba0fc26ad49b14b9bc8a39772daddfcd5f70a008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8692ba1b6e917a57164d549dba0fc26ad49b14b9bc8a39772daddfcd5f70a008\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:29Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:29 crc kubenswrapper[4985]: I0127 08:54:29.199768 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:29Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:29 crc kubenswrapper[4985]: I0127 08:54:29.217917 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed8d2575835b3ab3d3197bbfbc027e2de0e2aefe10c971838bf4b72f804c4582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T08:54:29Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:29 crc kubenswrapper[4985]: I0127 08:54:29.231308 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c066dd2f-48d4-4f4f-935d-0e772678e610\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1beb432862ae348b405b1a51f1a187fd469f9df0605fa9e8a39b1c9a34ae8e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vtb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d6574971ded4a24364f08396c4f0523ddfd74f76d0bb386072ffb86c812d7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vtb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lp9n5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:29Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:29 crc kubenswrapper[4985]: I0127 08:54:29.246804 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlccz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ba17902-809a-4efc-9a8c-6f9b611c2af9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00cd6621bcfc77aeb9ecc4d6894b33cd3ef1f975b3c593e0eff812ef51978f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:29Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:29 crc kubenswrapper[4985]: I0127 08:54:29.251459 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:29 crc kubenswrapper[4985]: I0127 08:54:29.251556 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:29 crc kubenswrapper[4985]: I0127 08:54:29.251578 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:29 crc kubenswrapper[4985]: I0127 08:54:29.251625 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:29 crc kubenswrapper[4985]: I0127 08:54:29.251642 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:29Z","lastTransitionTime":"2026-01-27T08:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:29 crc kubenswrapper[4985]: I0127 08:54:29.262004 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46f88416-c54a-4ab5-a4d3-07f71bae9c33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d59e5d6d903bfaaedef392fdd94df2da1e4d965ddeb6b25d64b225725b39dd0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c207bc40519
4647411332c492e34b0c95d3321cf626a57a4a750fbcea34f54ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8ce06c7792357df5655ff0be13f13c8b06bb1a2eb27c4bbbff11b94a62f29e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7554f25de6e4649eafd52356c5c255d0a2dc73b0dd6c9a9bcea3ee383bef17db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:29Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:29 crc kubenswrapper[4985]: I0127 08:54:29.280977 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b46c56d522e6b6c9c18b7bd9fc104d6657644d0b41fdcad15ac9fb802ca5fd0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9506e391bd8c293440c55dc71a239b25d6d6b3431ad22c5ca699baa4d09c325b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:29Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:29 crc kubenswrapper[4985]: I0127 08:54:29.295013 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5z8px" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7997cb84-9997-4cf4-8794-2eb145a5c324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://520da40d4496e0fd47e4e091e9eba349e1bd3bf2f97898f6aac53a1b1b8925f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkwj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5z8px\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:29Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:29 crc kubenswrapper[4985]: I0127 08:54:29.312665 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rfnvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea1cd1b8-a185-461d-9302-aa03be205225\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67af065c12addbef44849906b964718074de1f0d7a0b87a028bf989ec28f82ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3459a9dccba77fc189ecac1d0186520dfbd4ce6ea40d9e5b2fdcf2ddabd7875d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3459a9dccba77fc189ecac1d0186520dfbd4ce6ea40d9e5b2fdcf2ddabd7875d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d313b9f0d95603fdc152e96acf5d788d7792df3413c3b8bc25aad20bb702c5d\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d313b9f0d95603fdc152e96acf5d788d7792df3413c3b8bc25aad20bb702c5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4035a5eff34b30f1fab897a8d8b6026437a1ab1f521f062241a79d070b94bdc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4035a5eff34b30f1fab897a8d8b6026437a1ab1f521f062241a79d070b94bdc1\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6da775953e78b30b1f606e26608dbedae8e08fb12eba87f65356bc439079f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6da775953e78b30b1f606e26608dbedae8e08fb12eba87f65356bc439079f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b32797fe9e64f0e4f0332fa40261fb0f
eb75b6dc783ff78b0aea5ccb0d4a9ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b32797fe9e64f0e4f0332fa40261fb0feb75b6dc783ff78b0aea5ccb0d4a9ee5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcaa036060edc039a0c9621176b80823cb9913176b1a1e6b06aa033a8e479b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcaa036060edc039a0c9621176b80823cb9913176b1a1e6b06aa033a8e479b96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-01-27T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rfnvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:29Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:29 crc kubenswrapper[4985]: I0127 08:54:29.324937 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s74hq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad423d26-ea00-4a86-8eed-bba6433ce382\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://031493bfd9eba63a4627b6a0ec45bc556e8a6cae213a84f7b158e2bede2da5a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hg5xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42a6371f6e7f2b3811af6ec717f15eff6a85c
8c39caf011e1173a4fcaf20f29a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hg5xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-s74hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:29Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:29 crc kubenswrapper[4985]: I0127 08:54:29.345041 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7e08bcf-6937-4593-9736-170df821dd88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1dab42ffc04840ad2205b50c568ece39850188ccd9f97d8186c7f0b86e06805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7f7c69cc8bc134cfbf3552f10eb7abd8b5df115fef6ce86a2658259a9635bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0450f271c01ac528419de0165dd9b88b8d5913062a7cd5affce7050842f91a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b33bda4f575f8f14a3439f00a39eda2ee61f7b38500f5372c17d014df1098535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c502ce0c2374fb1b79cd8dc7e7c9eba4a67f998fee012827524d775eccbe4de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d423f866a41d6c8a926fbabe4f0a69519e10a9448502c6363b35cb550ba904d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d423f866a41d6c8a926fbabe4f0a69519e10a9448502c6363b35cb550ba904d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T08:53:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://917f905b67f494d8d44f5a00e53ae72cb9c2b307f7e7da17af7fd20db4ed8704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://917f905b67f494d8d44f5a00e53ae72cb9c2b307f7e7da17af7fd20db4ed8704\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8692ba1b6e917a57164d549dba0fc26ad49b14b9bc8a39772daddfcd5f70a008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8692ba1b6e917a57164d549dba0fc26ad49b14b9bc8a39772daddfcd5f70a008\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:29Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:29 crc kubenswrapper[4985]: I0127 08:54:29.354215 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:29 crc kubenswrapper[4985]: I0127 08:54:29.354258 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:29 crc kubenswrapper[4985]: I0127 08:54:29.354269 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:29 crc kubenswrapper[4985]: I0127 08:54:29.354287 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:29 crc kubenswrapper[4985]: I0127 08:54:29.354299 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:29Z","lastTransitionTime":"2026-01-27T08:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:29 crc kubenswrapper[4985]: I0127 08:54:29.361982 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"797a9d27-06cc-44aa-811f-881bdf2d1e7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a5ce9f15fd3505c744967be012b7eed6d909724e9b71ba07d7e9d68eb40cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88c257e53b27d6dda5999a3053f9c6
2b54331bf034225c118dddfed685549827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87667099919fbe74e54396b3e8b538627769f2401d318326eb9a1d6a88bda640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3ad4e68bbb4b8338f534dc026ca9f1fe9fb161b29fee8945f1789a66965dea2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3ad4e68bbb4b8338f534dc026ca9f1fe9fb161b29fee8945f1789a66965dea2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:29Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:29 crc kubenswrapper[4985]: I0127 08:54:29.378221 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers 
with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:29Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:29 crc kubenswrapper[4985]: I0127 08:54:29.395158 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cqdrf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ddda14a-730e-4c1f-afea-07c95221ba04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c411afefb82d30542592e4959d2495036c0408459ba2a16aff75223ad0d29287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-258cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cqdrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:29Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:29 crc kubenswrapper[4985]: I0127 08:54:29.413755 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a4e626ecb5f5c1f869a0271e66541b76d4ab763322e2d626c4d86af64bccad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:29Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:29 crc kubenswrapper[4985]: I0127 08:54:29.427250 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 02:41:40.211376716 +0000 UTC Jan 27 08:54:29 crc kubenswrapper[4985]: I0127 08:54:29.441225 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://740b7fd6437a51492236c929096639016d34d684d2742404a428355b3d9ee51e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e53f9d50e5099f9c55a0bf8a63c31bf7687c7b9df334251189af1c4c02df5dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b176f2daca0d26fc9e3d30a8547b5526c90221ce2932c01c876988e8606b4dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dbb8096184a88c573060b14b7b5cac9a5abdc5af97ad7efdc50ba216e77522c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f164f612d1435a7f2a6677cb810bc836a44faa77952909e4290e31806ae631ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bfc598056438a398f1ea20e047af950aa776a7e43bdb694728c1c138dff6440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0db1aeddec065e7b000a8494a25dcd6d16757437a78af2c71dd554c15c74a826\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0db1aeddec065e7b000a8494a25dcd6d16757437a78af2c71dd554c15c74a826\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T08:54:27Z\\\",\\\"message\\\":\\\"oved *v1.Node event handler 7\\\\nI0127 08:54:27.563980 6617 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0127 08:54:27.564011 6617 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 08:54:27.564037 6617 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0127 08:54:27.564061 6617 handler.go:208] Removed *v1.Pod event 
handler 3\\\\nI0127 08:54:27.564190 6617 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0127 08:54:27.564242 6617 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 08:54:27.564306 6617 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0127 08:54:27.564311 6617 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 08:54:27.564322 6617 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 08:54:27.564351 6617 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 08:54:27.564376 6617 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 08:54:27.564383 6617 factory.go:656] Stopping watch factory\\\\nI0127 08:54:27.564403 6617 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 08:54:27.564427 6617 ovnkube.go:599] Stopped ovnkube\\\\nI0127 08:54:27.564493 6617 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 08:54:27.564598 6617 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-kqdf4_openshift-ovn-kubernetes(c6239c91-d93d-4db8-ac4b-d44ddbc7c100)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a1f1e90a882fa697c344cb87db77d2f0b02b33920f6555ca549ad51b1fb5c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c1c716f751dd0e139554e4bef4fc33694f4afa5482605f651326882af8d99c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c1c716f751dd0e139
554e4bef4fc33694f4afa5482605f651326882af8d99c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kqdf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:29Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:29 crc kubenswrapper[4985]: I0127 08:54:29.456995 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cscdv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c870945-eecc-4954-a91b-d02cef8f98e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:17Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k24x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k24x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cscdv\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:29Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:29 crc kubenswrapper[4985]: I0127 08:54:29.457570 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:29 crc kubenswrapper[4985]: I0127 08:54:29.457623 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:29 crc kubenswrapper[4985]: I0127 08:54:29.457644 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:29 crc kubenswrapper[4985]: I0127 08:54:29.457672 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:29 crc kubenswrapper[4985]: I0127 08:54:29.457693 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:29Z","lastTransitionTime":"2026-01-27T08:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:29 crc kubenswrapper[4985]: I0127 08:54:29.478946 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9155a9e-2bdf-4f6f-baa7-0d7c6baac37d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f776820a00f7ca6b6867a188ee633debccb2bc22aaff8c659e452c2667933e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba179c4dde0de76f4a8dbfb8ceb98d2e0dbead5e955bd71171914fd913894412\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dd58dd369dc5654aa8ab2d34c77a69607b51e97ae36947b88caf6a2de7c5fbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd67389a909eaadfb147d17b98a2147bb96aea606ba994568389cb5be9ee6f93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b491f71d942363b5d2bab13a5219b0eb9b7f7617c0978bfe012946aa241a13c1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"message\\\":\\\" 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 08:53:55.866201 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 08:53:55.866247 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 08:53:55.866252 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 08:53:55.866257 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0127 08:53:55.866955 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-231041040/tls.crt::/tmp/serving-cert-231041040/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769504019\\\\\\\\\\\\\\\" (2026-01-27 08:53:39 +0000 UTC to 2026-02-26 08:53:40 +0000 UTC (now=2026-01-27 08:53:55.866923826 +0000 UTC))\\\\\\\"\\\\nI0127 08:53:55.867136 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769504030\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769504030\\\\\\\\\\\\\\\" (2026-01-27 07:53:50 +0000 UTC to 2027-01-27 07:53:50 +0000 UTC (now=2026-01-27 
08:53:55.867111136 +0000 UTC))\\\\\\\"\\\\nI0127 08:53:55.867161 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 08:53:55.867183 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0127 08:53:55.867215 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-231041040/tls.crt::/tmp/serving-cert-231041040/tls.key\\\\\\\"\\\\nI0127 08:53:55.867343 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0127 08:53:55.868835 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b09da7bae5a0f15ab2c5463373193603d1412170828b99490007cee7b067df63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739\\\",\\\"image\
\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:29Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:29 crc kubenswrapper[4985]: I0127 08:54:29.499901 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:29Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:29 crc kubenswrapper[4985]: I0127 08:54:29.560651 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:29 crc kubenswrapper[4985]: I0127 08:54:29.560718 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:29 crc kubenswrapper[4985]: I0127 08:54:29.560737 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:29 crc kubenswrapper[4985]: I0127 08:54:29.560768 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:29 crc kubenswrapper[4985]: I0127 08:54:29.560786 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:29Z","lastTransitionTime":"2026-01-27T08:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:29 crc kubenswrapper[4985]: I0127 08:54:29.663864 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:29 crc kubenswrapper[4985]: I0127 08:54:29.663926 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:29 crc kubenswrapper[4985]: I0127 08:54:29.663943 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:29 crc kubenswrapper[4985]: I0127 08:54:29.663969 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:29 crc kubenswrapper[4985]: I0127 08:54:29.663987 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:29Z","lastTransitionTime":"2026-01-27T08:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:29 crc kubenswrapper[4985]: I0127 08:54:29.766707 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:29 crc kubenswrapper[4985]: I0127 08:54:29.766778 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:29 crc kubenswrapper[4985]: I0127 08:54:29.766800 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:29 crc kubenswrapper[4985]: I0127 08:54:29.766830 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:29 crc kubenswrapper[4985]: I0127 08:54:29.766854 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:29Z","lastTransitionTime":"2026-01-27T08:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:29 crc kubenswrapper[4985]: I0127 08:54:29.869641 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:29 crc kubenswrapper[4985]: I0127 08:54:29.869700 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:29 crc kubenswrapper[4985]: I0127 08:54:29.869713 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:29 crc kubenswrapper[4985]: I0127 08:54:29.869731 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:29 crc kubenswrapper[4985]: I0127 08:54:29.869744 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:29Z","lastTransitionTime":"2026-01-27T08:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:29 crc kubenswrapper[4985]: I0127 08:54:29.972806 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:29 crc kubenswrapper[4985]: I0127 08:54:29.972883 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:29 crc kubenswrapper[4985]: I0127 08:54:29.972905 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:29 crc kubenswrapper[4985]: I0127 08:54:29.972926 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:29 crc kubenswrapper[4985]: I0127 08:54:29.972964 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:29Z","lastTransitionTime":"2026-01-27T08:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:30 crc kubenswrapper[4985]: I0127 08:54:30.076955 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:30 crc kubenswrapper[4985]: I0127 08:54:30.077036 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:30 crc kubenswrapper[4985]: I0127 08:54:30.077056 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:30 crc kubenswrapper[4985]: I0127 08:54:30.077085 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:30 crc kubenswrapper[4985]: I0127 08:54:30.077108 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:30Z","lastTransitionTime":"2026-01-27T08:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:30 crc kubenswrapper[4985]: I0127 08:54:30.180851 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:30 crc kubenswrapper[4985]: I0127 08:54:30.180926 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:30 crc kubenswrapper[4985]: I0127 08:54:30.180946 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:30 crc kubenswrapper[4985]: I0127 08:54:30.180974 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:30 crc kubenswrapper[4985]: I0127 08:54:30.180998 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:30Z","lastTransitionTime":"2026-01-27T08:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:30 crc kubenswrapper[4985]: I0127 08:54:30.283918 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:30 crc kubenswrapper[4985]: I0127 08:54:30.284178 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:30 crc kubenswrapper[4985]: I0127 08:54:30.284213 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:30 crc kubenswrapper[4985]: I0127 08:54:30.284247 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:30 crc kubenswrapper[4985]: I0127 08:54:30.284276 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:30Z","lastTransitionTime":"2026-01-27T08:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:30 crc kubenswrapper[4985]: I0127 08:54:30.387726 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:30 crc kubenswrapper[4985]: I0127 08:54:30.387808 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:30 crc kubenswrapper[4985]: I0127 08:54:30.387826 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:30 crc kubenswrapper[4985]: I0127 08:54:30.387854 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:30 crc kubenswrapper[4985]: I0127 08:54:30.387874 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:30Z","lastTransitionTime":"2026-01-27T08:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:30 crc kubenswrapper[4985]: I0127 08:54:30.428127 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 22:20:30.964607556 +0000 UTC Jan 27 08:54:30 crc kubenswrapper[4985]: I0127 08:54:30.451913 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cscdv" Jan 27 08:54:30 crc kubenswrapper[4985]: I0127 08:54:30.451943 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 08:54:30 crc kubenswrapper[4985]: I0127 08:54:30.451947 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 08:54:30 crc kubenswrapper[4985]: I0127 08:54:30.451982 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 08:54:30 crc kubenswrapper[4985]: E0127 08:54:30.452202 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cscdv" podUID="5c870945-eecc-4954-a91b-d02cef8f98e2" Jan 27 08:54:30 crc kubenswrapper[4985]: E0127 08:54:30.452338 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 08:54:30 crc kubenswrapper[4985]: E0127 08:54:30.452463 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 08:54:30 crc kubenswrapper[4985]: E0127 08:54:30.452600 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 08:54:30 crc kubenswrapper[4985]: I0127 08:54:30.491327 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:30 crc kubenswrapper[4985]: I0127 08:54:30.491405 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:30 crc kubenswrapper[4985]: I0127 08:54:30.491432 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:30 crc kubenswrapper[4985]: I0127 08:54:30.491464 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:30 crc kubenswrapper[4985]: I0127 08:54:30.491490 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:30Z","lastTransitionTime":"2026-01-27T08:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:30 crc kubenswrapper[4985]: I0127 08:54:30.594013 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:30 crc kubenswrapper[4985]: I0127 08:54:30.594088 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:30 crc kubenswrapper[4985]: I0127 08:54:30.594108 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:30 crc kubenswrapper[4985]: I0127 08:54:30.594134 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:30 crc kubenswrapper[4985]: I0127 08:54:30.594155 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:30Z","lastTransitionTime":"2026-01-27T08:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:30 crc kubenswrapper[4985]: I0127 08:54:30.697100 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:30 crc kubenswrapper[4985]: I0127 08:54:30.697158 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:30 crc kubenswrapper[4985]: I0127 08:54:30.697176 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:30 crc kubenswrapper[4985]: I0127 08:54:30.697200 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:30 crc kubenswrapper[4985]: I0127 08:54:30.697219 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:30Z","lastTransitionTime":"2026-01-27T08:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:30 crc kubenswrapper[4985]: I0127 08:54:30.800491 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:30 crc kubenswrapper[4985]: I0127 08:54:30.800580 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:30 crc kubenswrapper[4985]: I0127 08:54:30.800617 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:30 crc kubenswrapper[4985]: I0127 08:54:30.800637 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:30 crc kubenswrapper[4985]: I0127 08:54:30.800647 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:30Z","lastTransitionTime":"2026-01-27T08:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:30 crc kubenswrapper[4985]: I0127 08:54:30.903620 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:30 crc kubenswrapper[4985]: I0127 08:54:30.903688 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:30 crc kubenswrapper[4985]: I0127 08:54:30.903713 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:30 crc kubenswrapper[4985]: I0127 08:54:30.903752 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:30 crc kubenswrapper[4985]: I0127 08:54:30.903777 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:30Z","lastTransitionTime":"2026-01-27T08:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:31 crc kubenswrapper[4985]: I0127 08:54:31.007143 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:31 crc kubenswrapper[4985]: I0127 08:54:31.007203 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:31 crc kubenswrapper[4985]: I0127 08:54:31.007221 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:31 crc kubenswrapper[4985]: I0127 08:54:31.007238 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:31 crc kubenswrapper[4985]: I0127 08:54:31.007249 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:31Z","lastTransitionTime":"2026-01-27T08:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:31 crc kubenswrapper[4985]: I0127 08:54:31.110787 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:31 crc kubenswrapper[4985]: I0127 08:54:31.110851 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:31 crc kubenswrapper[4985]: I0127 08:54:31.110864 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:31 crc kubenswrapper[4985]: I0127 08:54:31.110886 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:31 crc kubenswrapper[4985]: I0127 08:54:31.110899 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:31Z","lastTransitionTime":"2026-01-27T08:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:31 crc kubenswrapper[4985]: I0127 08:54:31.213832 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:31 crc kubenswrapper[4985]: I0127 08:54:31.213906 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:31 crc kubenswrapper[4985]: I0127 08:54:31.213929 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:31 crc kubenswrapper[4985]: I0127 08:54:31.213954 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:31 crc kubenswrapper[4985]: I0127 08:54:31.213974 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:31Z","lastTransitionTime":"2026-01-27T08:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:31 crc kubenswrapper[4985]: I0127 08:54:31.316351 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:31 crc kubenswrapper[4985]: I0127 08:54:31.316410 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:31 crc kubenswrapper[4985]: I0127 08:54:31.316431 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:31 crc kubenswrapper[4985]: I0127 08:54:31.316461 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:31 crc kubenswrapper[4985]: I0127 08:54:31.316484 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:31Z","lastTransitionTime":"2026-01-27T08:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:31 crc kubenswrapper[4985]: I0127 08:54:31.420332 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:31 crc kubenswrapper[4985]: I0127 08:54:31.420394 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:31 crc kubenswrapper[4985]: I0127 08:54:31.420408 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:31 crc kubenswrapper[4985]: I0127 08:54:31.420431 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:31 crc kubenswrapper[4985]: I0127 08:54:31.420453 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:31Z","lastTransitionTime":"2026-01-27T08:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:31 crc kubenswrapper[4985]: I0127 08:54:31.429162 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 15:21:18.084060273 +0000 UTC Jan 27 08:54:31 crc kubenswrapper[4985]: I0127 08:54:31.524313 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:31 crc kubenswrapper[4985]: I0127 08:54:31.524414 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:31 crc kubenswrapper[4985]: I0127 08:54:31.524458 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:31 crc kubenswrapper[4985]: I0127 08:54:31.524484 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:31 crc kubenswrapper[4985]: I0127 08:54:31.524505 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:31Z","lastTransitionTime":"2026-01-27T08:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:31 crc kubenswrapper[4985]: I0127 08:54:31.628197 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:31 crc kubenswrapper[4985]: I0127 08:54:31.628295 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:31 crc kubenswrapper[4985]: I0127 08:54:31.628316 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:31 crc kubenswrapper[4985]: I0127 08:54:31.628868 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:31 crc kubenswrapper[4985]: I0127 08:54:31.629103 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:31Z","lastTransitionTime":"2026-01-27T08:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:31 crc kubenswrapper[4985]: I0127 08:54:31.733756 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:31 crc kubenswrapper[4985]: I0127 08:54:31.733831 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:31 crc kubenswrapper[4985]: I0127 08:54:31.733845 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:31 crc kubenswrapper[4985]: I0127 08:54:31.733865 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:31 crc kubenswrapper[4985]: I0127 08:54:31.733878 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:31Z","lastTransitionTime":"2026-01-27T08:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:31 crc kubenswrapper[4985]: I0127 08:54:31.836419 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:31 crc kubenswrapper[4985]: I0127 08:54:31.836478 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:31 crc kubenswrapper[4985]: I0127 08:54:31.836492 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:31 crc kubenswrapper[4985]: I0127 08:54:31.836543 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:31 crc kubenswrapper[4985]: I0127 08:54:31.836563 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:31Z","lastTransitionTime":"2026-01-27T08:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:31 crc kubenswrapper[4985]: I0127 08:54:31.940093 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:31 crc kubenswrapper[4985]: I0127 08:54:31.940160 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:31 crc kubenswrapper[4985]: I0127 08:54:31.940182 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:31 crc kubenswrapper[4985]: I0127 08:54:31.940238 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:31 crc kubenswrapper[4985]: I0127 08:54:31.940271 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:31Z","lastTransitionTime":"2026-01-27T08:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:32 crc kubenswrapper[4985]: I0127 08:54:32.043397 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:32 crc kubenswrapper[4985]: I0127 08:54:32.043471 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:32 crc kubenswrapper[4985]: I0127 08:54:32.043505 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:32 crc kubenswrapper[4985]: I0127 08:54:32.043552 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:32 crc kubenswrapper[4985]: I0127 08:54:32.043567 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:32Z","lastTransitionTime":"2026-01-27T08:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:32 crc kubenswrapper[4985]: I0127 08:54:32.146696 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:32 crc kubenswrapper[4985]: I0127 08:54:32.146795 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:32 crc kubenswrapper[4985]: I0127 08:54:32.146830 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:32 crc kubenswrapper[4985]: I0127 08:54:32.146864 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:32 crc kubenswrapper[4985]: I0127 08:54:32.146901 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:32Z","lastTransitionTime":"2026-01-27T08:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:32 crc kubenswrapper[4985]: I0127 08:54:32.250377 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:32 crc kubenswrapper[4985]: I0127 08:54:32.250434 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:32 crc kubenswrapper[4985]: I0127 08:54:32.250447 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:32 crc kubenswrapper[4985]: I0127 08:54:32.250467 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:32 crc kubenswrapper[4985]: I0127 08:54:32.250479 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:32Z","lastTransitionTime":"2026-01-27T08:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:32 crc kubenswrapper[4985]: I0127 08:54:32.353704 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:32 crc kubenswrapper[4985]: I0127 08:54:32.353791 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:32 crc kubenswrapper[4985]: I0127 08:54:32.353813 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:32 crc kubenswrapper[4985]: I0127 08:54:32.353843 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:32 crc kubenswrapper[4985]: I0127 08:54:32.353867 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:32Z","lastTransitionTime":"2026-01-27T08:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:32 crc kubenswrapper[4985]: I0127 08:54:32.430289 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 07:23:03.788146566 +0000 UTC Jan 27 08:54:32 crc kubenswrapper[4985]: I0127 08:54:32.451924 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 08:54:32 crc kubenswrapper[4985]: I0127 08:54:32.452050 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 08:54:32 crc kubenswrapper[4985]: I0127 08:54:32.452063 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 08:54:32 crc kubenswrapper[4985]: E0127 08:54:32.452168 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 08:54:32 crc kubenswrapper[4985]: E0127 08:54:32.452364 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 08:54:32 crc kubenswrapper[4985]: I0127 08:54:32.452396 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cscdv" Jan 27 08:54:32 crc kubenswrapper[4985]: E0127 08:54:32.452602 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 08:54:32 crc kubenswrapper[4985]: E0127 08:54:32.452742 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cscdv" podUID="5c870945-eecc-4954-a91b-d02cef8f98e2" Jan 27 08:54:32 crc kubenswrapper[4985]: I0127 08:54:32.458546 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:32 crc kubenswrapper[4985]: I0127 08:54:32.458611 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:32 crc kubenswrapper[4985]: I0127 08:54:32.458635 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:32 crc kubenswrapper[4985]: I0127 08:54:32.458660 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:32 crc kubenswrapper[4985]: I0127 08:54:32.458679 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:32Z","lastTransitionTime":"2026-01-27T08:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:32 crc kubenswrapper[4985]: I0127 08:54:32.562158 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:32 crc kubenswrapper[4985]: I0127 08:54:32.562318 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:32 crc kubenswrapper[4985]: I0127 08:54:32.562343 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:32 crc kubenswrapper[4985]: I0127 08:54:32.562440 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:32 crc kubenswrapper[4985]: I0127 08:54:32.562470 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:32Z","lastTransitionTime":"2026-01-27T08:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:32 crc kubenswrapper[4985]: I0127 08:54:32.665853 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:32 crc kubenswrapper[4985]: I0127 08:54:32.665921 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:32 crc kubenswrapper[4985]: I0127 08:54:32.665941 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:32 crc kubenswrapper[4985]: I0127 08:54:32.665969 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:32 crc kubenswrapper[4985]: I0127 08:54:32.665987 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:32Z","lastTransitionTime":"2026-01-27T08:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:32 crc kubenswrapper[4985]: I0127 08:54:32.769615 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:32 crc kubenswrapper[4985]: I0127 08:54:32.769674 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:32 crc kubenswrapper[4985]: I0127 08:54:32.769691 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:32 crc kubenswrapper[4985]: I0127 08:54:32.769714 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:32 crc kubenswrapper[4985]: I0127 08:54:32.769728 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:32Z","lastTransitionTime":"2026-01-27T08:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 27 08:54:32 crc kubenswrapper[4985]: I0127 08:54:32.873250 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 08:54:32 crc kubenswrapper[4985]: I0127 08:54:32.873310 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 08:54:32 crc kubenswrapper[4985]: I0127 08:54:32.873330 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 08:54:32 crc kubenswrapper[4985]: I0127 08:54:32.873355 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 08:54:32 crc kubenswrapper[4985]: I0127 08:54:32.873376 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:32Z","lastTransitionTime":"2026-01-27T08:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 27 08:54:32 crc kubenswrapper[4985]: I0127 08:54:32.935829 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 08:54:32 crc kubenswrapper[4985]: I0127 08:54:32.935901 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 08:54:32 crc kubenswrapper[4985]: I0127 08:54:32.935925 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 08:54:32 crc kubenswrapper[4985]: I0127 08:54:32.935961 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 08:54:32 crc kubenswrapper[4985]: I0127 08:54:32.935988 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:32Z","lastTransitionTime":"2026-01-27T08:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:32 crc kubenswrapper[4985]: E0127 08:54:32.958018 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"095ded87-0bbb-47a5-b76f-f5bb300a00ab\\\",\\\"systemUUID\\\":\\\"66a0621c-9cbd-4c42-8f6a-941d6ebd53fb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:32Z is after 2025-08-24T17:21:41Z"
Jan 27 08:54:32 crc kubenswrapper[4985]: I0127 08:54:32.964055 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 08:54:32 crc kubenswrapper[4985]: I0127 08:54:32.964130 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 08:54:32 crc kubenswrapper[4985]: I0127 08:54:32.964158 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 08:54:32 crc kubenswrapper[4985]: I0127 08:54:32.964189 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 08:54:32 crc kubenswrapper[4985]: I0127 08:54:32.964213 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:32Z","lastTransitionTime":"2026-01-27T08:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:32 crc kubenswrapper[4985]: E0127 08:54:32.989591 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"095ded87-0bbb-47a5-b76f-f5bb300a00ab\\\",\\\"systemUUID\\\":\\\"66a0621c-9cbd-4c42-8f6a-941d6ebd53fb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:32Z is after 2025-08-24T17:21:41Z"
Jan 27 08:54:32 crc kubenswrapper[4985]: I0127 08:54:32.995568 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 08:54:32 crc kubenswrapper[4985]: I0127 08:54:32.995651 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 08:54:32 crc kubenswrapper[4985]: I0127 08:54:32.995677 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 08:54:32 crc kubenswrapper[4985]: I0127 08:54:32.995839 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 08:54:32 crc kubenswrapper[4985]: I0127 08:54:32.995880 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:32Z","lastTransitionTime":"2026-01-27T08:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:33 crc kubenswrapper[4985]: E0127 08:54:33.016166 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"095ded87-0bbb-47a5-b76f-f5bb300a00ab\\\",\\\"systemUUID\\\":\\\"66a0621c-9cbd-4c42-8f6a-941d6ebd53fb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:33Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:33 crc kubenswrapper[4985]: I0127 08:54:33.021443 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:33 crc kubenswrapper[4985]: I0127 08:54:33.021491 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:33 crc kubenswrapper[4985]: I0127 08:54:33.021504 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:33 crc kubenswrapper[4985]: I0127 08:54:33.021551 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:33 crc kubenswrapper[4985]: I0127 08:54:33.021569 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:33Z","lastTransitionTime":"2026-01-27T08:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:33 crc kubenswrapper[4985]: E0127 08:54:33.043929 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{…}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:33Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:33 crc kubenswrapper[4985]: I0127 08:54:33.049804 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:33 crc kubenswrapper[4985]: I0127 08:54:33.049868 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:33 crc kubenswrapper[4985]: I0127 08:54:33.049894 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:33 crc kubenswrapper[4985]: I0127 08:54:33.049926 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:33 crc kubenswrapper[4985]: I0127 08:54:33.049953 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:33Z","lastTransitionTime":"2026-01-27T08:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:33 crc kubenswrapper[4985]: E0127 08:54:33.072936 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{…}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:33Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:33 crc kubenswrapper[4985]: E0127 08:54:33.073164 4985 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 08:54:33 crc kubenswrapper[4985]: I0127 08:54:33.075824 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:33 crc kubenswrapper[4985]: I0127 08:54:33.075889 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:33 crc kubenswrapper[4985]: I0127 08:54:33.075906 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:33 crc kubenswrapper[4985]: I0127 08:54:33.075932 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:33 crc kubenswrapper[4985]: I0127 08:54:33.075952 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:33Z","lastTransitionTime":"2026-01-27T08:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:33 crc kubenswrapper[4985]: I0127 08:54:33.112560 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5c870945-eecc-4954-a91b-d02cef8f98e2-metrics-certs\") pod \"network-metrics-daemon-cscdv\" (UID: \"5c870945-eecc-4954-a91b-d02cef8f98e2\") " pod="openshift-multus/network-metrics-daemon-cscdv" Jan 27 08:54:33 crc kubenswrapper[4985]: E0127 08:54:33.112779 4985 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 08:54:33 crc kubenswrapper[4985]: E0127 08:54:33.112851 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5c870945-eecc-4954-a91b-d02cef8f98e2-metrics-certs podName:5c870945-eecc-4954-a91b-d02cef8f98e2 nodeName:}" failed. No retries permitted until 2026-01-27 08:54:49.112832576 +0000 UTC m=+73.403927427 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5c870945-eecc-4954-a91b-d02cef8f98e2-metrics-certs") pod "network-metrics-daemon-cscdv" (UID: "5c870945-eecc-4954-a91b-d02cef8f98e2") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 08:54:33 crc kubenswrapper[4985]: I0127 08:54:33.179398 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:33 crc kubenswrapper[4985]: I0127 08:54:33.179451 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:33 crc kubenswrapper[4985]: I0127 08:54:33.179472 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:33 crc kubenswrapper[4985]: I0127 08:54:33.179504 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:33 crc kubenswrapper[4985]: I0127 08:54:33.179764 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:33Z","lastTransitionTime":"2026-01-27T08:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:33 crc kubenswrapper[4985]: I0127 08:54:33.283366 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:33 crc kubenswrapper[4985]: I0127 08:54:33.283461 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:33 crc kubenswrapper[4985]: I0127 08:54:33.283481 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:33 crc kubenswrapper[4985]: I0127 08:54:33.283563 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:33 crc kubenswrapper[4985]: I0127 08:54:33.283596 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:33Z","lastTransitionTime":"2026-01-27T08:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:33 crc kubenswrapper[4985]: I0127 08:54:33.387275 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:33 crc kubenswrapper[4985]: I0127 08:54:33.387342 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:33 crc kubenswrapper[4985]: I0127 08:54:33.387359 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:33 crc kubenswrapper[4985]: I0127 08:54:33.387385 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:33 crc kubenswrapper[4985]: I0127 08:54:33.387403 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:33Z","lastTransitionTime":"2026-01-27T08:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:33 crc kubenswrapper[4985]: I0127 08:54:33.431088 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 11:39:42.033992078 +0000 UTC Jan 27 08:54:33 crc kubenswrapper[4985]: I0127 08:54:33.490965 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:33 crc kubenswrapper[4985]: I0127 08:54:33.491012 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:33 crc kubenswrapper[4985]: I0127 08:54:33.491030 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:33 crc kubenswrapper[4985]: I0127 08:54:33.491058 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:33 crc kubenswrapper[4985]: I0127 08:54:33.491075 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:33Z","lastTransitionTime":"2026-01-27T08:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:33 crc kubenswrapper[4985]: I0127 08:54:33.594982 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:33 crc kubenswrapper[4985]: I0127 08:54:33.595063 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:33 crc kubenswrapper[4985]: I0127 08:54:33.595089 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:33 crc kubenswrapper[4985]: I0127 08:54:33.595121 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:33 crc kubenswrapper[4985]: I0127 08:54:33.595147 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:33Z","lastTransitionTime":"2026-01-27T08:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:33 crc kubenswrapper[4985]: I0127 08:54:33.698689 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:33 crc kubenswrapper[4985]: I0127 08:54:33.698735 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:33 crc kubenswrapper[4985]: I0127 08:54:33.698749 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:33 crc kubenswrapper[4985]: I0127 08:54:33.698771 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:33 crc kubenswrapper[4985]: I0127 08:54:33.698785 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:33Z","lastTransitionTime":"2026-01-27T08:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:33 crc kubenswrapper[4985]: I0127 08:54:33.802600 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:33 crc kubenswrapper[4985]: I0127 08:54:33.802658 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:33 crc kubenswrapper[4985]: I0127 08:54:33.802675 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:33 crc kubenswrapper[4985]: I0127 08:54:33.802702 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:33 crc kubenswrapper[4985]: I0127 08:54:33.802723 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:33Z","lastTransitionTime":"2026-01-27T08:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:33 crc kubenswrapper[4985]: I0127 08:54:33.906172 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:33 crc kubenswrapper[4985]: I0127 08:54:33.906246 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:33 crc kubenswrapper[4985]: I0127 08:54:33.906264 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:33 crc kubenswrapper[4985]: I0127 08:54:33.906295 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:33 crc kubenswrapper[4985]: I0127 08:54:33.906315 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:33Z","lastTransitionTime":"2026-01-27T08:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:34 crc kubenswrapper[4985]: I0127 08:54:34.009672 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:34 crc kubenswrapper[4985]: I0127 08:54:34.009728 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:34 crc kubenswrapper[4985]: I0127 08:54:34.009742 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:34 crc kubenswrapper[4985]: I0127 08:54:34.009761 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:34 crc kubenswrapper[4985]: I0127 08:54:34.009772 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:34Z","lastTransitionTime":"2026-01-27T08:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:34 crc kubenswrapper[4985]: I0127 08:54:34.113677 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:34 crc kubenswrapper[4985]: I0127 08:54:34.113737 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:34 crc kubenswrapper[4985]: I0127 08:54:34.113751 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:34 crc kubenswrapper[4985]: I0127 08:54:34.113772 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:34 crc kubenswrapper[4985]: I0127 08:54:34.113784 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:34Z","lastTransitionTime":"2026-01-27T08:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:34 crc kubenswrapper[4985]: I0127 08:54:34.217634 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:34 crc kubenswrapper[4985]: I0127 08:54:34.217710 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:34 crc kubenswrapper[4985]: I0127 08:54:34.217727 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:34 crc kubenswrapper[4985]: I0127 08:54:34.217756 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:34 crc kubenswrapper[4985]: I0127 08:54:34.217775 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:34Z","lastTransitionTime":"2026-01-27T08:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:34 crc kubenswrapper[4985]: I0127 08:54:34.321625 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:34 crc kubenswrapper[4985]: I0127 08:54:34.321738 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:34 crc kubenswrapper[4985]: I0127 08:54:34.321758 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:34 crc kubenswrapper[4985]: I0127 08:54:34.321788 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:34 crc kubenswrapper[4985]: I0127 08:54:34.321809 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:34Z","lastTransitionTime":"2026-01-27T08:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:34 crc kubenswrapper[4985]: I0127 08:54:34.425348 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:34 crc kubenswrapper[4985]: I0127 08:54:34.425405 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:34 crc kubenswrapper[4985]: I0127 08:54:34.425423 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:34 crc kubenswrapper[4985]: I0127 08:54:34.425451 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:34 crc kubenswrapper[4985]: I0127 08:54:34.425471 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:34Z","lastTransitionTime":"2026-01-27T08:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:34 crc kubenswrapper[4985]: I0127 08:54:34.431552 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 00:18:44.128672316 +0000 UTC Jan 27 08:54:34 crc kubenswrapper[4985]: I0127 08:54:34.451369 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cscdv" Jan 27 08:54:34 crc kubenswrapper[4985]: I0127 08:54:34.451464 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 08:54:34 crc kubenswrapper[4985]: I0127 08:54:34.451480 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 08:54:34 crc kubenswrapper[4985]: I0127 08:54:34.451380 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 08:54:34 crc kubenswrapper[4985]: E0127 08:54:34.451643 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cscdv" podUID="5c870945-eecc-4954-a91b-d02cef8f98e2" Jan 27 08:54:34 crc kubenswrapper[4985]: E0127 08:54:34.451825 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 08:54:34 crc kubenswrapper[4985]: E0127 08:54:34.451940 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 08:54:34 crc kubenswrapper[4985]: E0127 08:54:34.452136 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 08:54:34 crc kubenswrapper[4985]: I0127 08:54:34.531739 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:34 crc kubenswrapper[4985]: I0127 08:54:34.531796 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:34 crc kubenswrapper[4985]: I0127 08:54:34.531809 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:34 crc kubenswrapper[4985]: I0127 08:54:34.531830 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:34 crc kubenswrapper[4985]: I0127 08:54:34.531842 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:34Z","lastTransitionTime":"2026-01-27T08:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:34 crc kubenswrapper[4985]: I0127 08:54:34.635051 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:34 crc kubenswrapper[4985]: I0127 08:54:34.635105 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:34 crc kubenswrapper[4985]: I0127 08:54:34.635116 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:34 crc kubenswrapper[4985]: I0127 08:54:34.635137 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:34 crc kubenswrapper[4985]: I0127 08:54:34.635150 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:34Z","lastTransitionTime":"2026-01-27T08:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:34 crc kubenswrapper[4985]: I0127 08:54:34.738263 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:34 crc kubenswrapper[4985]: I0127 08:54:34.738359 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:34 crc kubenswrapper[4985]: I0127 08:54:34.738371 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:34 crc kubenswrapper[4985]: I0127 08:54:34.738391 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:34 crc kubenswrapper[4985]: I0127 08:54:34.738405 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:34Z","lastTransitionTime":"2026-01-27T08:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:34 crc kubenswrapper[4985]: I0127 08:54:34.840837 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:34 crc kubenswrapper[4985]: I0127 08:54:34.840909 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:34 crc kubenswrapper[4985]: I0127 08:54:34.840952 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:34 crc kubenswrapper[4985]: I0127 08:54:34.840989 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:34 crc kubenswrapper[4985]: I0127 08:54:34.841016 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:34Z","lastTransitionTime":"2026-01-27T08:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:34 crc kubenswrapper[4985]: I0127 08:54:34.944312 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:34 crc kubenswrapper[4985]: I0127 08:54:34.944366 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:34 crc kubenswrapper[4985]: I0127 08:54:34.944381 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:34 crc kubenswrapper[4985]: I0127 08:54:34.944403 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:34 crc kubenswrapper[4985]: I0127 08:54:34.944417 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:34Z","lastTransitionTime":"2026-01-27T08:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:35 crc kubenswrapper[4985]: I0127 08:54:35.047307 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:35 crc kubenswrapper[4985]: I0127 08:54:35.047366 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:35 crc kubenswrapper[4985]: I0127 08:54:35.047378 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:35 crc kubenswrapper[4985]: I0127 08:54:35.047401 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:35 crc kubenswrapper[4985]: I0127 08:54:35.047414 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:35Z","lastTransitionTime":"2026-01-27T08:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:35 crc kubenswrapper[4985]: I0127 08:54:35.150900 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:35 crc kubenswrapper[4985]: I0127 08:54:35.150974 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:35 crc kubenswrapper[4985]: I0127 08:54:35.150994 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:35 crc kubenswrapper[4985]: I0127 08:54:35.151104 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:35 crc kubenswrapper[4985]: I0127 08:54:35.151124 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:35Z","lastTransitionTime":"2026-01-27T08:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:35 crc kubenswrapper[4985]: I0127 08:54:35.254629 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:35 crc kubenswrapper[4985]: I0127 08:54:35.254687 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:35 crc kubenswrapper[4985]: I0127 08:54:35.254697 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:35 crc kubenswrapper[4985]: I0127 08:54:35.254714 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:35 crc kubenswrapper[4985]: I0127 08:54:35.254725 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:35Z","lastTransitionTime":"2026-01-27T08:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:35 crc kubenswrapper[4985]: I0127 08:54:35.358011 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:35 crc kubenswrapper[4985]: I0127 08:54:35.358072 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:35 crc kubenswrapper[4985]: I0127 08:54:35.358092 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:35 crc kubenswrapper[4985]: I0127 08:54:35.358118 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:35 crc kubenswrapper[4985]: I0127 08:54:35.358137 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:35Z","lastTransitionTime":"2026-01-27T08:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:35 crc kubenswrapper[4985]: I0127 08:54:35.431938 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 17:04:06.068631241 +0000 UTC Jan 27 08:54:35 crc kubenswrapper[4985]: I0127 08:54:35.461038 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:35 crc kubenswrapper[4985]: I0127 08:54:35.461099 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:35 crc kubenswrapper[4985]: I0127 08:54:35.461117 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:35 crc kubenswrapper[4985]: I0127 08:54:35.461178 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:35 crc kubenswrapper[4985]: I0127 08:54:35.461204 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:35Z","lastTransitionTime":"2026-01-27T08:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:35 crc kubenswrapper[4985]: I0127 08:54:35.564822 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:35 crc kubenswrapper[4985]: I0127 08:54:35.564889 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:35 crc kubenswrapper[4985]: I0127 08:54:35.564906 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:35 crc kubenswrapper[4985]: I0127 08:54:35.564931 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:35 crc kubenswrapper[4985]: I0127 08:54:35.564949 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:35Z","lastTransitionTime":"2026-01-27T08:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:35 crc kubenswrapper[4985]: I0127 08:54:35.668681 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:35 crc kubenswrapper[4985]: I0127 08:54:35.668761 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:35 crc kubenswrapper[4985]: I0127 08:54:35.668775 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:35 crc kubenswrapper[4985]: I0127 08:54:35.668795 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:35 crc kubenswrapper[4985]: I0127 08:54:35.668810 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:35Z","lastTransitionTime":"2026-01-27T08:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:35 crc kubenswrapper[4985]: I0127 08:54:35.772345 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:35 crc kubenswrapper[4985]: I0127 08:54:35.772404 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:35 crc kubenswrapper[4985]: I0127 08:54:35.772417 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:35 crc kubenswrapper[4985]: I0127 08:54:35.772443 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:35 crc kubenswrapper[4985]: I0127 08:54:35.772463 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:35Z","lastTransitionTime":"2026-01-27T08:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:35 crc kubenswrapper[4985]: I0127 08:54:35.875928 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:35 crc kubenswrapper[4985]: I0127 08:54:35.876000 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:35 crc kubenswrapper[4985]: I0127 08:54:35.876013 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:35 crc kubenswrapper[4985]: I0127 08:54:35.876034 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:35 crc kubenswrapper[4985]: I0127 08:54:35.876048 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:35Z","lastTransitionTime":"2026-01-27T08:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:35 crc kubenswrapper[4985]: I0127 08:54:35.978937 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:35 crc kubenswrapper[4985]: I0127 08:54:35.979000 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:35 crc kubenswrapper[4985]: I0127 08:54:35.979010 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:35 crc kubenswrapper[4985]: I0127 08:54:35.979028 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:35 crc kubenswrapper[4985]: I0127 08:54:35.979040 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:35Z","lastTransitionTime":"2026-01-27T08:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:36 crc kubenswrapper[4985]: I0127 08:54:36.084171 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:36 crc kubenswrapper[4985]: I0127 08:54:36.084236 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:36 crc kubenswrapper[4985]: I0127 08:54:36.084249 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:36 crc kubenswrapper[4985]: I0127 08:54:36.084273 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:36 crc kubenswrapper[4985]: I0127 08:54:36.084296 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:36Z","lastTransitionTime":"2026-01-27T08:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:36 crc kubenswrapper[4985]: I0127 08:54:36.187376 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:36 crc kubenswrapper[4985]: I0127 08:54:36.187786 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:36 crc kubenswrapper[4985]: I0127 08:54:36.187855 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:36 crc kubenswrapper[4985]: I0127 08:54:36.187922 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:36 crc kubenswrapper[4985]: I0127 08:54:36.188014 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:36Z","lastTransitionTime":"2026-01-27T08:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:36 crc kubenswrapper[4985]: I0127 08:54:36.290695 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:36 crc kubenswrapper[4985]: I0127 08:54:36.291148 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:36 crc kubenswrapper[4985]: I0127 08:54:36.291237 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:36 crc kubenswrapper[4985]: I0127 08:54:36.291335 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:36 crc kubenswrapper[4985]: I0127 08:54:36.291400 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:36Z","lastTransitionTime":"2026-01-27T08:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:36 crc kubenswrapper[4985]: I0127 08:54:36.394543 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:36 crc kubenswrapper[4985]: I0127 08:54:36.394610 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:36 crc kubenswrapper[4985]: I0127 08:54:36.394627 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:36 crc kubenswrapper[4985]: I0127 08:54:36.394652 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:36 crc kubenswrapper[4985]: I0127 08:54:36.394670 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:36Z","lastTransitionTime":"2026-01-27T08:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:36 crc kubenswrapper[4985]: I0127 08:54:36.433169 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 02:55:37.948944125 +0000 UTC Jan 27 08:54:36 crc kubenswrapper[4985]: I0127 08:54:36.451488 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cscdv" Jan 27 08:54:36 crc kubenswrapper[4985]: E0127 08:54:36.451987 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cscdv" podUID="5c870945-eecc-4954-a91b-d02cef8f98e2" Jan 27 08:54:36 crc kubenswrapper[4985]: I0127 08:54:36.451640 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 08:54:36 crc kubenswrapper[4985]: I0127 08:54:36.451610 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 08:54:36 crc kubenswrapper[4985]: I0127 08:54:36.451649 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 08:54:36 crc kubenswrapper[4985]: E0127 08:54:36.452739 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 08:54:36 crc kubenswrapper[4985]: E0127 08:54:36.452949 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 08:54:36 crc kubenswrapper[4985]: E0127 08:54:36.453028 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 08:54:36 crc kubenswrapper[4985]: I0127 08:54:36.467436 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a4e626ecb5f5c1f869a0271e66541b76d4ab763322e2d626c4d86af64bccad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:36Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:36 crc kubenswrapper[4985]: I0127 08:54:36.489447 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:36Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:36 crc kubenswrapper[4985]: I0127 08:54:36.497746 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:36 crc kubenswrapper[4985]: I0127 08:54:36.497808 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:36 crc kubenswrapper[4985]: I0127 08:54:36.497821 4985 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:36 crc kubenswrapper[4985]: I0127 08:54:36.497842 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:36 crc kubenswrapper[4985]: I0127 08:54:36.497860 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:36Z","lastTransitionTime":"2026-01-27T08:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:36 crc kubenswrapper[4985]: I0127 08:54:36.509912 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cqdrf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ddda14a-730e-4c1f-afea-07c95221ba04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c411afefb82d30542592e4959d2495036c0408459ba
2a16aff75223ad0d29287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernet
es.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-258cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cqdrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:36Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:36 crc kubenswrapper[4985]: I0127 08:54:36.532498 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9155a9e-2bdf-4f6f-baa7-0d7c6baac37d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f776820a00f7ca6b6867a188ee633debccb2bc22aaff8c659e452c2667933e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-de
v/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba179c4dde0de76f4a8dbfb8ceb98d2e0dbead5e955bd71171914fd913894412\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dd58dd369dc5654aa8ab2d34c77a69607b51e97ae36947b88caf6a2de7c5fbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd67389a909eaadfb147d17b98a2147bb96aea606ba994568389cb5be9ee6f93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b491f71d942363b5d2bab13a5219b0eb9b7f7617c0978bfe012946aa241a13c1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"message\\\":\\\" 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 08:53:55.866201 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 08:53:55.866247 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 08:53:55.866252 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 08:53:55.866257 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0127 08:53:55.866955 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-231041040/tls.crt::/tmp/serving-cert-231041040/tls.key\\\\\\\" 
certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769504019\\\\\\\\\\\\\\\" (2026-01-27 08:53:39 +0000 UTC to 2026-02-26 08:53:40 +0000 UTC (now=2026-01-27 08:53:55.866923826 +0000 UTC))\\\\\\\"\\\\nI0127 08:53:55.867136 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769504030\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769504030\\\\\\\\\\\\\\\" (2026-01-27 07:53:50 +0000 UTC to 2027-01-27 07:53:50 +0000 UTC (now=2026-01-27 08:53:55.867111136 +0000 UTC))\\\\\\\"\\\\nI0127 08:53:55.867161 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 08:53:55.867183 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0127 08:53:55.867215 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-231041040/tls.crt::/tmp/serving-cert-231041040/tls.key\\\\\\\"\\\\nI0127 08:53:55.867343 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0127 08:53:55.868835 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b09da7bae5a0f15ab2c5463373193603d1412170828b99490007cee7b067df63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:36Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:36 crc kubenswrapper[4985]: I0127 08:54:36.552200 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:36Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:36 crc kubenswrapper[4985]: I0127 08:54:36.579402 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://740b7fd6437a51492236c929096639016d34d684d2742404a428355b3d9ee51e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e53f9d50e5099f9c55a0bf8a63c31bf7687c7b9df334251189af1c4c02df5dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b176f2daca0d26fc9e3d30a8547b5526c90221ce2932c01c876988e8606b4dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dbb8096184a88c573060b14b7b5cac9a5abdc5af97ad7efdc50ba216e77522c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f164f612d1435a7f2a6677cb810bc836a44faa77952909e4290e31806ae631ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bfc598056438a398f1ea20e047af950aa776a7e43bdb694728c1c138dff6440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0db1aeddec065e7b000a8494a25dcd6d16757437a78af2c71dd554c15c74a826\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0db1aeddec065e7b000a8494a25dcd6d16757437a78af2c71dd554c15c74a826\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T08:54:27Z\\\",\\\"message\\\":\\\"oved *v1.Node event handler 7\\\\nI0127 08:54:27.563980 6617 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0127 08:54:27.564011 6617 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 08:54:27.564037 6617 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0127 08:54:27.564061 6617 handler.go:208] Removed *v1.Pod event 
handler 3\\\\nI0127 08:54:27.564190 6617 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0127 08:54:27.564242 6617 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 08:54:27.564306 6617 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0127 08:54:27.564311 6617 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 08:54:27.564322 6617 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 08:54:27.564351 6617 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 08:54:27.564376 6617 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 08:54:27.564383 6617 factory.go:656] Stopping watch factory\\\\nI0127 08:54:27.564403 6617 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 08:54:27.564427 6617 ovnkube.go:599] Stopped ovnkube\\\\nI0127 08:54:27.564493 6617 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 08:54:27.564598 6617 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-kqdf4_openshift-ovn-kubernetes(c6239c91-d93d-4db8-ac4b-d44ddbc7c100)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a1f1e90a882fa697c344cb87db77d2f0b02b33920f6555ca549ad51b1fb5c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c1c716f751dd0e139554e4bef4fc33694f4afa5482605f651326882af8d99c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c1c716f751dd0e139
554e4bef4fc33694f4afa5482605f651326882af8d99c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kqdf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:36Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:36 crc kubenswrapper[4985]: I0127 08:54:36.595464 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cscdv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c870945-eecc-4954-a91b-d02cef8f98e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:17Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k24x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k24x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cscdv\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:36Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:36 crc kubenswrapper[4985]: I0127 08:54:36.603695 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:36 crc kubenswrapper[4985]: I0127 08:54:36.604478 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:36 crc kubenswrapper[4985]: I0127 08:54:36.604558 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:36 crc kubenswrapper[4985]: I0127 08:54:36.604588 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:36 crc kubenswrapper[4985]: I0127 08:54:36.604604 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:36Z","lastTransitionTime":"2026-01-27T08:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:36 crc kubenswrapper[4985]: I0127 08:54:36.614138 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46f88416-c54a-4ab5-a4d3-07f71bae9c33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d59e5d6d903bfaaedef392fdd94df2da1e4d965ddeb6b25d64b225725b39dd0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c207bc40519
4647411332c492e34b0c95d3321cf626a57a4a750fbcea34f54ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8ce06c7792357df5655ff0be13f13c8b06bb1a2eb27c4bbbff11b94a62f29e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7554f25de6e4649eafd52356c5c255d0a2dc73b0dd6c9a9bcea3ee383bef17db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:36Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:36 crc kubenswrapper[4985]: I0127 08:54:36.630330 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b46c56d522e6b6c9c18b7bd9fc104d6657644d0b41fdcad15ac9fb802ca5fd0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9506e391bd8c293440c55dc71a239b25d6d6b3431ad22c5ca699baa4d09c325b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:36Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:36 crc kubenswrapper[4985]: I0127 08:54:36.645351 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:36Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:36 crc kubenswrapper[4985]: I0127 08:54:36.659594 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed8d2575835b3ab3d3197bbfbc027e2de0e2aefe10c971838bf4b72f804c4582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T08:54:36Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:36 crc kubenswrapper[4985]: I0127 08:54:36.671101 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c066dd2f-48d4-4f4f-935d-0e772678e610\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1beb432862ae348b405b1a51f1a187fd469f9df0605fa9e8a39b1c9a34ae8e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vtb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d6574971ded4a24364f08396c4f0523ddfd74f76d0bb386072ffb86c812d7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vtb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lp9n5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:36Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:36 crc kubenswrapper[4985]: I0127 08:54:36.683116 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlccz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ba17902-809a-4efc-9a8c-6f9b611c2af9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00cd6621bcfc77aeb9ecc4d6894b33cd3ef1f975b3c593e0eff812ef51978f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:36Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:36 crc kubenswrapper[4985]: I0127 08:54:36.707160 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7e08bcf-6937-4593-9736-170df821dd88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1dab42ffc04840ad2205b50c568ece39850188ccd9f97d8186c7f0b86e06805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7f7c69cc8bc134cfbf3552f10eb7abd8b5df115fef6ce86a2658259a9635bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0450f271c01ac528419de0165dd9b88b8d5913062a7cd5affce7050842f91a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08
:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b33bda4f575f8f14a3439f00a39eda2ee61f7b38500f5372c17d014df1098535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c502ce0c2374fb1b79cd8dc7e7c9eba4a67f998fee012827524d775eccbe4de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d423f866a41d6c8a926fbabe4f0a69519e10a9448502c6363b35cb550ba904d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d423f866a41d6c8a926fbabe4f0a69519e10a9448502c6363b35cb550ba904d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://917f905b67f494d8d44f5a00e53ae72cb9c2b307f7e7da17af7fd20db4ed8704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://917f905b67f494d8d44f5a00e53ae72cb9c2b307f7e7da17af7fd20db4ed8704\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8692ba1b6e917a57164d549dba0fc26ad49b14b9bc8a39772daddfcd5f70a008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8692ba1b6e917a57164d549dba0fc26ad49b14b9bc8a39772daddfcd5f70a008\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:36Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:36 crc kubenswrapper[4985]: I0127 08:54:36.708021 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:36 crc kubenswrapper[4985]: I0127 08:54:36.708064 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:36 crc kubenswrapper[4985]: I0127 08:54:36.708075 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:36 crc kubenswrapper[4985]: I0127 08:54:36.708091 4985 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Jan 27 08:54:36 crc kubenswrapper[4985]: I0127 08:54:36.708102 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:36Z","lastTransitionTime":"2026-01-27T08:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:36 crc kubenswrapper[4985]: I0127 08:54:36.723896 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"797a9d27-06cc-44aa-811f-881bdf2d1e7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a5ce9f15fd3505c744967be012b7eed6d909724e9b71ba07d7e9d68eb40cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de25
97126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88c257e53b27d6dda5999a3053f9c62b54331bf034225c118dddfed685549827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87667099919fbe74e54396b3e8b538627769f2401d318326eb9a1d6a88bda640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"
/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3ad4e68bbb4b8338f534dc026ca9f1fe9fb161b29fee8945f1789a66965dea2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3ad4e68bbb4b8338f534dc026ca9f1fe9fb161b29fee8945f1789a66965dea2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:36Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:36 crc kubenswrapper[4985]: I0127 08:54:36.738217 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5z8px" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7997cb84-9997-4cf4-8794-2eb145a5c324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://520da40d4496e0fd47e4e091e9eba349e1bd3bf2f97898f6aac53a1b1b8925f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkwj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5z8px\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:36Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:36 crc kubenswrapper[4985]: I0127 08:54:36.766179 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rfnvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea1cd1b8-a185-461d-9302-aa03be205225\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67af065c12addbef44849906b964718074de1f0d7a0b87a028bf989ec28f82ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3459a9dccba77fc189ecac1d0186520dfbd4ce6ea40d9e5b2fdcf2ddabd7875d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3459a9dccba77fc189ecac1d0186520dfbd4ce6ea40d9e5b2fdcf2ddabd7875d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d313b9f0d95603fdc152e96acf5d788d7792df3413c3b8bc25aad20bb702c5d\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d313b9f0d95603fdc152e96acf5d788d7792df3413c3b8bc25aad20bb702c5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4035a5eff34b30f1fab897a8d8b6026437a1ab1f521f062241a79d070b94bdc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4035a5eff34b30f1fab897a8d8b6026437a1ab1f521f062241a79d070b94bdc1\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6da775953e78b30b1f606e26608dbedae8e08fb12eba87f65356bc439079f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6da775953e78b30b1f606e26608dbedae8e08fb12eba87f65356bc439079f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b32797fe9e64f0e4f0332fa40261fb0f
eb75b6dc783ff78b0aea5ccb0d4a9ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b32797fe9e64f0e4f0332fa40261fb0feb75b6dc783ff78b0aea5ccb0d4a9ee5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcaa036060edc039a0c9621176b80823cb9913176b1a1e6b06aa033a8e479b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcaa036060edc039a0c9621176b80823cb9913176b1a1e6b06aa033a8e479b96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-01-27T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rfnvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:36Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:36 crc kubenswrapper[4985]: I0127 08:54:36.780910 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s74hq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad423d26-ea00-4a86-8eed-bba6433ce382\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://031493bfd9eba63a4627b6a0ec45bc556e8a6cae213a84f7b158e2bede2da5a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hg5xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42a6371f6e7f2b3811af6ec717f15eff6a85c
8c39caf011e1173a4fcaf20f29a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hg5xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-s74hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:36Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:36 crc kubenswrapper[4985]: I0127 08:54:36.811665 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:36 crc kubenswrapper[4985]: I0127 08:54:36.811739 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:36 crc kubenswrapper[4985]: I0127 08:54:36.811761 4985 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:36 crc kubenswrapper[4985]: I0127 08:54:36.812178 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:36 crc kubenswrapper[4985]: I0127 08:54:36.812238 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:36Z","lastTransitionTime":"2026-01-27T08:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:36 crc kubenswrapper[4985]: I0127 08:54:36.915941 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:36 crc kubenswrapper[4985]: I0127 08:54:36.916014 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:36 crc kubenswrapper[4985]: I0127 08:54:36.916033 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:36 crc kubenswrapper[4985]: I0127 08:54:36.916062 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:36 crc kubenswrapper[4985]: I0127 08:54:36.916082 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:36Z","lastTransitionTime":"2026-01-27T08:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:37 crc kubenswrapper[4985]: I0127 08:54:37.019409 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:37 crc kubenswrapper[4985]: I0127 08:54:37.019482 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:37 crc kubenswrapper[4985]: I0127 08:54:37.019499 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:37 crc kubenswrapper[4985]: I0127 08:54:37.019556 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:37 crc kubenswrapper[4985]: I0127 08:54:37.019577 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:37Z","lastTransitionTime":"2026-01-27T08:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:37 crc kubenswrapper[4985]: I0127 08:54:37.122270 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:37 crc kubenswrapper[4985]: I0127 08:54:37.122324 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:37 crc kubenswrapper[4985]: I0127 08:54:37.122338 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:37 crc kubenswrapper[4985]: I0127 08:54:37.122359 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:37 crc kubenswrapper[4985]: I0127 08:54:37.122373 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:37Z","lastTransitionTime":"2026-01-27T08:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:37 crc kubenswrapper[4985]: I0127 08:54:37.225878 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:37 crc kubenswrapper[4985]: I0127 08:54:37.225970 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:37 crc kubenswrapper[4985]: I0127 08:54:37.226002 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:37 crc kubenswrapper[4985]: I0127 08:54:37.226038 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:37 crc kubenswrapper[4985]: I0127 08:54:37.226062 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:37Z","lastTransitionTime":"2026-01-27T08:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:37 crc kubenswrapper[4985]: I0127 08:54:37.329149 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:37 crc kubenswrapper[4985]: I0127 08:54:37.329220 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:37 crc kubenswrapper[4985]: I0127 08:54:37.329244 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:37 crc kubenswrapper[4985]: I0127 08:54:37.329279 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:37 crc kubenswrapper[4985]: I0127 08:54:37.329304 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:37Z","lastTransitionTime":"2026-01-27T08:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:37 crc kubenswrapper[4985]: I0127 08:54:37.432262 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:37 crc kubenswrapper[4985]: I0127 08:54:37.432334 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:37 crc kubenswrapper[4985]: I0127 08:54:37.432357 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:37 crc kubenswrapper[4985]: I0127 08:54:37.432389 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:37 crc kubenswrapper[4985]: I0127 08:54:37.432411 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:37Z","lastTransitionTime":"2026-01-27T08:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:37 crc kubenswrapper[4985]: I0127 08:54:37.434480 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 11:32:05.417864232 +0000 UTC Jan 27 08:54:37 crc kubenswrapper[4985]: I0127 08:54:37.536048 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:37 crc kubenswrapper[4985]: I0127 08:54:37.536104 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:37 crc kubenswrapper[4985]: I0127 08:54:37.536122 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:37 crc kubenswrapper[4985]: I0127 08:54:37.536148 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:37 crc kubenswrapper[4985]: I0127 08:54:37.536168 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:37Z","lastTransitionTime":"2026-01-27T08:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:37 crc kubenswrapper[4985]: I0127 08:54:37.639724 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:37 crc kubenswrapper[4985]: I0127 08:54:37.639772 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:37 crc kubenswrapper[4985]: I0127 08:54:37.639784 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:37 crc kubenswrapper[4985]: I0127 08:54:37.639816 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:37 crc kubenswrapper[4985]: I0127 08:54:37.639829 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:37Z","lastTransitionTime":"2026-01-27T08:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:37 crc kubenswrapper[4985]: I0127 08:54:37.743263 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:37 crc kubenswrapper[4985]: I0127 08:54:37.743342 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:37 crc kubenswrapper[4985]: I0127 08:54:37.743370 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:37 crc kubenswrapper[4985]: I0127 08:54:37.743402 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:37 crc kubenswrapper[4985]: I0127 08:54:37.743423 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:37Z","lastTransitionTime":"2026-01-27T08:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:37 crc kubenswrapper[4985]: I0127 08:54:37.847121 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:37 crc kubenswrapper[4985]: I0127 08:54:37.847176 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:37 crc kubenswrapper[4985]: I0127 08:54:37.847188 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:37 crc kubenswrapper[4985]: I0127 08:54:37.847209 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:37 crc kubenswrapper[4985]: I0127 08:54:37.847221 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:37Z","lastTransitionTime":"2026-01-27T08:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:37 crc kubenswrapper[4985]: I0127 08:54:37.950013 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:37 crc kubenswrapper[4985]: I0127 08:54:37.950064 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:37 crc kubenswrapper[4985]: I0127 08:54:37.950076 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:37 crc kubenswrapper[4985]: I0127 08:54:37.950096 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:37 crc kubenswrapper[4985]: I0127 08:54:37.950109 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:37Z","lastTransitionTime":"2026-01-27T08:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:38 crc kubenswrapper[4985]: I0127 08:54:38.052999 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:38 crc kubenswrapper[4985]: I0127 08:54:38.053123 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:38 crc kubenswrapper[4985]: I0127 08:54:38.053145 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:38 crc kubenswrapper[4985]: I0127 08:54:38.053173 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:38 crc kubenswrapper[4985]: I0127 08:54:38.053195 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:38Z","lastTransitionTime":"2026-01-27T08:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:38 crc kubenswrapper[4985]: I0127 08:54:38.156325 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:38 crc kubenswrapper[4985]: I0127 08:54:38.156402 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:38 crc kubenswrapper[4985]: I0127 08:54:38.156419 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:38 crc kubenswrapper[4985]: I0127 08:54:38.156440 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:38 crc kubenswrapper[4985]: I0127 08:54:38.156455 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:38Z","lastTransitionTime":"2026-01-27T08:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:38 crc kubenswrapper[4985]: I0127 08:54:38.259998 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:38 crc kubenswrapper[4985]: I0127 08:54:38.260076 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:38 crc kubenswrapper[4985]: I0127 08:54:38.260088 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:38 crc kubenswrapper[4985]: I0127 08:54:38.260105 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:38 crc kubenswrapper[4985]: I0127 08:54:38.260114 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:38Z","lastTransitionTime":"2026-01-27T08:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:38 crc kubenswrapper[4985]: I0127 08:54:38.364784 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:38 crc kubenswrapper[4985]: I0127 08:54:38.364899 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:38 crc kubenswrapper[4985]: I0127 08:54:38.364929 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:38 crc kubenswrapper[4985]: I0127 08:54:38.364967 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:38 crc kubenswrapper[4985]: I0127 08:54:38.365006 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:38Z","lastTransitionTime":"2026-01-27T08:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:38 crc kubenswrapper[4985]: I0127 08:54:38.435312 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 12:12:41.417419487 +0000 UTC Jan 27 08:54:38 crc kubenswrapper[4985]: I0127 08:54:38.451806 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 08:54:38 crc kubenswrapper[4985]: I0127 08:54:38.451874 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cscdv" Jan 27 08:54:38 crc kubenswrapper[4985]: I0127 08:54:38.451912 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 08:54:38 crc kubenswrapper[4985]: I0127 08:54:38.451806 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 08:54:38 crc kubenswrapper[4985]: E0127 08:54:38.452026 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 08:54:38 crc kubenswrapper[4985]: E0127 08:54:38.452148 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 08:54:38 crc kubenswrapper[4985]: E0127 08:54:38.452309 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cscdv" podUID="5c870945-eecc-4954-a91b-d02cef8f98e2" Jan 27 08:54:38 crc kubenswrapper[4985]: E0127 08:54:38.452425 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 08:54:38 crc kubenswrapper[4985]: I0127 08:54:38.467801 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:38 crc kubenswrapper[4985]: I0127 08:54:38.467875 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:38 crc kubenswrapper[4985]: I0127 08:54:38.467890 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:38 crc kubenswrapper[4985]: I0127 08:54:38.467908 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:38 crc kubenswrapper[4985]: I0127 08:54:38.467920 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:38Z","lastTransitionTime":"2026-01-27T08:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:38 crc kubenswrapper[4985]: I0127 08:54:38.571670 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:38 crc kubenswrapper[4985]: I0127 08:54:38.571734 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:38 crc kubenswrapper[4985]: I0127 08:54:38.571754 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:38 crc kubenswrapper[4985]: I0127 08:54:38.571781 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:38 crc kubenswrapper[4985]: I0127 08:54:38.571800 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:38Z","lastTransitionTime":"2026-01-27T08:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:38 crc kubenswrapper[4985]: I0127 08:54:38.675813 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:38 crc kubenswrapper[4985]: I0127 08:54:38.675881 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:38 crc kubenswrapper[4985]: I0127 08:54:38.675902 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:38 crc kubenswrapper[4985]: I0127 08:54:38.675929 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:38 crc kubenswrapper[4985]: I0127 08:54:38.675947 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:38Z","lastTransitionTime":"2026-01-27T08:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:38 crc kubenswrapper[4985]: I0127 08:54:38.780559 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:38 crc kubenswrapper[4985]: I0127 08:54:38.780601 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:38 crc kubenswrapper[4985]: I0127 08:54:38.780612 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:38 crc kubenswrapper[4985]: I0127 08:54:38.780633 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:38 crc kubenswrapper[4985]: I0127 08:54:38.780643 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:38Z","lastTransitionTime":"2026-01-27T08:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:38 crc kubenswrapper[4985]: I0127 08:54:38.884394 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:38 crc kubenswrapper[4985]: I0127 08:54:38.884449 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:38 crc kubenswrapper[4985]: I0127 08:54:38.884460 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:38 crc kubenswrapper[4985]: I0127 08:54:38.884479 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:38 crc kubenswrapper[4985]: I0127 08:54:38.884493 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:38Z","lastTransitionTime":"2026-01-27T08:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:38 crc kubenswrapper[4985]: I0127 08:54:38.987271 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:38 crc kubenswrapper[4985]: I0127 08:54:38.987352 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:38 crc kubenswrapper[4985]: I0127 08:54:38.987366 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:38 crc kubenswrapper[4985]: I0127 08:54:38.987386 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:38 crc kubenswrapper[4985]: I0127 08:54:38.987401 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:38Z","lastTransitionTime":"2026-01-27T08:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:39 crc kubenswrapper[4985]: I0127 08:54:39.091724 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:39 crc kubenswrapper[4985]: I0127 08:54:39.091771 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:39 crc kubenswrapper[4985]: I0127 08:54:39.091784 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:39 crc kubenswrapper[4985]: I0127 08:54:39.091801 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:39 crc kubenswrapper[4985]: I0127 08:54:39.091813 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:39Z","lastTransitionTime":"2026-01-27T08:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:39 crc kubenswrapper[4985]: I0127 08:54:39.194980 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:39 crc kubenswrapper[4985]: I0127 08:54:39.195045 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:39 crc kubenswrapper[4985]: I0127 08:54:39.195062 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:39 crc kubenswrapper[4985]: I0127 08:54:39.195085 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:39 crc kubenswrapper[4985]: I0127 08:54:39.195100 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:39Z","lastTransitionTime":"2026-01-27T08:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:39 crc kubenswrapper[4985]: I0127 08:54:39.300109 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:39 crc kubenswrapper[4985]: I0127 08:54:39.300199 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:39 crc kubenswrapper[4985]: I0127 08:54:39.300253 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:39 crc kubenswrapper[4985]: I0127 08:54:39.300281 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:39 crc kubenswrapper[4985]: I0127 08:54:39.300334 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:39Z","lastTransitionTime":"2026-01-27T08:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:39 crc kubenswrapper[4985]: I0127 08:54:39.403572 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:39 crc kubenswrapper[4985]: I0127 08:54:39.403630 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:39 crc kubenswrapper[4985]: I0127 08:54:39.403644 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:39 crc kubenswrapper[4985]: I0127 08:54:39.403681 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:39 crc kubenswrapper[4985]: I0127 08:54:39.403694 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:39Z","lastTransitionTime":"2026-01-27T08:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:39 crc kubenswrapper[4985]: I0127 08:54:39.436006 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 19:20:54.874872394 +0000 UTC Jan 27 08:54:39 crc kubenswrapper[4985]: I0127 08:54:39.506109 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:39 crc kubenswrapper[4985]: I0127 08:54:39.506179 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:39 crc kubenswrapper[4985]: I0127 08:54:39.506190 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:39 crc kubenswrapper[4985]: I0127 08:54:39.506225 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:39 crc kubenswrapper[4985]: I0127 08:54:39.506241 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:39Z","lastTransitionTime":"2026-01-27T08:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:39 crc kubenswrapper[4985]: I0127 08:54:39.609657 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:39 crc kubenswrapper[4985]: I0127 08:54:39.609717 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:39 crc kubenswrapper[4985]: I0127 08:54:39.609733 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:39 crc kubenswrapper[4985]: I0127 08:54:39.609759 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:39 crc kubenswrapper[4985]: I0127 08:54:39.609780 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:39Z","lastTransitionTime":"2026-01-27T08:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:39 crc kubenswrapper[4985]: I0127 08:54:39.713226 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:39 crc kubenswrapper[4985]: I0127 08:54:39.713279 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:39 crc kubenswrapper[4985]: I0127 08:54:39.713294 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:39 crc kubenswrapper[4985]: I0127 08:54:39.713315 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:39 crc kubenswrapper[4985]: I0127 08:54:39.713332 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:39Z","lastTransitionTime":"2026-01-27T08:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:39 crc kubenswrapper[4985]: I0127 08:54:39.815795 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:39 crc kubenswrapper[4985]: I0127 08:54:39.815857 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:39 crc kubenswrapper[4985]: I0127 08:54:39.815867 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:39 crc kubenswrapper[4985]: I0127 08:54:39.815890 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:39 crc kubenswrapper[4985]: I0127 08:54:39.815901 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:39Z","lastTransitionTime":"2026-01-27T08:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:39 crc kubenswrapper[4985]: I0127 08:54:39.919186 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:39 crc kubenswrapper[4985]: I0127 08:54:39.919239 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:39 crc kubenswrapper[4985]: I0127 08:54:39.919252 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:39 crc kubenswrapper[4985]: I0127 08:54:39.919273 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:39 crc kubenswrapper[4985]: I0127 08:54:39.919288 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:39Z","lastTransitionTime":"2026-01-27T08:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:40 crc kubenswrapper[4985]: I0127 08:54:40.022846 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:40 crc kubenswrapper[4985]: I0127 08:54:40.022923 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:40 crc kubenswrapper[4985]: I0127 08:54:40.022943 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:40 crc kubenswrapper[4985]: I0127 08:54:40.022973 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:40 crc kubenswrapper[4985]: I0127 08:54:40.022995 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:40Z","lastTransitionTime":"2026-01-27T08:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:40 crc kubenswrapper[4985]: I0127 08:54:40.125748 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:40 crc kubenswrapper[4985]: I0127 08:54:40.125832 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:40 crc kubenswrapper[4985]: I0127 08:54:40.125870 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:40 crc kubenswrapper[4985]: I0127 08:54:40.125903 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:40 crc kubenswrapper[4985]: I0127 08:54:40.125937 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:40Z","lastTransitionTime":"2026-01-27T08:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:40 crc kubenswrapper[4985]: I0127 08:54:40.228128 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:40 crc kubenswrapper[4985]: I0127 08:54:40.228162 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:40 crc kubenswrapper[4985]: I0127 08:54:40.228172 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:40 crc kubenswrapper[4985]: I0127 08:54:40.228187 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:40 crc kubenswrapper[4985]: I0127 08:54:40.228199 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:40Z","lastTransitionTime":"2026-01-27T08:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:40 crc kubenswrapper[4985]: I0127 08:54:40.332309 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:40 crc kubenswrapper[4985]: I0127 08:54:40.332361 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:40 crc kubenswrapper[4985]: I0127 08:54:40.332372 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:40 crc kubenswrapper[4985]: I0127 08:54:40.332389 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:40 crc kubenswrapper[4985]: I0127 08:54:40.332405 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:40Z","lastTransitionTime":"2026-01-27T08:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:40 crc kubenswrapper[4985]: I0127 08:54:40.435850 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:40 crc kubenswrapper[4985]: I0127 08:54:40.435920 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:40 crc kubenswrapper[4985]: I0127 08:54:40.435929 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:40 crc kubenswrapper[4985]: I0127 08:54:40.435955 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:40 crc kubenswrapper[4985]: I0127 08:54:40.435966 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:40Z","lastTransitionTime":"2026-01-27T08:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:40 crc kubenswrapper[4985]: I0127 08:54:40.436131 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 06:30:50.729367032 +0000 UTC Jan 27 08:54:40 crc kubenswrapper[4985]: I0127 08:54:40.451868 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 08:54:40 crc kubenswrapper[4985]: I0127 08:54:40.451906 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 08:54:40 crc kubenswrapper[4985]: I0127 08:54:40.451940 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-cscdv" Jan 27 08:54:40 crc kubenswrapper[4985]: I0127 08:54:40.452004 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 08:54:40 crc kubenswrapper[4985]: E0127 08:54:40.452379 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 08:54:40 crc kubenswrapper[4985]: E0127 08:54:40.452844 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 08:54:40 crc kubenswrapper[4985]: E0127 08:54:40.452914 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 08:54:40 crc kubenswrapper[4985]: E0127 08:54:40.452769 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cscdv" podUID="5c870945-eecc-4954-a91b-d02cef8f98e2" Jan 27 08:54:40 crc kubenswrapper[4985]: I0127 08:54:40.539343 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:40 crc kubenswrapper[4985]: I0127 08:54:40.539404 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:40 crc kubenswrapper[4985]: I0127 08:54:40.539419 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:40 crc kubenswrapper[4985]: I0127 08:54:40.539440 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:40 crc kubenswrapper[4985]: I0127 08:54:40.539453 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:40Z","lastTransitionTime":"2026-01-27T08:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:40 crc kubenswrapper[4985]: I0127 08:54:40.642809 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:40 crc kubenswrapper[4985]: I0127 08:54:40.642871 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:40 crc kubenswrapper[4985]: I0127 08:54:40.642889 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:40 crc kubenswrapper[4985]: I0127 08:54:40.642916 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:40 crc kubenswrapper[4985]: I0127 08:54:40.642935 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:40Z","lastTransitionTime":"2026-01-27T08:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:40 crc kubenswrapper[4985]: I0127 08:54:40.745633 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:40 crc kubenswrapper[4985]: I0127 08:54:40.745675 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:40 crc kubenswrapper[4985]: I0127 08:54:40.745685 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:40 crc kubenswrapper[4985]: I0127 08:54:40.745705 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:40 crc kubenswrapper[4985]: I0127 08:54:40.745724 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:40Z","lastTransitionTime":"2026-01-27T08:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:40 crc kubenswrapper[4985]: I0127 08:54:40.848248 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:40 crc kubenswrapper[4985]: I0127 08:54:40.848309 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:40 crc kubenswrapper[4985]: I0127 08:54:40.848321 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:40 crc kubenswrapper[4985]: I0127 08:54:40.848342 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:40 crc kubenswrapper[4985]: I0127 08:54:40.848354 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:40Z","lastTransitionTime":"2026-01-27T08:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:40 crc kubenswrapper[4985]: I0127 08:54:40.951268 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:40 crc kubenswrapper[4985]: I0127 08:54:40.951321 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:40 crc kubenswrapper[4985]: I0127 08:54:40.951334 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:40 crc kubenswrapper[4985]: I0127 08:54:40.951350 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:40 crc kubenswrapper[4985]: I0127 08:54:40.951362 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:40Z","lastTransitionTime":"2026-01-27T08:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:41 crc kubenswrapper[4985]: I0127 08:54:41.054063 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:41 crc kubenswrapper[4985]: I0127 08:54:41.054116 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:41 crc kubenswrapper[4985]: I0127 08:54:41.054130 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:41 crc kubenswrapper[4985]: I0127 08:54:41.054150 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:41 crc kubenswrapper[4985]: I0127 08:54:41.054165 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:41Z","lastTransitionTime":"2026-01-27T08:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:41 crc kubenswrapper[4985]: I0127 08:54:41.157129 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:41 crc kubenswrapper[4985]: I0127 08:54:41.157192 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:41 crc kubenswrapper[4985]: I0127 08:54:41.157205 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:41 crc kubenswrapper[4985]: I0127 08:54:41.157225 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:41 crc kubenswrapper[4985]: I0127 08:54:41.157237 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:41Z","lastTransitionTime":"2026-01-27T08:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:41 crc kubenswrapper[4985]: I0127 08:54:41.259816 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:41 crc kubenswrapper[4985]: I0127 08:54:41.259885 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:41 crc kubenswrapper[4985]: I0127 08:54:41.259896 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:41 crc kubenswrapper[4985]: I0127 08:54:41.259913 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:41 crc kubenswrapper[4985]: I0127 08:54:41.259926 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:41Z","lastTransitionTime":"2026-01-27T08:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:41 crc kubenswrapper[4985]: I0127 08:54:41.362252 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:41 crc kubenswrapper[4985]: I0127 08:54:41.362304 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:41 crc kubenswrapper[4985]: I0127 08:54:41.362314 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:41 crc kubenswrapper[4985]: I0127 08:54:41.362330 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:41 crc kubenswrapper[4985]: I0127 08:54:41.362340 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:41Z","lastTransitionTime":"2026-01-27T08:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:41 crc kubenswrapper[4985]: I0127 08:54:41.436493 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 18:55:09.324835128 +0000 UTC Jan 27 08:54:41 crc kubenswrapper[4985]: I0127 08:54:41.464578 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:41 crc kubenswrapper[4985]: I0127 08:54:41.464685 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:41 crc kubenswrapper[4985]: I0127 08:54:41.464710 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:41 crc kubenswrapper[4985]: I0127 08:54:41.464745 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:41 crc kubenswrapper[4985]: I0127 08:54:41.464767 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:41Z","lastTransitionTime":"2026-01-27T08:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:41 crc kubenswrapper[4985]: I0127 08:54:41.567144 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:41 crc kubenswrapper[4985]: I0127 08:54:41.567187 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:41 crc kubenswrapper[4985]: I0127 08:54:41.567198 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:41 crc kubenswrapper[4985]: I0127 08:54:41.567216 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:41 crc kubenswrapper[4985]: I0127 08:54:41.567228 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:41Z","lastTransitionTime":"2026-01-27T08:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:41 crc kubenswrapper[4985]: I0127 08:54:41.670621 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:41 crc kubenswrapper[4985]: I0127 08:54:41.670675 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:41 crc kubenswrapper[4985]: I0127 08:54:41.670686 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:41 crc kubenswrapper[4985]: I0127 08:54:41.670705 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:41 crc kubenswrapper[4985]: I0127 08:54:41.670717 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:41Z","lastTransitionTime":"2026-01-27T08:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:41 crc kubenswrapper[4985]: I0127 08:54:41.774203 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:41 crc kubenswrapper[4985]: I0127 08:54:41.774258 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:41 crc kubenswrapper[4985]: I0127 08:54:41.774269 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:41 crc kubenswrapper[4985]: I0127 08:54:41.774286 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:41 crc kubenswrapper[4985]: I0127 08:54:41.774299 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:41Z","lastTransitionTime":"2026-01-27T08:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:41 crc kubenswrapper[4985]: I0127 08:54:41.878154 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:41 crc kubenswrapper[4985]: I0127 08:54:41.878211 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:41 crc kubenswrapper[4985]: I0127 08:54:41.878228 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:41 crc kubenswrapper[4985]: I0127 08:54:41.878255 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:41 crc kubenswrapper[4985]: I0127 08:54:41.878273 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:41Z","lastTransitionTime":"2026-01-27T08:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:41 crc kubenswrapper[4985]: I0127 08:54:41.982143 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:41 crc kubenswrapper[4985]: I0127 08:54:41.982221 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:41 crc kubenswrapper[4985]: I0127 08:54:41.982239 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:41 crc kubenswrapper[4985]: I0127 08:54:41.982261 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:41 crc kubenswrapper[4985]: I0127 08:54:41.982279 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:41Z","lastTransitionTime":"2026-01-27T08:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:42 crc kubenswrapper[4985]: I0127 08:54:42.085328 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:42 crc kubenswrapper[4985]: I0127 08:54:42.085401 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:42 crc kubenswrapper[4985]: I0127 08:54:42.085419 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:42 crc kubenswrapper[4985]: I0127 08:54:42.085447 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:42 crc kubenswrapper[4985]: I0127 08:54:42.085467 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:42Z","lastTransitionTime":"2026-01-27T08:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:42 crc kubenswrapper[4985]: I0127 08:54:42.188838 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:42 crc kubenswrapper[4985]: I0127 08:54:42.188892 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:42 crc kubenswrapper[4985]: I0127 08:54:42.188902 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:42 crc kubenswrapper[4985]: I0127 08:54:42.188923 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:42 crc kubenswrapper[4985]: I0127 08:54:42.188936 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:42Z","lastTransitionTime":"2026-01-27T08:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:42 crc kubenswrapper[4985]: I0127 08:54:42.292324 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:42 crc kubenswrapper[4985]: I0127 08:54:42.292401 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:42 crc kubenswrapper[4985]: I0127 08:54:42.292412 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:42 crc kubenswrapper[4985]: I0127 08:54:42.292429 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:42 crc kubenswrapper[4985]: I0127 08:54:42.292442 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:42Z","lastTransitionTime":"2026-01-27T08:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:42 crc kubenswrapper[4985]: I0127 08:54:42.394764 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:42 crc kubenswrapper[4985]: I0127 08:54:42.394815 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:42 crc kubenswrapper[4985]: I0127 08:54:42.394829 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:42 crc kubenswrapper[4985]: I0127 08:54:42.394849 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:42 crc kubenswrapper[4985]: I0127 08:54:42.394864 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:42Z","lastTransitionTime":"2026-01-27T08:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:42 crc kubenswrapper[4985]: I0127 08:54:42.437646 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 19:04:40.210888084 +0000 UTC Jan 27 08:54:42 crc kubenswrapper[4985]: I0127 08:54:42.451070 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 08:54:42 crc kubenswrapper[4985]: I0127 08:54:42.451130 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-cscdv" Jan 27 08:54:42 crc kubenswrapper[4985]: E0127 08:54:42.451252 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 08:54:42 crc kubenswrapper[4985]: I0127 08:54:42.451106 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 08:54:42 crc kubenswrapper[4985]: I0127 08:54:42.451441 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 08:54:42 crc kubenswrapper[4985]: E0127 08:54:42.451435 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cscdv" podUID="5c870945-eecc-4954-a91b-d02cef8f98e2" Jan 27 08:54:42 crc kubenswrapper[4985]: E0127 08:54:42.451491 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 08:54:42 crc kubenswrapper[4985]: E0127 08:54:42.451556 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 08:54:42 crc kubenswrapper[4985]: I0127 08:54:42.498461 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:42 crc kubenswrapper[4985]: I0127 08:54:42.498563 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:42 crc kubenswrapper[4985]: I0127 08:54:42.498578 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:42 crc kubenswrapper[4985]: I0127 08:54:42.498597 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:42 crc kubenswrapper[4985]: I0127 08:54:42.498609 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:42Z","lastTransitionTime":"2026-01-27T08:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:42 crc kubenswrapper[4985]: I0127 08:54:42.601222 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:42 crc kubenswrapper[4985]: I0127 08:54:42.601269 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:42 crc kubenswrapper[4985]: I0127 08:54:42.601283 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:42 crc kubenswrapper[4985]: I0127 08:54:42.601307 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:42 crc kubenswrapper[4985]: I0127 08:54:42.601322 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:42Z","lastTransitionTime":"2026-01-27T08:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:42 crc kubenswrapper[4985]: I0127 08:54:42.704476 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:42 crc kubenswrapper[4985]: I0127 08:54:42.704542 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:42 crc kubenswrapper[4985]: I0127 08:54:42.704554 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:42 crc kubenswrapper[4985]: I0127 08:54:42.704571 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:42 crc kubenswrapper[4985]: I0127 08:54:42.704581 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:42Z","lastTransitionTime":"2026-01-27T08:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:42 crc kubenswrapper[4985]: I0127 08:54:42.807640 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:42 crc kubenswrapper[4985]: I0127 08:54:42.807684 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:42 crc kubenswrapper[4985]: I0127 08:54:42.807693 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:42 crc kubenswrapper[4985]: I0127 08:54:42.807709 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:42 crc kubenswrapper[4985]: I0127 08:54:42.807719 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:42Z","lastTransitionTime":"2026-01-27T08:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:42 crc kubenswrapper[4985]: I0127 08:54:42.910891 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:42 crc kubenswrapper[4985]: I0127 08:54:42.910950 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:42 crc kubenswrapper[4985]: I0127 08:54:42.910966 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:42 crc kubenswrapper[4985]: I0127 08:54:42.910988 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:42 crc kubenswrapper[4985]: I0127 08:54:42.911002 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:42Z","lastTransitionTime":"2026-01-27T08:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:43 crc kubenswrapper[4985]: I0127 08:54:43.013851 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:43 crc kubenswrapper[4985]: I0127 08:54:43.013909 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:43 crc kubenswrapper[4985]: I0127 08:54:43.013920 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:43 crc kubenswrapper[4985]: I0127 08:54:43.013939 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:43 crc kubenswrapper[4985]: I0127 08:54:43.013952 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:43Z","lastTransitionTime":"2026-01-27T08:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:43 crc kubenswrapper[4985]: I0127 08:54:43.117209 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:43 crc kubenswrapper[4985]: I0127 08:54:43.117272 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:43 crc kubenswrapper[4985]: I0127 08:54:43.117286 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:43 crc kubenswrapper[4985]: I0127 08:54:43.117317 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:43 crc kubenswrapper[4985]: I0127 08:54:43.117329 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:43Z","lastTransitionTime":"2026-01-27T08:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:43 crc kubenswrapper[4985]: I0127 08:54:43.191148 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:43 crc kubenswrapper[4985]: I0127 08:54:43.191182 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:43 crc kubenswrapper[4985]: I0127 08:54:43.191192 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:43 crc kubenswrapper[4985]: I0127 08:54:43.191216 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:43 crc kubenswrapper[4985]: I0127 08:54:43.191232 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:43Z","lastTransitionTime":"2026-01-27T08:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:43 crc kubenswrapper[4985]: E0127 08:54:43.209800 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"095ded87-0bbb-47a5-b76f-f5bb300a00ab\\\",\\\"systemUUID\\\":\\\"66a0621c-9cbd-4c42-8f6a-941d6ebd53fb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:43Z is after 2025-08-24T17:21:41Z"
Jan 27 08:54:43 crc kubenswrapper[4985]: I0127 08:54:43.215261 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 08:54:43 crc kubenswrapper[4985]: I0127 08:54:43.215334 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 08:54:43 crc kubenswrapper[4985]: I0127 08:54:43.215354 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 08:54:43 crc kubenswrapper[4985]: I0127 08:54:43.215381 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 08:54:43 crc kubenswrapper[4985]: I0127 08:54:43.215400 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:43Z","lastTransitionTime":"2026-01-27T08:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:43 crc kubenswrapper[4985]: E0127 08:54:43.235951 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"095ded87-0bbb-47a5-b76f-f5bb300a00ab\\\",\\\"systemUUID\\\":\\\"66a0621c-9cbd-4c42-8f6a-941d6ebd53fb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:43Z is after 2025-08-24T17:21:41Z"
Jan 27 08:54:43 crc kubenswrapper[4985]: I0127 08:54:43.240997 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 08:54:43 crc kubenswrapper[4985]: I0127 08:54:43.241074 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 08:54:43 crc kubenswrapper[4985]: I0127 08:54:43.241093 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 08:54:43 crc kubenswrapper[4985]: I0127 08:54:43.241133 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 08:54:43 crc kubenswrapper[4985]: I0127 08:54:43.241152 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:43Z","lastTransitionTime":"2026-01-27T08:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:43 crc kubenswrapper[4985]: E0127 08:54:43.264498 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"095ded87-0bbb-47a5-b76f-f5bb300a00ab\\\",\\\"systemUUID\\\":\\\"66a0621c-9cbd-4c42-8f6a-941d6ebd53fb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:43Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:43 crc kubenswrapper[4985]: I0127 08:54:43.269690 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:43 crc kubenswrapper[4985]: I0127 08:54:43.269752 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:43 crc kubenswrapper[4985]: I0127 08:54:43.269766 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:43 crc kubenswrapper[4985]: I0127 08:54:43.269784 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:43 crc kubenswrapper[4985]: I0127 08:54:43.269794 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:43Z","lastTransitionTime":"2026-01-27T08:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:43 crc kubenswrapper[4985]: E0127 08:54:43.292240 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"095ded87-0bbb-47a5-b76f-f5bb300a00ab\\\",\\\"systemUUID\\\":\\\"66a0621c-9cbd-4c42-8f6a-941d6ebd53fb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:43Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:43 crc kubenswrapper[4985]: I0127 08:54:43.297050 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:43 crc kubenswrapper[4985]: I0127 08:54:43.297093 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:43 crc kubenswrapper[4985]: I0127 08:54:43.297126 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:43 crc kubenswrapper[4985]: I0127 08:54:43.297149 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:43 crc kubenswrapper[4985]: I0127 08:54:43.297162 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:43Z","lastTransitionTime":"2026-01-27T08:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:43 crc kubenswrapper[4985]: E0127 08:54:43.311018 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"095ded87-0bbb-47a5-b76f-f5bb300a00ab\\\",\\\"systemUUID\\\":\\\"66a0621c-9cbd-4c42-8f6a-941d6ebd53fb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:43Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:43 crc kubenswrapper[4985]: E0127 08:54:43.311187 4985 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 08:54:43 crc kubenswrapper[4985]: I0127 08:54:43.313310 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:43 crc kubenswrapper[4985]: I0127 08:54:43.313350 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:43 crc kubenswrapper[4985]: I0127 08:54:43.313362 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:43 crc kubenswrapper[4985]: I0127 08:54:43.313377 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:43 crc kubenswrapper[4985]: I0127 08:54:43.313388 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:43Z","lastTransitionTime":"2026-01-27T08:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:43 crc kubenswrapper[4985]: I0127 08:54:43.416108 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:43 crc kubenswrapper[4985]: I0127 08:54:43.416183 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:43 crc kubenswrapper[4985]: I0127 08:54:43.416219 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:43 crc kubenswrapper[4985]: I0127 08:54:43.416237 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:43 crc kubenswrapper[4985]: I0127 08:54:43.416246 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:43Z","lastTransitionTime":"2026-01-27T08:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:43 crc kubenswrapper[4985]: I0127 08:54:43.438047 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 04:13:37.868808196 +0000 UTC Jan 27 08:54:43 crc kubenswrapper[4985]: I0127 08:54:43.452483 4985 scope.go:117] "RemoveContainer" containerID="0db1aeddec065e7b000a8494a25dcd6d16757437a78af2c71dd554c15c74a826" Jan 27 08:54:43 crc kubenswrapper[4985]: E0127 08:54:43.452752 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-kqdf4_openshift-ovn-kubernetes(c6239c91-d93d-4db8-ac4b-d44ddbc7c100)\"" pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" podUID="c6239c91-d93d-4db8-ac4b-d44ddbc7c100" Jan 27 08:54:43 crc kubenswrapper[4985]: I0127 08:54:43.519001 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:43 crc kubenswrapper[4985]: I0127 08:54:43.519086 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:43 crc kubenswrapper[4985]: I0127 08:54:43.519097 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:43 crc kubenswrapper[4985]: I0127 08:54:43.519119 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:43 crc kubenswrapper[4985]: I0127 08:54:43.519132 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:43Z","lastTransitionTime":"2026-01-27T08:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:43 crc kubenswrapper[4985]: I0127 08:54:43.622014 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:43 crc kubenswrapper[4985]: I0127 08:54:43.622070 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:43 crc kubenswrapper[4985]: I0127 08:54:43.622082 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:43 crc kubenswrapper[4985]: I0127 08:54:43.622103 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:43 crc kubenswrapper[4985]: I0127 08:54:43.622116 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:43Z","lastTransitionTime":"2026-01-27T08:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:43 crc kubenswrapper[4985]: I0127 08:54:43.724308 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:43 crc kubenswrapper[4985]: I0127 08:54:43.724354 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:43 crc kubenswrapper[4985]: I0127 08:54:43.724364 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:43 crc kubenswrapper[4985]: I0127 08:54:43.724382 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:43 crc kubenswrapper[4985]: I0127 08:54:43.724395 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:43Z","lastTransitionTime":"2026-01-27T08:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:43 crc kubenswrapper[4985]: I0127 08:54:43.827137 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:43 crc kubenswrapper[4985]: I0127 08:54:43.827177 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:43 crc kubenswrapper[4985]: I0127 08:54:43.827186 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:43 crc kubenswrapper[4985]: I0127 08:54:43.827202 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:43 crc kubenswrapper[4985]: I0127 08:54:43.827212 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:43Z","lastTransitionTime":"2026-01-27T08:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:43 crc kubenswrapper[4985]: I0127 08:54:43.929714 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:43 crc kubenswrapper[4985]: I0127 08:54:43.929763 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:43 crc kubenswrapper[4985]: I0127 08:54:43.929775 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:43 crc kubenswrapper[4985]: I0127 08:54:43.929793 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:43 crc kubenswrapper[4985]: I0127 08:54:43.929806 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:43Z","lastTransitionTime":"2026-01-27T08:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:44 crc kubenswrapper[4985]: I0127 08:54:44.032317 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:44 crc kubenswrapper[4985]: I0127 08:54:44.032413 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:44 crc kubenswrapper[4985]: I0127 08:54:44.032433 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:44 crc kubenswrapper[4985]: I0127 08:54:44.032461 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:44 crc kubenswrapper[4985]: I0127 08:54:44.032480 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:44Z","lastTransitionTime":"2026-01-27T08:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:44 crc kubenswrapper[4985]: I0127 08:54:44.135556 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:44 crc kubenswrapper[4985]: I0127 08:54:44.135596 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:44 crc kubenswrapper[4985]: I0127 08:54:44.135607 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:44 crc kubenswrapper[4985]: I0127 08:54:44.135649 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:44 crc kubenswrapper[4985]: I0127 08:54:44.135663 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:44Z","lastTransitionTime":"2026-01-27T08:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:44 crc kubenswrapper[4985]: I0127 08:54:44.238117 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:44 crc kubenswrapper[4985]: I0127 08:54:44.238171 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:44 crc kubenswrapper[4985]: I0127 08:54:44.238182 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:44 crc kubenswrapper[4985]: I0127 08:54:44.238202 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:44 crc kubenswrapper[4985]: I0127 08:54:44.238213 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:44Z","lastTransitionTime":"2026-01-27T08:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:44 crc kubenswrapper[4985]: I0127 08:54:44.340912 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:44 crc kubenswrapper[4985]: I0127 08:54:44.340959 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:44 crc kubenswrapper[4985]: I0127 08:54:44.340972 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:44 crc kubenswrapper[4985]: I0127 08:54:44.340990 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:44 crc kubenswrapper[4985]: I0127 08:54:44.341003 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:44Z","lastTransitionTime":"2026-01-27T08:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:44 crc kubenswrapper[4985]: I0127 08:54:44.438926 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 00:24:35.850033073 +0000 UTC Jan 27 08:54:44 crc kubenswrapper[4985]: I0127 08:54:44.444387 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:44 crc kubenswrapper[4985]: I0127 08:54:44.444428 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:44 crc kubenswrapper[4985]: I0127 08:54:44.444447 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:44 crc kubenswrapper[4985]: I0127 08:54:44.444472 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:44 crc kubenswrapper[4985]: I0127 08:54:44.444486 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:44Z","lastTransitionTime":"2026-01-27T08:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:44 crc kubenswrapper[4985]: I0127 08:54:44.452042 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 08:54:44 crc kubenswrapper[4985]: I0127 08:54:44.452142 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cscdv" Jan 27 08:54:44 crc kubenswrapper[4985]: I0127 08:54:44.452087 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 08:54:44 crc kubenswrapper[4985]: E0127 08:54:44.452305 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 08:54:44 crc kubenswrapper[4985]: E0127 08:54:44.452433 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 08:54:44 crc kubenswrapper[4985]: I0127 08:54:44.452556 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 08:54:44 crc kubenswrapper[4985]: E0127 08:54:44.452731 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cscdv" podUID="5c870945-eecc-4954-a91b-d02cef8f98e2" Jan 27 08:54:44 crc kubenswrapper[4985]: E0127 08:54:44.452764 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 08:54:44 crc kubenswrapper[4985]: I0127 08:54:44.547460 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:44 crc kubenswrapper[4985]: I0127 08:54:44.547531 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:44 crc kubenswrapper[4985]: I0127 08:54:44.547540 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:44 crc kubenswrapper[4985]: I0127 08:54:44.547559 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:44 crc kubenswrapper[4985]: I0127 08:54:44.547568 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:44Z","lastTransitionTime":"2026-01-27T08:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:44 crc kubenswrapper[4985]: I0127 08:54:44.650444 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:44 crc kubenswrapper[4985]: I0127 08:54:44.650520 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:44 crc kubenswrapper[4985]: I0127 08:54:44.650533 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:44 crc kubenswrapper[4985]: I0127 08:54:44.650552 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:44 crc kubenswrapper[4985]: I0127 08:54:44.650564 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:44Z","lastTransitionTime":"2026-01-27T08:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:44 crc kubenswrapper[4985]: I0127 08:54:44.753219 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:44 crc kubenswrapper[4985]: I0127 08:54:44.753751 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:44 crc kubenswrapper[4985]: I0127 08:54:44.753851 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:44 crc kubenswrapper[4985]: I0127 08:54:44.753950 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:44 crc kubenswrapper[4985]: I0127 08:54:44.754083 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:44Z","lastTransitionTime":"2026-01-27T08:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:44 crc kubenswrapper[4985]: I0127 08:54:44.857228 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:44 crc kubenswrapper[4985]: I0127 08:54:44.857653 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:44 crc kubenswrapper[4985]: I0127 08:54:44.857766 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:44 crc kubenswrapper[4985]: I0127 08:54:44.857879 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:44 crc kubenswrapper[4985]: I0127 08:54:44.857971 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:44Z","lastTransitionTime":"2026-01-27T08:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:44 crc kubenswrapper[4985]: I0127 08:54:44.960352 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:44 crc kubenswrapper[4985]: I0127 08:54:44.960766 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:44 crc kubenswrapper[4985]: I0127 08:54:44.960848 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:44 crc kubenswrapper[4985]: I0127 08:54:44.960918 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:44 crc kubenswrapper[4985]: I0127 08:54:44.960986 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:44Z","lastTransitionTime":"2026-01-27T08:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:45 crc kubenswrapper[4985]: I0127 08:54:45.063113 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:45 crc kubenswrapper[4985]: I0127 08:54:45.063163 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:45 crc kubenswrapper[4985]: I0127 08:54:45.063175 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:45 crc kubenswrapper[4985]: I0127 08:54:45.063197 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:45 crc kubenswrapper[4985]: I0127 08:54:45.063210 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:45Z","lastTransitionTime":"2026-01-27T08:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:45 crc kubenswrapper[4985]: I0127 08:54:45.166297 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:45 crc kubenswrapper[4985]: I0127 08:54:45.166344 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:45 crc kubenswrapper[4985]: I0127 08:54:45.166357 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:45 crc kubenswrapper[4985]: I0127 08:54:45.166381 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:45 crc kubenswrapper[4985]: I0127 08:54:45.166395 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:45Z","lastTransitionTime":"2026-01-27T08:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:45 crc kubenswrapper[4985]: I0127 08:54:45.270213 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:45 crc kubenswrapper[4985]: I0127 08:54:45.270584 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:45 crc kubenswrapper[4985]: I0127 08:54:45.270705 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:45 crc kubenswrapper[4985]: I0127 08:54:45.270788 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:45 crc kubenswrapper[4985]: I0127 08:54:45.271070 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:45Z","lastTransitionTime":"2026-01-27T08:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:45 crc kubenswrapper[4985]: I0127 08:54:45.374396 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:45 crc kubenswrapper[4985]: I0127 08:54:45.374454 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:45 crc kubenswrapper[4985]: I0127 08:54:45.374468 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:45 crc kubenswrapper[4985]: I0127 08:54:45.374493 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:45 crc kubenswrapper[4985]: I0127 08:54:45.374506 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:45Z","lastTransitionTime":"2026-01-27T08:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:45 crc kubenswrapper[4985]: I0127 08:54:45.439767 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 05:05:07.635330153 +0000 UTC Jan 27 08:54:45 crc kubenswrapper[4985]: I0127 08:54:45.477779 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:45 crc kubenswrapper[4985]: I0127 08:54:45.477838 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:45 crc kubenswrapper[4985]: I0127 08:54:45.477857 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:45 crc kubenswrapper[4985]: I0127 08:54:45.477880 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:45 crc kubenswrapper[4985]: I0127 08:54:45.477900 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:45Z","lastTransitionTime":"2026-01-27T08:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:45 crc kubenswrapper[4985]: I0127 08:54:45.580312 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:45 crc kubenswrapper[4985]: I0127 08:54:45.580359 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:45 crc kubenswrapper[4985]: I0127 08:54:45.580368 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:45 crc kubenswrapper[4985]: I0127 08:54:45.580387 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:45 crc kubenswrapper[4985]: I0127 08:54:45.580398 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:45Z","lastTransitionTime":"2026-01-27T08:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:45 crc kubenswrapper[4985]: I0127 08:54:45.682814 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:45 crc kubenswrapper[4985]: I0127 08:54:45.682870 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:45 crc kubenswrapper[4985]: I0127 08:54:45.682885 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:45 crc kubenswrapper[4985]: I0127 08:54:45.682907 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:45 crc kubenswrapper[4985]: I0127 08:54:45.682931 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:45Z","lastTransitionTime":"2026-01-27T08:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:45 crc kubenswrapper[4985]: I0127 08:54:45.788225 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:45 crc kubenswrapper[4985]: I0127 08:54:45.788295 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:45 crc kubenswrapper[4985]: I0127 08:54:45.788317 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:45 crc kubenswrapper[4985]: I0127 08:54:45.788341 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:45 crc kubenswrapper[4985]: I0127 08:54:45.788354 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:45Z","lastTransitionTime":"2026-01-27T08:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:45 crc kubenswrapper[4985]: I0127 08:54:45.891558 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:45 crc kubenswrapper[4985]: I0127 08:54:45.891662 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:45 crc kubenswrapper[4985]: I0127 08:54:45.891671 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:45 crc kubenswrapper[4985]: I0127 08:54:45.891718 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:45 crc kubenswrapper[4985]: I0127 08:54:45.891731 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:45Z","lastTransitionTime":"2026-01-27T08:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:45 crc kubenswrapper[4985]: I0127 08:54:45.994574 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:45 crc kubenswrapper[4985]: I0127 08:54:45.994631 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:45 crc kubenswrapper[4985]: I0127 08:54:45.994645 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:45 crc kubenswrapper[4985]: I0127 08:54:45.994671 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:45 crc kubenswrapper[4985]: I0127 08:54:45.994685 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:45Z","lastTransitionTime":"2026-01-27T08:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:46 crc kubenswrapper[4985]: I0127 08:54:46.104632 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:46 crc kubenswrapper[4985]: I0127 08:54:46.104694 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:46 crc kubenswrapper[4985]: I0127 08:54:46.104706 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:46 crc kubenswrapper[4985]: I0127 08:54:46.104726 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:46 crc kubenswrapper[4985]: I0127 08:54:46.104738 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:46Z","lastTransitionTime":"2026-01-27T08:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:46 crc kubenswrapper[4985]: I0127 08:54:46.207442 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:46 crc kubenswrapper[4985]: I0127 08:54:46.207491 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:46 crc kubenswrapper[4985]: I0127 08:54:46.207500 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:46 crc kubenswrapper[4985]: I0127 08:54:46.207534 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:46 crc kubenswrapper[4985]: I0127 08:54:46.207545 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:46Z","lastTransitionTime":"2026-01-27T08:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:46 crc kubenswrapper[4985]: I0127 08:54:46.310986 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:46 crc kubenswrapper[4985]: I0127 08:54:46.311032 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:46 crc kubenswrapper[4985]: I0127 08:54:46.311042 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:46 crc kubenswrapper[4985]: I0127 08:54:46.311061 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:46 crc kubenswrapper[4985]: I0127 08:54:46.311072 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:46Z","lastTransitionTime":"2026-01-27T08:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:46 crc kubenswrapper[4985]: I0127 08:54:46.414111 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:46 crc kubenswrapper[4985]: I0127 08:54:46.414189 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:46 crc kubenswrapper[4985]: I0127 08:54:46.414219 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:46 crc kubenswrapper[4985]: I0127 08:54:46.414238 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:46 crc kubenswrapper[4985]: I0127 08:54:46.414251 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:46Z","lastTransitionTime":"2026-01-27T08:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:46 crc kubenswrapper[4985]: I0127 08:54:46.440643 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 00:00:27.647781347 +0000 UTC Jan 27 08:54:46 crc kubenswrapper[4985]: I0127 08:54:46.450973 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 08:54:46 crc kubenswrapper[4985]: I0127 08:54:46.451031 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cscdv" Jan 27 08:54:46 crc kubenswrapper[4985]: I0127 08:54:46.450973 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 08:54:46 crc kubenswrapper[4985]: E0127 08:54:46.451135 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 08:54:46 crc kubenswrapper[4985]: E0127 08:54:46.451211 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cscdv" podUID="5c870945-eecc-4954-a91b-d02cef8f98e2" Jan 27 08:54:46 crc kubenswrapper[4985]: E0127 08:54:46.451296 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 08:54:46 crc kubenswrapper[4985]: I0127 08:54:46.451454 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 08:54:46 crc kubenswrapper[4985]: E0127 08:54:46.451538 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 08:54:46 crc kubenswrapper[4985]: I0127 08:54:46.476446 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7e08bcf-6937-4593-9736-170df821dd88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1dab42ffc04840ad2205b50c568ece39850188ccd9f97d8186c7f0b86e06805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9b
e8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7f7c69cc8bc134cfbf3552f10eb7abd8b5df115fef6ce86a2658259a9635bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0450f271c01ac528419de0165dd9b88b8d5913062a7cd5affce7050842f91a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b33bda4f575f8f14a3439f00a39eda2ee61f7b38500f5372c17d014df1098535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c502ce0c2374fb1b79cd8dc7e7c9eba4a67f998fee012827524d775eccbe4de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.16
8.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d423f866a41d6c8a926fbabe4f0a69519e10a9448502c6363b35cb550ba904d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d423f866a41d6c8a926fbabe4f0a69519e10a9448502c6363b35cb550ba904d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://917f905b67f494d8d44f5a00e53ae72cb9c2b307f7e7da17af7fd20db4ed8704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://917f905b67f494d8d44f5a00e53ae72cb9c2b307f7e7da17af7fd20db4ed8704\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8692ba1b6e917a57164d549dba0fc26ad49b14b9bc8a39772daddfcd5f70a008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54
b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8692ba1b6e917a57164d549dba0fc26ad49b14b9bc8a39772daddfcd5f70a008\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:46Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:46 crc kubenswrapper[4985]: I0127 08:54:46.488887 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"797a9d27-06cc-44aa-811f-881bdf2d1e7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a5ce9f15fd3505c744967be012b7eed6d909724e9b71ba07d7e9d68eb40cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88c257e53b27d6dda5999a3053f9c62b54331bf034225c118dddfed685549827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87667099919fbe74e54396b3e8b538627769f2401d318326eb9a1d6a88bda640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3ad4e68bbb4b8338f534dc026ca9f1fe9fb161b29fee8945f1789a66965dea2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://c3ad4e68bbb4b8338f534dc026ca9f1fe9fb161b29fee8945f1789a66965dea2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:46Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:46 crc kubenswrapper[4985]: I0127 08:54:46.503371 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5z8px" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7997cb84-9997-4cf4-8794-2eb145a5c324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://520da40d
4496e0fd47e4e091e9eba349e1bd3bf2f97898f6aac53a1b1b8925f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkwj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5z8px\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:46Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:46 crc kubenswrapper[4985]: I0127 08:54:46.517418 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:46 crc kubenswrapper[4985]: I0127 08:54:46.517780 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:46 crc kubenswrapper[4985]: I0127 08:54:46.517864 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:46 crc 
kubenswrapper[4985]: I0127 08:54:46.517938 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:46 crc kubenswrapper[4985]: I0127 08:54:46.518013 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:46Z","lastTransitionTime":"2026-01-27T08:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:46 crc kubenswrapper[4985]: I0127 08:54:46.523116 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rfnvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea1cd1b8-a185-461d-9302-aa03be205225\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67af065c12addbef44849906b964718074de1f0d7a0b87a028bf989ec28f82ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3459a9dccba77fc189ecac1d0186520dfbd4ce6ea40d9e5b2fdcf2ddabd7875d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3459a9dccba77fc189ecac1d0186520dfbd4ce6ea40d9e5b2fdcf2ddabd7875d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d313b9f0d95603fdc152e96acf5d788d7792df3413c3b8bc25aad20bb702c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d313b9f0d95603fdc152e96acf5d788d7792df3413c3b8bc25aad20bb702c5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4035a5eff34b30f1fab897a8d8b6026437a1ab1f521f062241a79d070b94bdc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4035a5eff34b30f1fab897a8d8b6026437a1ab1f521f062241a79d070b94bdc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6da775953e78b30b1f606e26608dbedae8e08fb12eba87f65356bc439079f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6da775953e78b30b1f606e26608dbedae8e08fb12eba87f65356bc439079f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b32797fe9e64f0e4f0332fa40261fb0feb75b6dc783ff78b0aea5ccb0d4a9ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b32797fe9e64f0e4f0332fa40261fb0feb75b6dc783ff78b0aea5ccb0d4a9ee5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcaa036060edc039a0c9621176b80823cb9913176b1a1e6b06aa033a8e479b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc
aa036060edc039a0c9621176b80823cb9913176b1a1e6b06aa033a8e479b96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rfnvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:46Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:46 crc kubenswrapper[4985]: I0127 08:54:46.539427 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s74hq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad423d26-ea00-4a86-8eed-bba6433ce382\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://031493bfd9eba63a4627b6a0ec45bc556e8a6cae213a84f7b158e2bede2da5a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hg5xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42a6371f6e7f2b3811af6ec717f15eff6a85c
8c39caf011e1173a4fcaf20f29a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hg5xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-s74hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:46Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:46 crc kubenswrapper[4985]: I0127 08:54:46.555846 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a4e626ecb5f5c1f869a0271e66541b76d4ab763322e2d626c4d86af64bccad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:46Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:46 crc kubenswrapper[4985]: I0127 08:54:46.570785 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:46Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:46 crc kubenswrapper[4985]: I0127 08:54:46.585687 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cqdrf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ddda14a-730e-4c1f-afea-07c95221ba04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c411afefb82d30542592e4959d2495036c0408459ba2a16aff75223ad0d29287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-258cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cqdrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:46Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:46 crc kubenswrapper[4985]: I0127 08:54:46.603668 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9155a9e-2bdf-4f6f-baa7-0d7c6baac37d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f776820a00f7ca6b6867a188ee633debccb2bc22aaff8c659e452c2667933e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba179c4dde0de76f4a8dbfb8ceb98d2e0dbead5e955bd71171914fd913894412\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f89
45c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dd58dd369dc5654aa8ab2d34c77a69607b51e97ae36947b88caf6a2de7c5fbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd67389a909eaadfb147d17b98a2147bb96aea606ba994568389cb5be9ee6f93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b491f71d942363b5d2bab13a5219b0eb9b7f7617c0978bfe012946aa241a13c1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T08:53:
55Z\\\",\\\"message\\\":\\\" 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 08:53:55.866201 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 08:53:55.866247 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 08:53:55.866252 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 08:53:55.866257 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0127 08:53:55.866955 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-231041040/tls.crt::/tmp/serving-cert-231041040/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769504019\\\\\\\\\\\\\\\" (2026-01-27 08:53:39 +0000 UTC to 2026-02-26 08:53:40 +0000 UTC (now=2026-01-27 08:53:55.866923826 +0000 UTC))\\\\\\\"\\\\nI0127 08:53:55.867136 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769504030\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769504030\\\\\\\\\\\\\\\" (2026-01-27 07:53:50 +0000 UTC to 2027-01-27 07:53:50 +0000 UTC (now=2026-01-27 08:53:55.867111136 +0000 UTC))\\\\\\\"\\\\nI0127 08:53:55.867161 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 08:53:55.867183 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0127 
08:53:55.867215 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-231041040/tls.crt::/tmp/serving-cert-231041040/tls.key\\\\\\\"\\\\nI0127 08:53:55.867343 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0127 08:53:55.868835 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b09da7bae5a0f15ab2c5463373193603d1412170828b99490007cee7b067df63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b
335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:46Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:46 crc kubenswrapper[4985]: I0127 08:54:46.621031 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:46Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:46 crc kubenswrapper[4985]: I0127 08:54:46.622920 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:46 crc kubenswrapper[4985]: I0127 08:54:46.622961 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 08:54:46 crc kubenswrapper[4985]: I0127 08:54:46.622972 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:46 crc kubenswrapper[4985]: I0127 08:54:46.622990 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:46 crc kubenswrapper[4985]: I0127 08:54:46.623001 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:46Z","lastTransitionTime":"2026-01-27T08:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:46 crc kubenswrapper[4985]: I0127 08:54:46.646115 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://740b7fd6437a51492236c929096639016d34d684d2742404a428355b3d9ee51e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e53f9d50e5099f9c55a0bf8a63c31bf7687c7b9df334251189af1c4c02df5dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b176f2daca0d26fc9e3d30a8547b5526c90221ce2932c01c876988e8606b4dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dbb8096184a88c573060b14b7b5cac9a5abdc5af97ad7efdc50ba216e77522c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f164f612d1435a7f2a6677cb810bc836a44faa77952909e4290e31806ae631ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bfc598056438a398f1ea20e047af950aa776a7e43bdb694728c1c138dff6440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0db1aeddec065e7b000a8494a25dcd6d16757437a78af2c71dd554c15c74a826\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0db1aeddec065e7b000a8494a25dcd6d16757437a78af2c71dd554c15c74a826\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T08:54:27Z\\\",\\\"message\\\":\\\"oved *v1.Node event handler 7\\\\nI0127 08:54:27.563980 6617 handler.go:190] Sending 
*v1.Pod event handler 6 for removal\\\\nI0127 08:54:27.564011 6617 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 08:54:27.564037 6617 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0127 08:54:27.564061 6617 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 08:54:27.564190 6617 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0127 08:54:27.564242 6617 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 08:54:27.564306 6617 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0127 08:54:27.564311 6617 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 08:54:27.564322 6617 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 08:54:27.564351 6617 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 08:54:27.564376 6617 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 08:54:27.564383 6617 factory.go:656] Stopping watch factory\\\\nI0127 08:54:27.564403 6617 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 08:54:27.564427 6617 ovnkube.go:599] Stopped ovnkube\\\\nI0127 08:54:27.564493 6617 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 08:54:27.564598 6617 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-kqdf4_openshift-ovn-kubernetes(c6239c91-d93d-4db8-ac4b-d44ddbc7c100)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a1f1e90a882fa697c344cb87db77d2f0b02b33920f6555ca549ad51b1fb5c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c1c716f751dd0e139554e4bef4fc33694f4afa5482605f651326882af8d99c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c1c716f751dd0e139
554e4bef4fc33694f4afa5482605f651326882af8d99c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kqdf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:46Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:46 crc kubenswrapper[4985]: I0127 08:54:46.660927 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cscdv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c870945-eecc-4954-a91b-d02cef8f98e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:17Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k24x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k24x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cscdv\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:46Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:46 crc kubenswrapper[4985]: I0127 08:54:46.697940 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46f88416-c54a-4ab5-a4d3-07f71bae9c33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d59e5d6d903bfaaedef392fdd94df2da1e4d965ddeb6b25d64b225725b39dd0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c207bc405194647411332c492e34b0c95d3321cf626a57a4a750fbcea34f54ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8ce06c7792357df5655ff0be13f13c8b06bb1a2eb27c4bbbff11b94a62f29e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7554f25de6e4649eafd52356c5c255d0a2dc73b0dd6c9a9bcea3ee383bef17db\\\",
\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:46Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:46 crc kubenswrapper[4985]: I0127 08:54:46.723842 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b46c56d522e6b6c9c18b7bd9fc104d6657644d0b41fdcad15ac9fb802ca5fd0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9506e391bd8c293440c55dc71a239b25d6d6b3431ad22c5ca699baa4d09c325b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:46Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:46 crc kubenswrapper[4985]: I0127 08:54:46.725413 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:46 crc kubenswrapper[4985]: I0127 08:54:46.725455 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:46 crc kubenswrapper[4985]: I0127 08:54:46.725467 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:46 crc kubenswrapper[4985]: I0127 08:54:46.725486 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:46 crc kubenswrapper[4985]: I0127 08:54:46.725497 4985 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:46Z","lastTransitionTime":"2026-01-27T08:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:46 crc kubenswrapper[4985]: I0127 08:54:46.743502 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:46Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:46 crc kubenswrapper[4985]: I0127 08:54:46.759443 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed8d2575835b3ab3d3197bbfbc027e2de0e2aefe10c971838bf4b72f804c4582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T08:54:46Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:46 crc kubenswrapper[4985]: I0127 08:54:46.774044 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c066dd2f-48d4-4f4f-935d-0e772678e610\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1beb432862ae348b405b1a51f1a187fd469f9df0605fa9e8a39b1c9a34ae8e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vtb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d6574971ded4a24364f08396c4f0523ddfd74f76d0bb386072ffb86c812d7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vtb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lp9n5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:46Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:46 crc kubenswrapper[4985]: I0127 08:54:46.792730 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlccz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ba17902-809a-4efc-9a8c-6f9b611c2af9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00cd6621bcfc77aeb9ecc4d6894b33cd3ef1f975b3c593e0eff812ef51978f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:46Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:46 crc kubenswrapper[4985]: I0127 08:54:46.828423 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:46 crc kubenswrapper[4985]: I0127 08:54:46.828468 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:46 crc kubenswrapper[4985]: I0127 08:54:46.828499 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:46 crc kubenswrapper[4985]: I0127 08:54:46.828543 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:46 crc kubenswrapper[4985]: I0127 08:54:46.828557 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:46Z","lastTransitionTime":"2026-01-27T08:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:46 crc kubenswrapper[4985]: I0127 08:54:46.931262 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:46 crc kubenswrapper[4985]: I0127 08:54:46.931332 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:46 crc kubenswrapper[4985]: I0127 08:54:46.931347 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:46 crc kubenswrapper[4985]: I0127 08:54:46.931371 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:46 crc kubenswrapper[4985]: I0127 08:54:46.931391 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:46Z","lastTransitionTime":"2026-01-27T08:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:47 crc kubenswrapper[4985]: I0127 08:54:47.071269 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:47 crc kubenswrapper[4985]: I0127 08:54:47.071329 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:47 crc kubenswrapper[4985]: I0127 08:54:47.071341 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:47 crc kubenswrapper[4985]: I0127 08:54:47.071363 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:47 crc kubenswrapper[4985]: I0127 08:54:47.071376 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:47Z","lastTransitionTime":"2026-01-27T08:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:47 crc kubenswrapper[4985]: I0127 08:54:47.174224 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:47 crc kubenswrapper[4985]: I0127 08:54:47.174280 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:47 crc kubenswrapper[4985]: I0127 08:54:47.174291 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:47 crc kubenswrapper[4985]: I0127 08:54:47.174310 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:47 crc kubenswrapper[4985]: I0127 08:54:47.174322 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:47Z","lastTransitionTime":"2026-01-27T08:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:47 crc kubenswrapper[4985]: I0127 08:54:47.278044 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:47 crc kubenswrapper[4985]: I0127 08:54:47.278097 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:47 crc kubenswrapper[4985]: I0127 08:54:47.278107 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:47 crc kubenswrapper[4985]: I0127 08:54:47.278128 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:47 crc kubenswrapper[4985]: I0127 08:54:47.278138 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:47Z","lastTransitionTime":"2026-01-27T08:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:47 crc kubenswrapper[4985]: I0127 08:54:47.380778 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:47 crc kubenswrapper[4985]: I0127 08:54:47.380853 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:47 crc kubenswrapper[4985]: I0127 08:54:47.380865 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:47 crc kubenswrapper[4985]: I0127 08:54:47.380887 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:47 crc kubenswrapper[4985]: I0127 08:54:47.380899 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:47Z","lastTransitionTime":"2026-01-27T08:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:47 crc kubenswrapper[4985]: I0127 08:54:47.441690 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 15:06:09.624885299 +0000 UTC Jan 27 08:54:47 crc kubenswrapper[4985]: I0127 08:54:47.483763 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:47 crc kubenswrapper[4985]: I0127 08:54:47.483830 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:47 crc kubenswrapper[4985]: I0127 08:54:47.483843 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:47 crc kubenswrapper[4985]: I0127 08:54:47.483863 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:47 crc kubenswrapper[4985]: I0127 08:54:47.483878 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:47Z","lastTransitionTime":"2026-01-27T08:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:47 crc kubenswrapper[4985]: I0127 08:54:47.587704 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:47 crc kubenswrapper[4985]: I0127 08:54:47.587754 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:47 crc kubenswrapper[4985]: I0127 08:54:47.587765 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:47 crc kubenswrapper[4985]: I0127 08:54:47.587785 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:47 crc kubenswrapper[4985]: I0127 08:54:47.587798 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:47Z","lastTransitionTime":"2026-01-27T08:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:47 crc kubenswrapper[4985]: I0127 08:54:47.692048 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:47 crc kubenswrapper[4985]: I0127 08:54:47.692110 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:47 crc kubenswrapper[4985]: I0127 08:54:47.692129 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:47 crc kubenswrapper[4985]: I0127 08:54:47.692157 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:47 crc kubenswrapper[4985]: I0127 08:54:47.692174 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:47Z","lastTransitionTime":"2026-01-27T08:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:47 crc kubenswrapper[4985]: I0127 08:54:47.796266 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:47 crc kubenswrapper[4985]: I0127 08:54:47.796318 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:47 crc kubenswrapper[4985]: I0127 08:54:47.796330 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:47 crc kubenswrapper[4985]: I0127 08:54:47.796363 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:47 crc kubenswrapper[4985]: I0127 08:54:47.796377 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:47Z","lastTransitionTime":"2026-01-27T08:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:47 crc kubenswrapper[4985]: I0127 08:54:47.898469 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:47 crc kubenswrapper[4985]: I0127 08:54:47.898530 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:47 crc kubenswrapper[4985]: I0127 08:54:47.898544 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:47 crc kubenswrapper[4985]: I0127 08:54:47.898562 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:47 crc kubenswrapper[4985]: I0127 08:54:47.898574 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:47Z","lastTransitionTime":"2026-01-27T08:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:48 crc kubenswrapper[4985]: I0127 08:54:48.001200 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:48 crc kubenswrapper[4985]: I0127 08:54:48.001270 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:48 crc kubenswrapper[4985]: I0127 08:54:48.001284 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:48 crc kubenswrapper[4985]: I0127 08:54:48.001304 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:48 crc kubenswrapper[4985]: I0127 08:54:48.001320 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:48Z","lastTransitionTime":"2026-01-27T08:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:48 crc kubenswrapper[4985]: I0127 08:54:48.104081 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:48 crc kubenswrapper[4985]: I0127 08:54:48.104123 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:48 crc kubenswrapper[4985]: I0127 08:54:48.104135 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:48 crc kubenswrapper[4985]: I0127 08:54:48.104153 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:48 crc kubenswrapper[4985]: I0127 08:54:48.104165 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:48Z","lastTransitionTime":"2026-01-27T08:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:48 crc kubenswrapper[4985]: I0127 08:54:48.206915 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:48 crc kubenswrapper[4985]: I0127 08:54:48.207152 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:48 crc kubenswrapper[4985]: I0127 08:54:48.207171 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:48 crc kubenswrapper[4985]: I0127 08:54:48.207202 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:48 crc kubenswrapper[4985]: I0127 08:54:48.207219 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:48Z","lastTransitionTime":"2026-01-27T08:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:48 crc kubenswrapper[4985]: I0127 08:54:48.310401 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:48 crc kubenswrapper[4985]: I0127 08:54:48.310454 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:48 crc kubenswrapper[4985]: I0127 08:54:48.310467 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:48 crc kubenswrapper[4985]: I0127 08:54:48.310485 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:48 crc kubenswrapper[4985]: I0127 08:54:48.310496 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:48Z","lastTransitionTime":"2026-01-27T08:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:48 crc kubenswrapper[4985]: I0127 08:54:48.413581 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:48 crc kubenswrapper[4985]: I0127 08:54:48.413637 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:48 crc kubenswrapper[4985]: I0127 08:54:48.413653 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:48 crc kubenswrapper[4985]: I0127 08:54:48.413673 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:48 crc kubenswrapper[4985]: I0127 08:54:48.413686 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:48Z","lastTransitionTime":"2026-01-27T08:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:48 crc kubenswrapper[4985]: I0127 08:54:48.442064 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 12:11:10.446352502 +0000 UTC Jan 27 08:54:48 crc kubenswrapper[4985]: I0127 08:54:48.451583 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 08:54:48 crc kubenswrapper[4985]: I0127 08:54:48.451734 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cscdv" Jan 27 08:54:48 crc kubenswrapper[4985]: I0127 08:54:48.451797 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 08:54:48 crc kubenswrapper[4985]: I0127 08:54:48.452592 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 08:54:48 crc kubenswrapper[4985]: E0127 08:54:48.451802 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 08:54:48 crc kubenswrapper[4985]: E0127 08:54:48.453348 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cscdv" podUID="5c870945-eecc-4954-a91b-d02cef8f98e2" Jan 27 08:54:48 crc kubenswrapper[4985]: E0127 08:54:48.454197 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 08:54:48 crc kubenswrapper[4985]: E0127 08:54:48.454408 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 08:54:48 crc kubenswrapper[4985]: I0127 08:54:48.516669 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:48 crc kubenswrapper[4985]: I0127 08:54:48.516717 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:48 crc kubenswrapper[4985]: I0127 08:54:48.516727 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:48 crc kubenswrapper[4985]: I0127 08:54:48.516746 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:48 crc kubenswrapper[4985]: I0127 08:54:48.516761 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:48Z","lastTransitionTime":"2026-01-27T08:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:48 crc kubenswrapper[4985]: I0127 08:54:48.619898 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:48 crc kubenswrapper[4985]: I0127 08:54:48.619938 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:48 crc kubenswrapper[4985]: I0127 08:54:48.619951 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:48 crc kubenswrapper[4985]: I0127 08:54:48.619969 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:48 crc kubenswrapper[4985]: I0127 08:54:48.619980 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:48Z","lastTransitionTime":"2026-01-27T08:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:48 crc kubenswrapper[4985]: I0127 08:54:48.723185 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:48 crc kubenswrapper[4985]: I0127 08:54:48.723230 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:48 crc kubenswrapper[4985]: I0127 08:54:48.723245 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:48 crc kubenswrapper[4985]: I0127 08:54:48.723264 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:48 crc kubenswrapper[4985]: I0127 08:54:48.723277 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:48Z","lastTransitionTime":"2026-01-27T08:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:48 crc kubenswrapper[4985]: I0127 08:54:48.826078 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:48 crc kubenswrapper[4985]: I0127 08:54:48.826153 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:48 crc kubenswrapper[4985]: I0127 08:54:48.826177 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:48 crc kubenswrapper[4985]: I0127 08:54:48.826209 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:48 crc kubenswrapper[4985]: I0127 08:54:48.826234 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:48Z","lastTransitionTime":"2026-01-27T08:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:48 crc kubenswrapper[4985]: I0127 08:54:48.929249 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:48 crc kubenswrapper[4985]: I0127 08:54:48.929331 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:48 crc kubenswrapper[4985]: I0127 08:54:48.929343 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:48 crc kubenswrapper[4985]: I0127 08:54:48.929368 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:48 crc kubenswrapper[4985]: I0127 08:54:48.929383 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:48Z","lastTransitionTime":"2026-01-27T08:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:49 crc kubenswrapper[4985]: I0127 08:54:49.032425 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:49 crc kubenswrapper[4985]: I0127 08:54:49.032474 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:49 crc kubenswrapper[4985]: I0127 08:54:49.032489 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:49 crc kubenswrapper[4985]: I0127 08:54:49.032535 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:49 crc kubenswrapper[4985]: I0127 08:54:49.032558 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:49Z","lastTransitionTime":"2026-01-27T08:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:49 crc kubenswrapper[4985]: I0127 08:54:49.135397 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:49 crc kubenswrapper[4985]: I0127 08:54:49.135443 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:49 crc kubenswrapper[4985]: I0127 08:54:49.135457 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:49 crc kubenswrapper[4985]: I0127 08:54:49.135475 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:49 crc kubenswrapper[4985]: I0127 08:54:49.135488 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:49Z","lastTransitionTime":"2026-01-27T08:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:49 crc kubenswrapper[4985]: I0127 08:54:49.206787 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5c870945-eecc-4954-a91b-d02cef8f98e2-metrics-certs\") pod \"network-metrics-daemon-cscdv\" (UID: \"5c870945-eecc-4954-a91b-d02cef8f98e2\") " pod="openshift-multus/network-metrics-daemon-cscdv" Jan 27 08:54:49 crc kubenswrapper[4985]: E0127 08:54:49.207120 4985 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 08:54:49 crc kubenswrapper[4985]: E0127 08:54:49.207204 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5c870945-eecc-4954-a91b-d02cef8f98e2-metrics-certs podName:5c870945-eecc-4954-a91b-d02cef8f98e2 nodeName:}" failed. No retries permitted until 2026-01-27 08:55:21.207183002 +0000 UTC m=+105.498277843 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5c870945-eecc-4954-a91b-d02cef8f98e2-metrics-certs") pod "network-metrics-daemon-cscdv" (UID: "5c870945-eecc-4954-a91b-d02cef8f98e2") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 08:54:49 crc kubenswrapper[4985]: I0127 08:54:49.238407 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:49 crc kubenswrapper[4985]: I0127 08:54:49.238448 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:49 crc kubenswrapper[4985]: I0127 08:54:49.238460 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:49 crc kubenswrapper[4985]: I0127 08:54:49.238478 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:49 crc kubenswrapper[4985]: I0127 08:54:49.238489 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:49Z","lastTransitionTime":"2026-01-27T08:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:49 crc kubenswrapper[4985]: I0127 08:54:49.342248 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:49 crc kubenswrapper[4985]: I0127 08:54:49.342312 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:49 crc kubenswrapper[4985]: I0127 08:54:49.342337 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:49 crc kubenswrapper[4985]: I0127 08:54:49.342365 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:49 crc kubenswrapper[4985]: I0127 08:54:49.342389 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:49Z","lastTransitionTime":"2026-01-27T08:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:49 crc kubenswrapper[4985]: I0127 08:54:49.442601 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 11:40:52.81946652 +0000 UTC Jan 27 08:54:49 crc kubenswrapper[4985]: I0127 08:54:49.445028 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:49 crc kubenswrapper[4985]: I0127 08:54:49.445073 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:49 crc kubenswrapper[4985]: I0127 08:54:49.445087 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:49 crc kubenswrapper[4985]: I0127 08:54:49.445107 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:49 crc kubenswrapper[4985]: I0127 08:54:49.445120 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:49Z","lastTransitionTime":"2026-01-27T08:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:49 crc kubenswrapper[4985]: I0127 08:54:49.548390 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:49 crc kubenswrapper[4985]: I0127 08:54:49.548451 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:49 crc kubenswrapper[4985]: I0127 08:54:49.548461 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:49 crc kubenswrapper[4985]: I0127 08:54:49.548481 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:49 crc kubenswrapper[4985]: I0127 08:54:49.548492 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:49Z","lastTransitionTime":"2026-01-27T08:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:49 crc kubenswrapper[4985]: I0127 08:54:49.652726 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:49 crc kubenswrapper[4985]: I0127 08:54:49.652787 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:49 crc kubenswrapper[4985]: I0127 08:54:49.652800 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:49 crc kubenswrapper[4985]: I0127 08:54:49.652822 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:49 crc kubenswrapper[4985]: I0127 08:54:49.652837 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:49Z","lastTransitionTime":"2026-01-27T08:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:49 crc kubenswrapper[4985]: I0127 08:54:49.756217 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:49 crc kubenswrapper[4985]: I0127 08:54:49.756291 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:49 crc kubenswrapper[4985]: I0127 08:54:49.756311 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:49 crc kubenswrapper[4985]: I0127 08:54:49.756341 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:49 crc kubenswrapper[4985]: I0127 08:54:49.756365 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:49Z","lastTransitionTime":"2026-01-27T08:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:49 crc kubenswrapper[4985]: I0127 08:54:49.859969 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:49 crc kubenswrapper[4985]: I0127 08:54:49.860072 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:49 crc kubenswrapper[4985]: I0127 08:54:49.860144 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:49 crc kubenswrapper[4985]: I0127 08:54:49.860243 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:49 crc kubenswrapper[4985]: I0127 08:54:49.860338 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:49Z","lastTransitionTime":"2026-01-27T08:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:49 crc kubenswrapper[4985]: I0127 08:54:49.938381 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-cqdrf_1ddda14a-730e-4c1f-afea-07c95221ba04/kube-multus/0.log" Jan 27 08:54:49 crc kubenswrapper[4985]: I0127 08:54:49.938443 4985 generic.go:334] "Generic (PLEG): container finished" podID="1ddda14a-730e-4c1f-afea-07c95221ba04" containerID="c411afefb82d30542592e4959d2495036c0408459ba2a16aff75223ad0d29287" exitCode=1 Jan 27 08:54:49 crc kubenswrapper[4985]: I0127 08:54:49.938482 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-cqdrf" event={"ID":"1ddda14a-730e-4c1f-afea-07c95221ba04","Type":"ContainerDied","Data":"c411afefb82d30542592e4959d2495036c0408459ba2a16aff75223ad0d29287"} Jan 27 08:54:49 crc kubenswrapper[4985]: I0127 08:54:49.939107 4985 scope.go:117] "RemoveContainer" containerID="c411afefb82d30542592e4959d2495036c0408459ba2a16aff75223ad0d29287" Jan 27 08:54:49 crc kubenswrapper[4985]: I0127 08:54:49.958377 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rfnvj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea1cd1b8-a185-461d-9302-aa03be205225\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67af065c12addbef44849906b964718074de1f0d7a0b87a028bf989ec28f82ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3459a9dccba77fc189ecac1d0186520dfbd4ce6ea40d9e5b2fdcf2ddabd7875d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3459a9dccba77fc189ecac1d0186520dfbd4ce6ea40d9e5b2fdcf2ddabd7875d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d313b9f0d95603fdc152e96acf5d788d7792df3413c3b8bc25aad20bb702c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d313b9f0d95603fdc152e96acf5d788d7792df3413c3b8bc25aad20bb702c5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:04Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4035a5eff34b30f1fab897a8d8b6026437a1ab1f521f062241a79d070b94bdc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4035a5eff34b30f1fab897a8d8b6026437a1ab1f521f062241a79d070b94bdc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6da7
75953e78b30b1f606e26608dbedae8e08fb12eba87f65356bc439079f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6da775953e78b30b1f606e26608dbedae8e08fb12eba87f65356bc439079f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b32797fe9e64f0e4f0332fa40261fb0feb75b6dc783ff78b0aea5ccb0d4a9ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b32797fe9e64f0e4f0332fa40261fb0feb75b6dc783ff78b0aea5ccb0d4a9ee5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:08Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcaa036060edc039a0c9621176b80823cb9913176b1a1e6b06aa033a8e479b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcaa036060edc039a0c9621176b80823cb9913176b1a1e6b06aa033a8e479b96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rfnvj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:49Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:49 crc kubenswrapper[4985]: I0127 08:54:49.964114 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:49 crc kubenswrapper[4985]: I0127 08:54:49.964241 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:49 crc kubenswrapper[4985]: I0127 08:54:49.964412 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:49 crc kubenswrapper[4985]: I0127 08:54:49.964586 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:49 crc kubenswrapper[4985]: I0127 08:54:49.964864 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:49Z","lastTransitionTime":"2026-01-27T08:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:49 crc kubenswrapper[4985]: I0127 08:54:49.974381 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s74hq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad423d26-ea00-4a86-8eed-bba6433ce382\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://031493bfd9eba63a4627b6a0ec45bc556e8a6cae213a84f7b158e2bede2da5a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hg5xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42a6371f6e7f2b3811af6ec717f15eff6a85c8c39caf011e1173a4fcaf20f29a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hg5xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-s74hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:49Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:50 crc kubenswrapper[4985]: I0127 08:54:50.002457 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7e08bcf-6937-4593-9736-170df821dd88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1dab42ffc04840ad2205b50c568ece39850188ccd9f97d8186c7f0b86e06805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7f7c69cc8bc134cfbf3552f10eb7abd8b5df115fef6ce86a2658259a9635bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0450f271c01ac528419de0165dd9b88b8d5913062a7cd5affce7050842f91a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b33bda4f575f8f14a3439f00a39eda2ee61f7b38500f5372c17d014df1098535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c502ce0c2374fb1b79cd8dc7e7c9eba4a67f998fee012827524d775eccbe4de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d423f866a41d6c8a926fbabe4f0a69519e10a9448502c6363b35cb550ba904d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d423f866a41d6c8a926fbabe4f0a69519e10a9448502c6363b35cb550ba904d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T08:53:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://917f905b67f494d8d44f5a00e53ae72cb9c2b307f7e7da17af7fd20db4ed8704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://917f905b67f494d8d44f5a00e53ae72cb9c2b307f7e7da17af7fd20db4ed8704\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8692ba1b6e917a57164d549dba0fc26ad49b14b9bc8a39772daddfcd5f70a008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8692ba1b6e917a57164d549dba0fc26ad49b14b9bc8a39772daddfcd5f70a008\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:49Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:50 crc kubenswrapper[4985]: I0127 08:54:50.019069 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"797a9d27-06cc-44aa-811f-881bdf2d1e7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a5ce9f15fd3505c744967be012b7eed6d909724e9b71ba07d7e9d68eb40cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88c257e53b27d6dda5999a3053f9c62b54331bf034225c118dddfed685549827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87667099919fbe74e54396b3e8b538627769f2401d318326eb9a1d6a88bda640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kub
ernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3ad4e68bbb4b8338f534dc026ca9f1fe9fb161b29fee8945f1789a66965dea2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3ad4e68bbb4b8338f534dc026ca9f1fe9fb161b29fee8945f1789a66965dea2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:50Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:50 crc kubenswrapper[4985]: I0127 08:54:50.035998 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5z8px" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7997cb84-9997-4cf4-8794-2eb145a5c324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://520da40d4496e0fd47e4e091e9eba349e1bd3bf2f97898f6aac53a1b1b8925f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkwj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5z8px\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:50Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:50 crc kubenswrapper[4985]: I0127 08:54:50.051141 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cqdrf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ddda14a-730e-4c1f-afea-07c95221ba04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c411afefb82d30542592e4959d2495036c0408459ba2a16aff75223ad0d29287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c411afefb82d30542592e4959d2495036c0408459ba2a16aff75223ad0d29287\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T08:54:49Z\\\",\\\"message\\\":\\\"2026-01-27T08:54:04+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_51a03737-5946-4b3b-8d44-7172f998ced1\\\\n2026-01-27T08:54:04+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_51a03737-5946-4b3b-8d44-7172f998ced1 to /host/opt/cni/bin/\\\\n2026-01-27T08:54:04Z [verbose] multus-daemon started\\\\n2026-01-27T08:54:04Z [verbose] Readiness Indicator file check\\\\n2026-01-27T08:54:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-258cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cqdrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:50Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:50 crc kubenswrapper[4985]: I0127 08:54:50.068622 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a4e626ecb5f5c1f869a0271e66541b76d4ab763322e2d626c4d86af64bccad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:50Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:50 crc kubenswrapper[4985]: I0127 08:54:50.068855 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:50 crc kubenswrapper[4985]: I0127 08:54:50.068885 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:50 crc kubenswrapper[4985]: I0127 08:54:50.068896 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:50 crc kubenswrapper[4985]: I0127 08:54:50.068913 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:50 crc kubenswrapper[4985]: I0127 08:54:50.068923 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:50Z","lastTransitionTime":"2026-01-27T08:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:50 crc kubenswrapper[4985]: I0127 08:54:50.085191 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:50Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:50 crc kubenswrapper[4985]: I0127 08:54:50.098481 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cscdv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c870945-eecc-4954-a91b-d02cef8f98e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k24x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k24x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cscdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:50Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:50 crc 
kubenswrapper[4985]: I0127 08:54:50.114958 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9155a9e-2bdf-4f6f-baa7-0d7c6baac37d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f776820a00f7ca6b6867a188ee633debccb2bc22aaff8c659e452c2667933e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba179c4dde0de7
6f4a8dbfb8ceb98d2e0dbead5e955bd71171914fd913894412\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dd58dd369dc5654aa8ab2d34c77a69607b51e97ae36947b88caf6a2de7c5fbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd67389a909eaadfb147d17b98a2147bb96aea606ba994568389cb5be9ee6f93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://b491f71d942363b5d2bab13a5219b0eb9b7f7617c0978bfe012946aa241a13c1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"message\\\":\\\" 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 08:53:55.866201 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 08:53:55.866247 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 08:53:55.866252 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 08:53:55.866257 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0127 08:53:55.866955 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-231041040/tls.crt::/tmp/serving-cert-231041040/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769504019\\\\\\\\\\\\\\\" (2026-01-27 08:53:39 +0000 UTC to 2026-02-26 08:53:40 +0000 UTC (now=2026-01-27 08:53:55.866923826 +0000 UTC))\\\\\\\"\\\\nI0127 08:53:55.867136 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769504030\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769504030\\\\\\\\\\\\\\\" (2026-01-27 07:53:50 +0000 UTC to 2027-01-27 07:53:50 +0000 UTC (now=2026-01-27 08:53:55.867111136 +0000 UTC))\\\\\\\"\\\\nI0127 08:53:55.867161 1 
secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 08:53:55.867183 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0127 08:53:55.867215 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-231041040/tls.crt::/tmp/serving-cert-231041040/tls.key\\\\\\\"\\\\nI0127 08:53:55.867343 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0127 08:53:55.868835 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b09da7bae5a0f15ab2c5463373193603d1412170828b99490007cee7b067df63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc3
5825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:50Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:50 crc kubenswrapper[4985]: I0127 08:54:50.128550 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:50Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:50 crc kubenswrapper[4985]: I0127 08:54:50.149929 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://740b7fd6437a51492236c929096639016d34d684d2742404a428355b3d9ee51e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e53f9d50e5099f9c55a0bf8a63c31bf7687c7b9df334251189af1c4c02df5dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b176f2daca0d26fc9e3d30a8547b5526c90221ce2932c01c876988e8606b4dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dbb8096184a88c573060b14b7b5cac9a5abdc5af97ad7efdc50ba216e77522c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f164f612d1435a7f2a6677cb810bc836a44faa77952909e4290e31806ae631ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bfc598056438a398f1ea20e047af950aa776a7e43bdb694728c1c138dff6440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0db1aeddec065e7b000a8494a25dcd6d16757437a78af2c71dd554c15c74a826\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0db1aeddec065e7b000a8494a25dcd6d16757437a78af2c71dd554c15c74a826\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T08:54:27Z\\\",\\\"message\\\":\\\"oved *v1.Node event handler 7\\\\nI0127 08:54:27.563980 6617 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0127 08:54:27.564011 6617 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 08:54:27.564037 6617 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0127 08:54:27.564061 6617 handler.go:208] Removed *v1.Pod event 
handler 3\\\\nI0127 08:54:27.564190 6617 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0127 08:54:27.564242 6617 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 08:54:27.564306 6617 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0127 08:54:27.564311 6617 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 08:54:27.564322 6617 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 08:54:27.564351 6617 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 08:54:27.564376 6617 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 08:54:27.564383 6617 factory.go:656] Stopping watch factory\\\\nI0127 08:54:27.564403 6617 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 08:54:27.564427 6617 ovnkube.go:599] Stopped ovnkube\\\\nI0127 08:54:27.564493 6617 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 08:54:27.564598 6617 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-kqdf4_openshift-ovn-kubernetes(c6239c91-d93d-4db8-ac4b-d44ddbc7c100)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a1f1e90a882fa697c344cb87db77d2f0b02b33920f6555ca549ad51b1fb5c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c1c716f751dd0e139554e4bef4fc33694f4afa5482605f651326882af8d99c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c1c716f751dd0e139
554e4bef4fc33694f4afa5482605f651326882af8d99c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kqdf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:50Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:50 crc kubenswrapper[4985]: I0127 08:54:50.171728 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed8d2575835b3ab3d3197bbfbc027e2de0e2aefe10c971838bf4b72f804c4582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T08:54:50Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:50 crc kubenswrapper[4985]: I0127 08:54:50.173598 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:50 crc kubenswrapper[4985]: I0127 08:54:50.173643 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:50 crc kubenswrapper[4985]: I0127 08:54:50.173661 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:50 crc kubenswrapper[4985]: I0127 08:54:50.173688 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:50 crc kubenswrapper[4985]: I0127 08:54:50.173705 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:50Z","lastTransitionTime":"2026-01-27T08:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:50 crc kubenswrapper[4985]: I0127 08:54:50.187756 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c066dd2f-48d4-4f4f-935d-0e772678e610\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1beb432862ae348b405b1a51f1a187fd469f9df0605fa9e8a39b1c9a34ae8e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vtb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d6574971ded4a24364f08396c4f0523ddfd74f76d0bb386072ffb86c812d7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vtb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lp9n5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:50Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:50 crc kubenswrapper[4985]: I0127 08:54:50.201709 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlccz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ba17902-809a-4efc-9a8c-6f9b611c2af9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00cd6621bcfc77aeb9ecc4d6894b33cd3ef1f975b3c593e0eff812ef51978f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:50Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:50 crc kubenswrapper[4985]: I0127 08:54:50.219737 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46f88416-c54a-4ab5-a4d3-07f71bae9c33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d59e5d6d903bfaaedef392fdd94df2da1e4d965ddeb6b25d64b225725b39dd0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c207bc405194647411332c492e34b0c95d3321cf626a57a4a750fbcea34f54ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8ce06c7792357df5655ff0be13f13c8b06bb1a2eb27c4bbbff11b94a62f29e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7554f25de6e4649eafd52356c5c255d0a2dc73b0dd6c9a9bcea3ee383bef17db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:50Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:50 crc kubenswrapper[4985]: I0127 08:54:50.238500 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b46c56d522e6b6c9c18b7bd9fc104d6657644d0b41fdcad15ac9fb802ca5fd0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9506e391bd8c293440c55dc71a239b25d6d6b3431ad22c5ca699baa4d09c325b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:50Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:50 crc kubenswrapper[4985]: I0127 08:54:50.253760 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:50Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:50 crc kubenswrapper[4985]: I0127 08:54:50.276644 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:50 crc kubenswrapper[4985]: I0127 08:54:50.276704 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:50 crc 
kubenswrapper[4985]: I0127 08:54:50.276720 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:50 crc kubenswrapper[4985]: I0127 08:54:50.276747 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:50 crc kubenswrapper[4985]: I0127 08:54:50.276766 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:50Z","lastTransitionTime":"2026-01-27T08:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:50 crc kubenswrapper[4985]: I0127 08:54:50.379926 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:50 crc kubenswrapper[4985]: I0127 08:54:50.379980 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:50 crc kubenswrapper[4985]: I0127 08:54:50.379992 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:50 crc kubenswrapper[4985]: I0127 08:54:50.380009 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:50 crc kubenswrapper[4985]: I0127 08:54:50.380023 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:50Z","lastTransitionTime":"2026-01-27T08:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:50 crc kubenswrapper[4985]: I0127 08:54:50.444323 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 04:03:34.763566335 +0000 UTC Jan 27 08:54:50 crc kubenswrapper[4985]: I0127 08:54:50.453154 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cscdv" Jan 27 08:54:50 crc kubenswrapper[4985]: I0127 08:54:50.453168 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 08:54:50 crc kubenswrapper[4985]: E0127 08:54:50.453551 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cscdv" podUID="5c870945-eecc-4954-a91b-d02cef8f98e2" Jan 27 08:54:50 crc kubenswrapper[4985]: I0127 08:54:50.453227 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 08:54:50 crc kubenswrapper[4985]: E0127 08:54:50.453796 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 08:54:50 crc kubenswrapper[4985]: E0127 08:54:50.453594 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 08:54:50 crc kubenswrapper[4985]: I0127 08:54:50.453227 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 08:54:50 crc kubenswrapper[4985]: E0127 08:54:50.454114 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 08:54:50 crc kubenswrapper[4985]: I0127 08:54:50.482217 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:50 crc kubenswrapper[4985]: I0127 08:54:50.482275 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:50 crc kubenswrapper[4985]: I0127 08:54:50.482287 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:50 crc kubenswrapper[4985]: I0127 08:54:50.482306 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:50 crc kubenswrapper[4985]: I0127 08:54:50.482318 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:50Z","lastTransitionTime":"2026-01-27T08:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:50 crc kubenswrapper[4985]: I0127 08:54:50.585643 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:50 crc kubenswrapper[4985]: I0127 08:54:50.585693 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:50 crc kubenswrapper[4985]: I0127 08:54:50.585706 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:50 crc kubenswrapper[4985]: I0127 08:54:50.585727 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:50 crc kubenswrapper[4985]: I0127 08:54:50.585743 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:50Z","lastTransitionTime":"2026-01-27T08:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:50 crc kubenswrapper[4985]: I0127 08:54:50.688297 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:50 crc kubenswrapper[4985]: I0127 08:54:50.688344 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:50 crc kubenswrapper[4985]: I0127 08:54:50.688355 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:50 crc kubenswrapper[4985]: I0127 08:54:50.688373 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:50 crc kubenswrapper[4985]: I0127 08:54:50.688386 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:50Z","lastTransitionTime":"2026-01-27T08:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:50 crc kubenswrapper[4985]: I0127 08:54:50.790955 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:50 crc kubenswrapper[4985]: I0127 08:54:50.791056 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:50 crc kubenswrapper[4985]: I0127 08:54:50.791076 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:50 crc kubenswrapper[4985]: I0127 08:54:50.791107 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:50 crc kubenswrapper[4985]: I0127 08:54:50.791128 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:50Z","lastTransitionTime":"2026-01-27T08:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:50 crc kubenswrapper[4985]: I0127 08:54:50.894383 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:50 crc kubenswrapper[4985]: I0127 08:54:50.894438 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:50 crc kubenswrapper[4985]: I0127 08:54:50.894454 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:50 crc kubenswrapper[4985]: I0127 08:54:50.894480 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:50 crc kubenswrapper[4985]: I0127 08:54:50.894499 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:50Z","lastTransitionTime":"2026-01-27T08:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:50 crc kubenswrapper[4985]: I0127 08:54:50.944763 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-cqdrf_1ddda14a-730e-4c1f-afea-07c95221ba04/kube-multus/0.log" Jan 27 08:54:50 crc kubenswrapper[4985]: I0127 08:54:50.944860 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-cqdrf" event={"ID":"1ddda14a-730e-4c1f-afea-07c95221ba04","Type":"ContainerStarted","Data":"611086eedd8a7318bff583bd65a81b3d4dd59b8be78744d6b5280bcbf9bd74b0"} Jan 27 08:54:50 crc kubenswrapper[4985]: I0127 08:54:50.966004 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46f88416-c54a-4ab5-a4d3-07f71bae9c33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d59e5d6d903bfaaedef392fdd94df2da1e4d965ddeb6b25d64b225725b39dd0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c207bc405194647411332c492e34b0c95d3321cf626a57a4a750fbcea34f54ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8ce06c7792357df5655ff0be13f13c8b06bb1a2eb27c4bbbff11b94a62f29e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/
static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7554f25de6e4649eafd52356c5c255d0a2dc73b0dd6c9a9bcea3ee383bef17db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:50Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:50 crc kubenswrapper[4985]: I0127 08:54:50.986440 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b46c56d522e6b6c9c18b7bd9fc104d6657644d0b41fdcad15ac9fb802ca5fd0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9506e391bd8c293440c55dc71a239b25d6d6b3431ad22c5ca699baa4d09c325b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:50Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:50 crc kubenswrapper[4985]: I0127 08:54:50.997787 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:50 crc kubenswrapper[4985]: I0127 08:54:50.997822 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:50 crc kubenswrapper[4985]: I0127 08:54:50.997832 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:50 crc kubenswrapper[4985]: I0127 08:54:50.997852 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:50 crc kubenswrapper[4985]: I0127 08:54:50.997873 4985 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:50Z","lastTransitionTime":"2026-01-27T08:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:51 crc kubenswrapper[4985]: I0127 08:54:51.003364 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:51Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:51 crc kubenswrapper[4985]: I0127 08:54:51.024341 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed8d2575835b3ab3d3197bbfbc027e2de0e2aefe10c971838bf4b72f804c4582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T08:54:51Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:51 crc kubenswrapper[4985]: I0127 08:54:51.040620 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c066dd2f-48d4-4f4f-935d-0e772678e610\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1beb432862ae348b405b1a51f1a187fd469f9df0605fa9e8a39b1c9a34ae8e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vtb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d6574971ded4a24364f08396c4f0523ddfd74f76d0bb386072ffb86c812d7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vtb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lp9n5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:51Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:51 crc kubenswrapper[4985]: I0127 08:54:51.052715 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlccz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ba17902-809a-4efc-9a8c-6f9b611c2af9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00cd6621bcfc77aeb9ecc4d6894b33cd3ef1f975b3c593e0eff812ef51978f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:51Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:51 crc kubenswrapper[4985]: I0127 08:54:51.077293 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7e08bcf-6937-4593-9736-170df821dd88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1dab42ffc04840ad2205b50c568ece39850188ccd9f97d8186c7f0b86e06805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7f7c69cc8bc134cfbf3552f10eb7abd8b5df115fef6ce86a2658259a9635bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0450f271c01ac528419de0165dd9b88b8d5913062a7cd5affce7050842f91a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08
:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b33bda4f575f8f14a3439f00a39eda2ee61f7b38500f5372c17d014df1098535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c502ce0c2374fb1b79cd8dc7e7c9eba4a67f998fee012827524d775eccbe4de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d423f866a41d6c8a926fbabe4f0a69519e10a9448502c6363b35cb550ba904d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d423f866a41d6c8a926fbabe4f0a69519e10a9448502c6363b35cb550ba904d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://917f905b67f494d8d44f5a00e53ae72cb9c2b307f7e7da17af7fd20db4ed8704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://917f905b67f494d8d44f5a00e53ae72cb9c2b307f7e7da17af7fd20db4ed8704\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8692ba1b6e917a57164d549dba0fc26ad49b14b9bc8a39772daddfcd5f70a008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8692ba1b6e917a57164d549dba0fc26ad49b14b9bc8a39772daddfcd5f70a008\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:51Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:51 crc kubenswrapper[4985]: I0127 08:54:51.091423 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"797a9d27-06cc-44aa-811f-881bdf2d1e7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a5ce9f15fd3505c744967be012b7eed6d909724e9b71ba07d7e9d68eb40cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88c257e53b27d6dda5999a3053f9c62b54331bf034225c118dddfed685549827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87667099919fbe74e54396b3e8b538627769f2401d318326eb9a1d6a88bda640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3ad4e68bbb4b8338f534dc026ca9f1fe9fb161b29fee8945f1789a66965dea2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://c3ad4e68bbb4b8338f534dc026ca9f1fe9fb161b29fee8945f1789a66965dea2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:51Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:51 crc kubenswrapper[4985]: I0127 08:54:51.100779 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:51 crc kubenswrapper[4985]: I0127 08:54:51.100830 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:51 crc kubenswrapper[4985]: I0127 08:54:51.100856 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:51 crc kubenswrapper[4985]: I0127 08:54:51.100880 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:51 crc kubenswrapper[4985]: I0127 08:54:51.100896 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:51Z","lastTransitionTime":"2026-01-27T08:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:51 crc kubenswrapper[4985]: I0127 08:54:51.102963 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5z8px" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7997cb84-9997-4cf4-8794-2eb145a5c324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://520da40d4496e0fd47e4e091e9eba349e1bd3bf2f97898f6aac53a1b1b8925f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkwj4\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5z8px\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:51Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:51 crc kubenswrapper[4985]: I0127 08:54:51.118829 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rfnvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea1cd1b8-a185-461d-9302-aa03be205225\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67af065c12addbef44849906b964718074de1f0d7a0b87a028bf989ec28f82ee\\\",\\\"image\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3459a9dccba77fc189ecac1d0186520dfbd4ce6ea40d9e5b2fdcf2ddabd7875d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3459a9dccba77fc189ecac1d0186520dfbd4ce6ea40d9e5b2fdcf2ddabd7875d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\
",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d313b9f0d95603fdc152e96acf5d788d7792df3413c3b8bc25aad20bb702c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d313b9f0d95603fdc152e96acf5d788d7792df3413c3b8bc25aad20bb702c5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4035a5eff34b30f1fab897a8d8b6026437a1ab1f521f062241a79d070b94bdc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-
cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4035a5eff34b30f1fab897a8d8b6026437a1ab1f521f062241a79d070b94bdc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6da775953e78b30b1f606e26608dbedae8e08fb12eba87f65356bc439079f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6da775953e78b30b1f606e26608dbedae8e08fb12eba87f65356bc439079f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":
\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b32797fe9e64f0e4f0332fa40261fb0feb75b6dc783ff78b0aea5ccb0d4a9ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b32797fe9e64f0e4f0332fa40261fb0feb75b6dc783ff78b0aea5ccb0d4a9ee5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcaa036060edc039a0c9621176b80823cb9913176b1a1e6b06aa033a8e479b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"term
inated\\\":{\\\"containerID\\\":\\\"cri-o://bcaa036060edc039a0c9621176b80823cb9913176b1a1e6b06aa033a8e479b96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rfnvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:51Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:51 crc kubenswrapper[4985]: I0127 08:54:51.133098 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s74hq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad423d26-ea00-4a86-8eed-bba6433ce382\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://031493bfd9eba63a4627b6a0ec45bc556e8a6cae213a84f7b158e2bede2da5a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hg5xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42a6371f6e7f2b3811af6ec717f15eff6a85c
8c39caf011e1173a4fcaf20f29a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hg5xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-s74hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:51Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:51 crc kubenswrapper[4985]: I0127 08:54:51.149065 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a4e626ecb5f5c1f869a0271e66541b76d4ab763322e2d626c4d86af64bccad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:51Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:51 crc kubenswrapper[4985]: I0127 08:54:51.163241 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:51Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:51 crc kubenswrapper[4985]: I0127 08:54:51.176059 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cqdrf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ddda14a-730e-4c1f-afea-07c95221ba04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://611086eedd8a7318bff583bd65a81b3d4dd59b8be78744d6b5280bcbf9bd74b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c411afefb82d30542592e4959d2495036c0408459ba2a16aff75223ad0d29287\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T08:54:49Z\\\",\\\"message\\\":\\\"2026-01-27T08:54:04+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_51a03737-5946-4b3b-8d44-7172f998ced1\\\\n2026-01-27T08:54:04+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_51a03737-5946-4b3b-8d44-7172f998ced1 to /host/opt/cni/bin/\\\\n2026-01-27T08:54:04Z [verbose] multus-daemon started\\\\n2026-01-27T08:54:04Z [verbose] 
Readiness Indicator file check\\\\n2026-01-27T08:54:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-258cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cqdrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:51Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:51 crc kubenswrapper[4985]: I0127 08:54:51.188873 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9155a9e-2bdf-4f6f-baa7-0d7c6baac37d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f776820a00f7ca6b6867a188ee633debccb2bc22aaff8c659e452c2667933e6\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba179c4dde0de76f4a8dbfb8ceb98d2e0dbead5e955bd71171914fd913894412\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dd58dd369dc5654aa8ab2d34c77a69607b51e97ae36947b88caf6a2de7c5fbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-s
yncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd67389a909eaadfb147d17b98a2147bb96aea606ba994568389cb5be9ee6f93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b491f71d942363b5d2bab13a5219b0eb9b7f7617c0978bfe012946aa241a13c1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"message\\\":\\\" 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 08:53:55.866201 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 08:53:55.866247 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 08:53:55.866252 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 08:53:55.866257 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0127 08:53:55.866955 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" 
certName=\\\\\\\"serving-cert::/tmp/serving-cert-231041040/tls.crt::/tmp/serving-cert-231041040/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769504019\\\\\\\\\\\\\\\" (2026-01-27 08:53:39 +0000 UTC to 2026-02-26 08:53:40 +0000 UTC (now=2026-01-27 08:53:55.866923826 +0000 UTC))\\\\\\\"\\\\nI0127 08:53:55.867136 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769504030\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769504030\\\\\\\\\\\\\\\" (2026-01-27 07:53:50 +0000 UTC to 2027-01-27 07:53:50 +0000 UTC (now=2026-01-27 08:53:55.867111136 +0000 UTC))\\\\\\\"\\\\nI0127 08:53:55.867161 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 08:53:55.867183 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0127 08:53:55.867215 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-231041040/tls.crt::/tmp/serving-cert-231041040/tls.key\\\\\\\"\\\\nI0127 08:53:55.867343 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0127 08:53:55.868835 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b09da7bae5a0f15ab2c5463373193603d1412170828b99490007cee7b067df63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:51Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:51 crc kubenswrapper[4985]: I0127 08:54:51.202817 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:51Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:51 crc kubenswrapper[4985]: I0127 08:54:51.203738 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:51 crc kubenswrapper[4985]: I0127 08:54:51.203812 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:51 crc kubenswrapper[4985]: I0127 08:54:51.203828 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:51 crc 
kubenswrapper[4985]: I0127 08:54:51.203852 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:51 crc kubenswrapper[4985]: I0127 08:54:51.203866 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:51Z","lastTransitionTime":"2026-01-27T08:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:51 crc kubenswrapper[4985]: I0127 08:54:51.224414 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://740b7fd6437a51492236c929096639016d34d684d2742404a428355b3d9ee51e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e53f9d50e5099f9c55a0bf8a63c31bf7687c7b9df334251189af1c4c02df5dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b176f2daca0d26fc9e3d30a8547b5526c90221ce2932c01c876988e8606b4dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dbb8096184a88c573060b14b7b5cac9a5abdc5af97ad7efdc50ba216e77522c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f164f612d1435a7f2a6677cb810bc836a44faa77952909e4290e31806ae631ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bfc598056438a398f1ea20e047af950aa776a7e43bdb694728c1c138dff6440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0db1aeddec065e7b000a8494a25dcd6d16757437a78af2c71dd554c15c74a826\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0db1aeddec065e7b000a8494a25dcd6d16757437a78af2c71dd554c15c74a826\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T08:54:27Z\\\",\\\"message\\\":\\\"oved *v1.Node event handler 7\\\\nI0127 08:54:27.563980 6617 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0127 08:54:27.564011 6617 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 08:54:27.564037 6617 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0127 08:54:27.564061 6617 handler.go:208] Removed *v1.Pod event 
handler 3\\\\nI0127 08:54:27.564190 6617 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0127 08:54:27.564242 6617 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 08:54:27.564306 6617 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0127 08:54:27.564311 6617 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 08:54:27.564322 6617 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 08:54:27.564351 6617 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 08:54:27.564376 6617 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 08:54:27.564383 6617 factory.go:656] Stopping watch factory\\\\nI0127 08:54:27.564403 6617 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 08:54:27.564427 6617 ovnkube.go:599] Stopped ovnkube\\\\nI0127 08:54:27.564493 6617 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 08:54:27.564598 6617 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-kqdf4_openshift-ovn-kubernetes(c6239c91-d93d-4db8-ac4b-d44ddbc7c100)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a1f1e90a882fa697c344cb87db77d2f0b02b33920f6555ca549ad51b1fb5c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c1c716f751dd0e139554e4bef4fc33694f4afa5482605f651326882af8d99c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c1c716f751dd0e139
554e4bef4fc33694f4afa5482605f651326882af8d99c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kqdf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:51Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:51 crc kubenswrapper[4985]: I0127 08:54:51.236583 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cscdv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c870945-eecc-4954-a91b-d02cef8f98e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:17Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k24x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k24x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cscdv\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:51Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:51 crc kubenswrapper[4985]: I0127 08:54:51.307085 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:51 crc kubenswrapper[4985]: I0127 08:54:51.307122 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:51 crc kubenswrapper[4985]: I0127 08:54:51.307164 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:51 crc kubenswrapper[4985]: I0127 08:54:51.307182 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:51 crc kubenswrapper[4985]: I0127 08:54:51.307195 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:51Z","lastTransitionTime":"2026-01-27T08:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:51 crc kubenswrapper[4985]: I0127 08:54:51.411100 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:51 crc kubenswrapper[4985]: I0127 08:54:51.411155 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:51 crc kubenswrapper[4985]: I0127 08:54:51.411167 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:51 crc kubenswrapper[4985]: I0127 08:54:51.411188 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:51 crc kubenswrapper[4985]: I0127 08:54:51.411204 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:51Z","lastTransitionTime":"2026-01-27T08:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:51 crc kubenswrapper[4985]: I0127 08:54:51.445075 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 17:14:11.5469195 +0000 UTC Jan 27 08:54:51 crc kubenswrapper[4985]: I0127 08:54:51.514278 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:51 crc kubenswrapper[4985]: I0127 08:54:51.514757 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:51 crc kubenswrapper[4985]: I0127 08:54:51.514961 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:51 crc kubenswrapper[4985]: I0127 08:54:51.515162 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:51 crc kubenswrapper[4985]: I0127 08:54:51.515585 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:51Z","lastTransitionTime":"2026-01-27T08:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:51 crc kubenswrapper[4985]: I0127 08:54:51.619923 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:51 crc kubenswrapper[4985]: I0127 08:54:51.619980 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:51 crc kubenswrapper[4985]: I0127 08:54:51.619994 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:51 crc kubenswrapper[4985]: I0127 08:54:51.620021 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:51 crc kubenswrapper[4985]: I0127 08:54:51.620038 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:51Z","lastTransitionTime":"2026-01-27T08:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:51 crc kubenswrapper[4985]: I0127 08:54:51.723496 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:51 crc kubenswrapper[4985]: I0127 08:54:51.723631 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:51 crc kubenswrapper[4985]: I0127 08:54:51.723649 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:51 crc kubenswrapper[4985]: I0127 08:54:51.723674 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:51 crc kubenswrapper[4985]: I0127 08:54:51.723693 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:51Z","lastTransitionTime":"2026-01-27T08:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:51 crc kubenswrapper[4985]: I0127 08:54:51.827677 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:51 crc kubenswrapper[4985]: I0127 08:54:51.827756 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:51 crc kubenswrapper[4985]: I0127 08:54:51.827775 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:51 crc kubenswrapper[4985]: I0127 08:54:51.827801 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:51 crc kubenswrapper[4985]: I0127 08:54:51.827820 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:51Z","lastTransitionTime":"2026-01-27T08:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:51 crc kubenswrapper[4985]: I0127 08:54:51.930835 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:51 crc kubenswrapper[4985]: I0127 08:54:51.930905 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:51 crc kubenswrapper[4985]: I0127 08:54:51.930928 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:51 crc kubenswrapper[4985]: I0127 08:54:51.930962 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:51 crc kubenswrapper[4985]: I0127 08:54:51.930984 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:51Z","lastTransitionTime":"2026-01-27T08:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:52 crc kubenswrapper[4985]: I0127 08:54:52.033684 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:52 crc kubenswrapper[4985]: I0127 08:54:52.033722 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:52 crc kubenswrapper[4985]: I0127 08:54:52.033732 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:52 crc kubenswrapper[4985]: I0127 08:54:52.033748 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:52 crc kubenswrapper[4985]: I0127 08:54:52.033759 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:52Z","lastTransitionTime":"2026-01-27T08:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:52 crc kubenswrapper[4985]: I0127 08:54:52.136903 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:52 crc kubenswrapper[4985]: I0127 08:54:52.136949 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:52 crc kubenswrapper[4985]: I0127 08:54:52.136964 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:52 crc kubenswrapper[4985]: I0127 08:54:52.136984 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:52 crc kubenswrapper[4985]: I0127 08:54:52.136997 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:52Z","lastTransitionTime":"2026-01-27T08:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:52 crc kubenswrapper[4985]: I0127 08:54:52.242265 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:52 crc kubenswrapper[4985]: I0127 08:54:52.242341 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:52 crc kubenswrapper[4985]: I0127 08:54:52.242358 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:52 crc kubenswrapper[4985]: I0127 08:54:52.242387 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:52 crc kubenswrapper[4985]: I0127 08:54:52.242405 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:52Z","lastTransitionTime":"2026-01-27T08:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:52 crc kubenswrapper[4985]: I0127 08:54:52.345699 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:52 crc kubenswrapper[4985]: I0127 08:54:52.345771 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:52 crc kubenswrapper[4985]: I0127 08:54:52.345794 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:52 crc kubenswrapper[4985]: I0127 08:54:52.345828 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:52 crc kubenswrapper[4985]: I0127 08:54:52.345854 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:52Z","lastTransitionTime":"2026-01-27T08:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:52 crc kubenswrapper[4985]: I0127 08:54:52.445257 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 07:42:26.782283645 +0000 UTC Jan 27 08:54:52 crc kubenswrapper[4985]: I0127 08:54:52.448649 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:52 crc kubenswrapper[4985]: I0127 08:54:52.449008 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:52 crc kubenswrapper[4985]: I0127 08:54:52.449207 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:52 crc kubenswrapper[4985]: I0127 08:54:52.449625 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:52 crc kubenswrapper[4985]: I0127 08:54:52.449798 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:52Z","lastTransitionTime":"2026-01-27T08:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:52 crc kubenswrapper[4985]: I0127 08:54:52.451854 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 08:54:52 crc kubenswrapper[4985]: E0127 08:54:52.452149 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 08:54:52 crc kubenswrapper[4985]: I0127 08:54:52.452642 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 08:54:52 crc kubenswrapper[4985]: E0127 08:54:52.452895 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 08:54:52 crc kubenswrapper[4985]: I0127 08:54:52.453247 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 08:54:52 crc kubenswrapper[4985]: E0127 08:54:52.453474 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 08:54:52 crc kubenswrapper[4985]: I0127 08:54:52.454144 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-cscdv" Jan 27 08:54:52 crc kubenswrapper[4985]: E0127 08:54:52.454259 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cscdv" podUID="5c870945-eecc-4954-a91b-d02cef8f98e2" Jan 27 08:54:52 crc kubenswrapper[4985]: I0127 08:54:52.553249 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:52 crc kubenswrapper[4985]: I0127 08:54:52.553297 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:52 crc kubenswrapper[4985]: I0127 08:54:52.553309 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:52 crc kubenswrapper[4985]: I0127 08:54:52.553327 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:52 crc kubenswrapper[4985]: I0127 08:54:52.553353 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:52Z","lastTransitionTime":"2026-01-27T08:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:52 crc kubenswrapper[4985]: I0127 08:54:52.656356 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:52 crc kubenswrapper[4985]: I0127 08:54:52.656442 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:52 crc kubenswrapper[4985]: I0127 08:54:52.656461 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:52 crc kubenswrapper[4985]: I0127 08:54:52.656492 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:52 crc kubenswrapper[4985]: I0127 08:54:52.656552 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:52Z","lastTransitionTime":"2026-01-27T08:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:52 crc kubenswrapper[4985]: I0127 08:54:52.760492 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:52 crc kubenswrapper[4985]: I0127 08:54:52.760576 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:52 crc kubenswrapper[4985]: I0127 08:54:52.760586 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:52 crc kubenswrapper[4985]: I0127 08:54:52.760606 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:52 crc kubenswrapper[4985]: I0127 08:54:52.760623 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:52Z","lastTransitionTime":"2026-01-27T08:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:52 crc kubenswrapper[4985]: I0127 08:54:52.863307 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:52 crc kubenswrapper[4985]: I0127 08:54:52.863642 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:52 crc kubenswrapper[4985]: I0127 08:54:52.863883 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:52 crc kubenswrapper[4985]: I0127 08:54:52.864091 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:52 crc kubenswrapper[4985]: I0127 08:54:52.864252 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:52Z","lastTransitionTime":"2026-01-27T08:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:52 crc kubenswrapper[4985]: I0127 08:54:52.968843 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:52 crc kubenswrapper[4985]: I0127 08:54:52.968910 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:52 crc kubenswrapper[4985]: I0127 08:54:52.968929 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:52 crc kubenswrapper[4985]: I0127 08:54:52.968954 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:52 crc kubenswrapper[4985]: I0127 08:54:52.968976 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:52Z","lastTransitionTime":"2026-01-27T08:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:53 crc kubenswrapper[4985]: I0127 08:54:53.072075 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:53 crc kubenswrapper[4985]: I0127 08:54:53.072167 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:53 crc kubenswrapper[4985]: I0127 08:54:53.072184 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:53 crc kubenswrapper[4985]: I0127 08:54:53.072209 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:53 crc kubenswrapper[4985]: I0127 08:54:53.072224 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:53Z","lastTransitionTime":"2026-01-27T08:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:53 crc kubenswrapper[4985]: I0127 08:54:53.176243 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:53 crc kubenswrapper[4985]: I0127 08:54:53.176629 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:53 crc kubenswrapper[4985]: I0127 08:54:53.176885 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:53 crc kubenswrapper[4985]: I0127 08:54:53.177044 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:53 crc kubenswrapper[4985]: I0127 08:54:53.177206 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:53Z","lastTransitionTime":"2026-01-27T08:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:53 crc kubenswrapper[4985]: I0127 08:54:53.281031 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:53 crc kubenswrapper[4985]: I0127 08:54:53.281121 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:53 crc kubenswrapper[4985]: I0127 08:54:53.281149 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:53 crc kubenswrapper[4985]: I0127 08:54:53.281185 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:53 crc kubenswrapper[4985]: I0127 08:54:53.281210 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:53Z","lastTransitionTime":"2026-01-27T08:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:53 crc kubenswrapper[4985]: I0127 08:54:53.385139 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:53 crc kubenswrapper[4985]: I0127 08:54:53.385199 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:53 crc kubenswrapper[4985]: I0127 08:54:53.385211 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:53 crc kubenswrapper[4985]: I0127 08:54:53.385230 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:53 crc kubenswrapper[4985]: I0127 08:54:53.385242 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:53Z","lastTransitionTime":"2026-01-27T08:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:53 crc kubenswrapper[4985]: I0127 08:54:53.445835 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 09:26:48.610830401 +0000 UTC Jan 27 08:54:53 crc kubenswrapper[4985]: I0127 08:54:53.488436 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:53 crc kubenswrapper[4985]: I0127 08:54:53.488546 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:53 crc kubenswrapper[4985]: I0127 08:54:53.488566 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:53 crc kubenswrapper[4985]: I0127 08:54:53.488593 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:53 crc kubenswrapper[4985]: I0127 08:54:53.488612 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:53Z","lastTransitionTime":"2026-01-27T08:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:53 crc kubenswrapper[4985]: I0127 08:54:53.592622 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:53 crc kubenswrapper[4985]: I0127 08:54:53.593017 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:53 crc kubenswrapper[4985]: I0127 08:54:53.593121 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:53 crc kubenswrapper[4985]: I0127 08:54:53.593348 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:53 crc kubenswrapper[4985]: I0127 08:54:53.593440 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:53Z","lastTransitionTime":"2026-01-27T08:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:53 crc kubenswrapper[4985]: I0127 08:54:53.603916 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:53 crc kubenswrapper[4985]: I0127 08:54:53.604183 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:53 crc kubenswrapper[4985]: I0127 08:54:53.604298 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:53 crc kubenswrapper[4985]: I0127 08:54:53.604574 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:53 crc kubenswrapper[4985]: I0127 08:54:53.604696 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:53Z","lastTransitionTime":"2026-01-27T08:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:53 crc kubenswrapper[4985]: E0127 08:54:53.622713 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"095ded87-0bbb-47a5-b76f-f5bb300a00ab\\\",\\\"systemUUID\\\":\\\"66a0621c-9cbd-4c42-8f6a-941d6ebd53fb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:53Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:53 crc kubenswrapper[4985]: I0127 08:54:53.627577 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:53 crc kubenswrapper[4985]: I0127 08:54:53.627648 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:53 crc kubenswrapper[4985]: I0127 08:54:53.627663 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:53 crc kubenswrapper[4985]: I0127 08:54:53.627688 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:53 crc kubenswrapper[4985]: I0127 08:54:53.627702 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:53Z","lastTransitionTime":"2026-01-27T08:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:53 crc kubenswrapper[4985]: E0127 08:54:53.645390 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"095ded87-0bbb-47a5-b76f-f5bb300a00ab\\\",\\\"systemUUID\\\":\\\"66a0621c-9cbd-4c42-8f6a-941d6ebd53fb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:53Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:53 crc kubenswrapper[4985]: I0127 08:54:53.650548 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:53 crc kubenswrapper[4985]: I0127 08:54:53.650597 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:53 crc kubenswrapper[4985]: I0127 08:54:53.650608 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:53 crc kubenswrapper[4985]: I0127 08:54:53.650626 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:53 crc kubenswrapper[4985]: I0127 08:54:53.650640 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:53Z","lastTransitionTime":"2026-01-27T08:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:53 crc kubenswrapper[4985]: E0127 08:54:53.668195 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"095ded87-0bbb-47a5-b76f-f5bb300a00ab\\\",\\\"systemUUID\\\":\\\"66a0621c-9cbd-4c42-8f6a-941d6ebd53fb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:53Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:53 crc kubenswrapper[4985]: I0127 08:54:53.673742 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:53 crc kubenswrapper[4985]: I0127 08:54:53.673810 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:53 crc kubenswrapper[4985]: I0127 08:54:53.673822 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:53 crc kubenswrapper[4985]: I0127 08:54:53.673851 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:53 crc kubenswrapper[4985]: I0127 08:54:53.673863 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:53Z","lastTransitionTime":"2026-01-27T08:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:53 crc kubenswrapper[4985]: E0127 08:54:53.693448 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"095ded87-0bbb-47a5-b76f-f5bb300a00ab\\\",\\\"systemUUID\\\":\\\"66a0621c-9cbd-4c42-8f6a-941d6ebd53fb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:53Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:53 crc kubenswrapper[4985]: I0127 08:54:53.699498 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:53 crc kubenswrapper[4985]: I0127 08:54:53.699569 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:53 crc kubenswrapper[4985]: I0127 08:54:53.699579 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:53 crc kubenswrapper[4985]: I0127 08:54:53.699618 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:53 crc kubenswrapper[4985]: I0127 08:54:53.699630 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:53Z","lastTransitionTime":"2026-01-27T08:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:53 crc kubenswrapper[4985]: E0127 08:54:53.714937 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:54:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"095ded87-0bbb-47a5-b76f-f5bb300a00ab\\\",\\\"systemUUID\\\":\\\"66a0621c-9cbd-4c42-8f6a-941d6ebd53fb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:53Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:53 crc kubenswrapper[4985]: E0127 08:54:53.715063 4985 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 08:54:53 crc kubenswrapper[4985]: I0127 08:54:53.717293 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:53 crc kubenswrapper[4985]: I0127 08:54:53.717374 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:53 crc kubenswrapper[4985]: I0127 08:54:53.717388 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:53 crc kubenswrapper[4985]: I0127 08:54:53.717412 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:53 crc kubenswrapper[4985]: I0127 08:54:53.717426 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:53Z","lastTransitionTime":"2026-01-27T08:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:53 crc kubenswrapper[4985]: I0127 08:54:53.821339 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:53 crc kubenswrapper[4985]: I0127 08:54:53.821392 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:53 crc kubenswrapper[4985]: I0127 08:54:53.821436 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:53 crc kubenswrapper[4985]: I0127 08:54:53.821456 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:53 crc kubenswrapper[4985]: I0127 08:54:53.821469 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:53Z","lastTransitionTime":"2026-01-27T08:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:53 crc kubenswrapper[4985]: I0127 08:54:53.925843 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:53 crc kubenswrapper[4985]: I0127 08:54:53.925888 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:53 crc kubenswrapper[4985]: I0127 08:54:53.925899 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:53 crc kubenswrapper[4985]: I0127 08:54:53.925916 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:53 crc kubenswrapper[4985]: I0127 08:54:53.925927 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:53Z","lastTransitionTime":"2026-01-27T08:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:54 crc kubenswrapper[4985]: I0127 08:54:54.029500 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:54 crc kubenswrapper[4985]: I0127 08:54:54.030047 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:54 crc kubenswrapper[4985]: I0127 08:54:54.030266 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:54 crc kubenswrapper[4985]: I0127 08:54:54.030781 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:54 crc kubenswrapper[4985]: I0127 08:54:54.031313 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:54Z","lastTransitionTime":"2026-01-27T08:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:54 crc kubenswrapper[4985]: I0127 08:54:54.134501 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:54 crc kubenswrapper[4985]: I0127 08:54:54.135087 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:54 crc kubenswrapper[4985]: I0127 08:54:54.135251 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:54 crc kubenswrapper[4985]: I0127 08:54:54.135410 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:54 crc kubenswrapper[4985]: I0127 08:54:54.135583 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:54Z","lastTransitionTime":"2026-01-27T08:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:54 crc kubenswrapper[4985]: I0127 08:54:54.239466 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:54 crc kubenswrapper[4985]: I0127 08:54:54.239551 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:54 crc kubenswrapper[4985]: I0127 08:54:54.239567 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:54 crc kubenswrapper[4985]: I0127 08:54:54.239590 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:54 crc kubenswrapper[4985]: I0127 08:54:54.239605 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:54Z","lastTransitionTime":"2026-01-27T08:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:54 crc kubenswrapper[4985]: I0127 08:54:54.347795 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:54 crc kubenswrapper[4985]: I0127 08:54:54.347852 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:54 crc kubenswrapper[4985]: I0127 08:54:54.347870 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:54 crc kubenswrapper[4985]: I0127 08:54:54.347895 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:54 crc kubenswrapper[4985]: I0127 08:54:54.347913 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:54Z","lastTransitionTime":"2026-01-27T08:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:54 crc kubenswrapper[4985]: I0127 08:54:54.446927 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 14:58:49.253303134 +0000 UTC Jan 27 08:54:54 crc kubenswrapper[4985]: I0127 08:54:54.451002 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 08:54:54 crc kubenswrapper[4985]: E0127 08:54:54.451137 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 08:54:54 crc kubenswrapper[4985]: I0127 08:54:54.451790 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 08:54:54 crc kubenswrapper[4985]: I0127 08:54:54.451891 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 08:54:54 crc kubenswrapper[4985]: I0127 08:54:54.452098 4985 scope.go:117] "RemoveContainer" containerID="0db1aeddec065e7b000a8494a25dcd6d16757437a78af2c71dd554c15c74a826" Jan 27 08:54:54 crc kubenswrapper[4985]: E0127 08:54:54.452325 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 08:54:54 crc kubenswrapper[4985]: I0127 08:54:54.452884 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:54 crc kubenswrapper[4985]: I0127 08:54:54.452919 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:54 crc kubenswrapper[4985]: I0127 08:54:54.452930 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:54 crc kubenswrapper[4985]: I0127 08:54:54.452948 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:54 crc kubenswrapper[4985]: I0127 08:54:54.452959 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:54Z","lastTransitionTime":"2026-01-27T08:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:54 crc kubenswrapper[4985]: E0127 08:54:54.452973 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 08:54:54 crc kubenswrapper[4985]: I0127 08:54:54.453329 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-cscdv" Jan 27 08:54:54 crc kubenswrapper[4985]: E0127 08:54:54.453413 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cscdv" podUID="5c870945-eecc-4954-a91b-d02cef8f98e2" Jan 27 08:54:54 crc kubenswrapper[4985]: I0127 08:54:54.556149 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:54 crc kubenswrapper[4985]: I0127 08:54:54.556195 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:54 crc kubenswrapper[4985]: I0127 08:54:54.556209 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:54 crc kubenswrapper[4985]: I0127 08:54:54.556227 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:54 crc kubenswrapper[4985]: I0127 08:54:54.556241 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:54Z","lastTransitionTime":"2026-01-27T08:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:54 crc kubenswrapper[4985]: I0127 08:54:54.659541 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:54 crc kubenswrapper[4985]: I0127 08:54:54.659592 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:54 crc kubenswrapper[4985]: I0127 08:54:54.659605 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:54 crc kubenswrapper[4985]: I0127 08:54:54.659624 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:54 crc kubenswrapper[4985]: I0127 08:54:54.659637 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:54Z","lastTransitionTime":"2026-01-27T08:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:54 crc kubenswrapper[4985]: I0127 08:54:54.763332 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:54 crc kubenswrapper[4985]: I0127 08:54:54.763387 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:54 crc kubenswrapper[4985]: I0127 08:54:54.763398 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:54 crc kubenswrapper[4985]: I0127 08:54:54.763415 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:54 crc kubenswrapper[4985]: I0127 08:54:54.763427 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:54Z","lastTransitionTime":"2026-01-27T08:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:54 crc kubenswrapper[4985]: I0127 08:54:54.866865 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:54 crc kubenswrapper[4985]: I0127 08:54:54.866927 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:54 crc kubenswrapper[4985]: I0127 08:54:54.866939 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:54 crc kubenswrapper[4985]: I0127 08:54:54.866957 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:54 crc kubenswrapper[4985]: I0127 08:54:54.866969 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:54Z","lastTransitionTime":"2026-01-27T08:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:54 crc kubenswrapper[4985]: I0127 08:54:54.963252 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kqdf4_c6239c91-d93d-4db8-ac4b-d44ddbc7c100/ovnkube-controller/2.log" Jan 27 08:54:54 crc kubenswrapper[4985]: I0127 08:54:54.966909 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" event={"ID":"c6239c91-d93d-4db8-ac4b-d44ddbc7c100","Type":"ContainerStarted","Data":"bef202d5ab4adb68897c4c8da62be022e95cd949fa91b3f80e3df449f0d72181"} Jan 27 08:54:54 crc kubenswrapper[4985]: I0127 08:54:54.967313 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" Jan 27 08:54:54 crc kubenswrapper[4985]: I0127 08:54:54.968921 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:54 crc kubenswrapper[4985]: I0127 08:54:54.968955 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:54 crc kubenswrapper[4985]: I0127 08:54:54.968969 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:54 crc kubenswrapper[4985]: I0127 08:54:54.968988 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:54 crc kubenswrapper[4985]: I0127 08:54:54.968999 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:54Z","lastTransitionTime":"2026-01-27T08:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:54 crc kubenswrapper[4985]: I0127 08:54:54.992324 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c066dd2f-48d4-4f4f-935d-0e772678e610\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1beb432862ae348b405b1a51f1a187fd469f9df0605fa9e8a39b1c9a34ae8e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vtb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d6574971ded4a24364f08396c4f0523ddfd74f76d0bb386072ffb86c812d7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vtb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lp9n5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:54Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:55 crc kubenswrapper[4985]: I0127 08:54:55.010539 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlccz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ba17902-809a-4efc-9a8c-6f9b611c2af9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00cd6621bcfc77aeb9ecc4d6894b33cd3ef1f975b3c593e0eff812ef51978f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:55Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:55 crc kubenswrapper[4985]: I0127 08:54:55.024979 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46f88416-c54a-4ab5-a4d3-07f71bae9c33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d59e5d6d903bfaaedef392fdd94df2da1e4d965ddeb6b25d64b225725b39dd0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c207bc405194647411332c492e34b0c95d3321cf626a57a4a750fbcea34f54ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8ce06c7792357df5655ff0be13f13c8b06bb1a2eb27c4bbbff11b94a62f29e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7554f25de6e4649eafd52356c5c255d0a2dc73b0dd6c9a9bcea3ee383bef17db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:55Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:55 crc kubenswrapper[4985]: I0127 08:54:55.041105 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b46c56d522e6b6c9c18b7bd9fc104d6657644d0b41fdcad15ac9fb802ca5fd0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9506e391bd8c293440c55dc71a239b25d6d6b3431ad22c5ca699baa4d09c325b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:55Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:55 crc kubenswrapper[4985]: I0127 08:54:55.061427 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:55Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:55 crc kubenswrapper[4985]: I0127 08:54:55.072158 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:55 crc kubenswrapper[4985]: I0127 08:54:55.072210 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:55 crc 
kubenswrapper[4985]: I0127 08:54:55.072222 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:55 crc kubenswrapper[4985]: I0127 08:54:55.072243 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:55 crc kubenswrapper[4985]: I0127 08:54:55.072257 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:55Z","lastTransitionTime":"2026-01-27T08:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:55 crc kubenswrapper[4985]: I0127 08:54:55.082165 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed8d2575835b3ab3d3197bbfbc027e2de0e2aefe10c971838bf4b72f804c4582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T08:54:55Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:55 crc kubenswrapper[4985]: I0127 08:54:55.096941 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s74hq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad423d26-ea00-4a86-8eed-bba6433ce382\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://031493bfd9eba63a4627b6a0ec45bc556e8a6cae213a84f7b158e2bede2da5a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hg5xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42a6371f6e7f2b3811af6ec717f15eff6a85c8c39caf011e1173a4fcaf20f29a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hg5xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-s74hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:55Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:55 crc kubenswrapper[4985]: I0127 08:54:55.121082 4985 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7e08bcf-6937-4593-9736-170df821dd88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1dab42ffc04840ad2205b50c568ece39850188ccd9f97d8186c7f0b86e06805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7f7c69cc8bc134cfbf3552f10eb7abd8b5df115fef6ce86a2658259a9635bf9\\\",
\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0450f271c01ac528419de0165dd9b88b8d5913062a7cd5affce7050842f91a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b33bda4f575f8f14a3439f00a39eda2ee61f7b38500f5372c17d014df1098535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c502ce0c2374fb1b79cd8dc7e7c9eba4a67f998fee012827524d775eccbe4de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d423f866a41d6c8a926fbabe4f0a69519e10a9448502c6363b35cb550ba904d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d423f866a41d6c8a926fbabe4f0a69519e10a9448502c
6363b35cb550ba904d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://917f905b67f494d8d44f5a00e53ae72cb9c2b307f7e7da17af7fd20db4ed8704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://917f905b67f494d8d44f5a00e53ae72cb9c2b307f7e7da17af7fd20db4ed8704\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8692ba1b6e917a57164d549dba0fc26ad49b14b9bc8a39772daddfcd5f70a008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8692ba1b6e917a57164d549dba0fc26ad49b14b9bc8a39772daddfcd5f70a008\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:55Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:55 crc kubenswrapper[4985]: I0127 08:54:55.135277 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"797a9d27-06cc-44aa-811f-881bdf2d1e7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a5ce9f15fd3505c744967be012b7eed6d909724e9b71ba07d7e9d68eb40cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335
e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88c257e53b27d6dda5999a3053f9c62b54331bf034225c118dddfed685549827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87667099919fbe74e54396b3e8b538627769f2401d318326eb9a1d6a88bda640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\
\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3ad4e68bbb4b8338f534dc026ca9f1fe9fb161b29fee8945f1789a66965dea2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3ad4e68bbb4b8338f534dc026ca9f1fe9fb161b29fee8945f1789a66965dea2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:55Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:55 crc kubenswrapper[4985]: I0127 08:54:55.148361 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5z8px" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7997cb84-9997-4cf4-8794-2eb145a5c324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://520da40d4496e0fd47e4e091e9eba349e1bd3bf2f97898f6aac53a1b1b8925f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkwj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5z8px\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:55Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:55 crc kubenswrapper[4985]: I0127 08:54:55.166665 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rfnvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea1cd1b8-a185-461d-9302-aa03be205225\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67af065c12addbef44849906b964718074de1f0d7a0b87a028bf989ec28f82ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3459a9dccba77fc189ecac1d0186520dfbd4ce6ea40d9e5b2fdcf2ddabd7875d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3459a9dccba77fc189ecac1d0186520dfbd4ce6ea40d9e5b2fdcf2ddabd7875d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d313b9f0d95603fdc152e96acf5d788d7792df3413c3b8bc25aad20bb702c5d\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d313b9f0d95603fdc152e96acf5d788d7792df3413c3b8bc25aad20bb702c5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4035a5eff34b30f1fab897a8d8b6026437a1ab1f521f062241a79d070b94bdc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4035a5eff34b30f1fab897a8d8b6026437a1ab1f521f062241a79d070b94bdc1\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6da775953e78b30b1f606e26608dbedae8e08fb12eba87f65356bc439079f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6da775953e78b30b1f606e26608dbedae8e08fb12eba87f65356bc439079f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b32797fe9e64f0e4f0332fa40261fb0f
eb75b6dc783ff78b0aea5ccb0d4a9ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b32797fe9e64f0e4f0332fa40261fb0feb75b6dc783ff78b0aea5ccb0d4a9ee5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcaa036060edc039a0c9621176b80823cb9913176b1a1e6b06aa033a8e479b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcaa036060edc039a0c9621176b80823cb9913176b1a1e6b06aa033a8e479b96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-01-27T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rfnvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:55Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:55 crc kubenswrapper[4985]: I0127 08:54:55.175241 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:55 crc kubenswrapper[4985]: I0127 08:54:55.175298 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:55 crc kubenswrapper[4985]: I0127 08:54:55.175321 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:55 crc kubenswrapper[4985]: I0127 08:54:55.175342 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:55 crc kubenswrapper[4985]: I0127 08:54:55.175356 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:55Z","lastTransitionTime":"2026-01-27T08:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:55 crc kubenswrapper[4985]: I0127 08:54:55.182906 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a4e626ecb5f5c1f869a0271e66541b76d4ab763322e2d626c4d86af64bccad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:55Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:55 crc kubenswrapper[4985]: I0127 08:54:55.197052 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:55Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:55 crc kubenswrapper[4985]: I0127 08:54:55.217602 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cqdrf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ddda14a-730e-4c1f-afea-07c95221ba04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://611086eedd8a7318bff583bd65a81b3d4dd59b8be78744d6b5280bcbf9bd74b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c411afefb82d30542592e4959d2495036c0408459ba2a16aff75223ad0d29287\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T08:54:49Z\\\",\\\"message\\\":\\\"2026-01-27T08:54:04+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_51a03737-5946-4b3b-8d44-7172f998ced1\\\\n2026-01-27T08:54:04+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_51a03737-5946-4b3b-8d44-7172f998ced1 to /host/opt/cni/bin/\\\\n2026-01-27T08:54:04Z [verbose] multus-daemon started\\\\n2026-01-27T08:54:04Z [verbose] 
Readiness Indicator file check\\\\n2026-01-27T08:54:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-258cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cqdrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:55Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:55 crc kubenswrapper[4985]: I0127 08:54:55.233372 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9155a9e-2bdf-4f6f-baa7-0d7c6baac37d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f776820a00f7ca6b6867a188ee633debccb2bc22aaff8c659e452c2667933e6\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba179c4dde0de76f4a8dbfb8ceb98d2e0dbead5e955bd71171914fd913894412\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dd58dd369dc5654aa8ab2d34c77a69607b51e97ae36947b88caf6a2de7c5fbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-s
yncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd67389a909eaadfb147d17b98a2147bb96aea606ba994568389cb5be9ee6f93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b491f71d942363b5d2bab13a5219b0eb9b7f7617c0978bfe012946aa241a13c1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"message\\\":\\\" 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 08:53:55.866201 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 08:53:55.866247 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 08:53:55.866252 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 08:53:55.866257 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0127 08:53:55.866955 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" 
certName=\\\\\\\"serving-cert::/tmp/serving-cert-231041040/tls.crt::/tmp/serving-cert-231041040/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769504019\\\\\\\\\\\\\\\" (2026-01-27 08:53:39 +0000 UTC to 2026-02-26 08:53:40 +0000 UTC (now=2026-01-27 08:53:55.866923826 +0000 UTC))\\\\\\\"\\\\nI0127 08:53:55.867136 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769504030\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769504030\\\\\\\\\\\\\\\" (2026-01-27 07:53:50 +0000 UTC to 2027-01-27 07:53:50 +0000 UTC (now=2026-01-27 08:53:55.867111136 +0000 UTC))\\\\\\\"\\\\nI0127 08:53:55.867161 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 08:53:55.867183 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0127 08:53:55.867215 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-231041040/tls.crt::/tmp/serving-cert-231041040/tls.key\\\\\\\"\\\\nI0127 08:53:55.867343 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0127 08:53:55.868835 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b09da7bae5a0f15ab2c5463373193603d1412170828b99490007cee7b067df63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:55Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:55 crc kubenswrapper[4985]: I0127 08:54:55.246838 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:55Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:55 crc kubenswrapper[4985]: I0127 08:54:55.266844 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://740b7fd6437a51492236c929096639016d34d684d2742404a428355b3d9ee51e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e53f9d50e5099f9c55a0bf8a63c31bf7687c7b9df334251189af1c4c02df5dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b176f2daca0d26fc9e3d30a8547b5526c90221ce2932c01c876988e8606b4dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dbb8096184a88c573060b14b7b5cac9a5abdc5af97ad7efdc50ba216e77522c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f164f612d1435a7f2a6677cb810bc836a44faa77952909e4290e31806ae631ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bfc598056438a398f1ea20e047af950aa776a7e43bdb694728c1c138dff6440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bef202d5ab4adb68897c4c8da62be022e95cd949fa91b3f80e3df449f0d72181\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0db1aeddec065e7b000a8494a25dcd6d16757437a78af2c71dd554c15c74a826\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T08:54:27Z\\\",\\\"message\\\":\\\"oved *v1.Node event handler 7\\\\nI0127 08:54:27.563980 6617 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0127 08:54:27.564011 6617 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 08:54:27.564037 6617 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0127 08:54:27.564061 6617 handler.go:208] Removed *v1.Pod event 
handler 3\\\\nI0127 08:54:27.564190 6617 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0127 08:54:27.564242 6617 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 08:54:27.564306 6617 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0127 08:54:27.564311 6617 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 08:54:27.564322 6617 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 08:54:27.564351 6617 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 08:54:27.564376 6617 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 08:54:27.564383 6617 factory.go:656] Stopping watch factory\\\\nI0127 08:54:27.564403 6617 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 08:54:27.564427 6617 ovnkube.go:599] Stopped ovnkube\\\\nI0127 08:54:27.564493 6617 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 08:54:27.564598 6617 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\
\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a1f1e90a882fa697c344cb87db77d2f0b02b33920f6555ca549ad51b1fb5c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c1c716f751dd0e139554e4bef4fc33694f4afa5482605f651326882af8d99c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c1c716f751dd0e139554e4bef4fc33694f4afa5482605f651326882af8d99c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kqdf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:55Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:55 crc kubenswrapper[4985]: I0127 08:54:55.278157 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:55 crc kubenswrapper[4985]: I0127 08:54:55.278400 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:55 crc kubenswrapper[4985]: I0127 08:54:55.278412 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:55 crc kubenswrapper[4985]: I0127 08:54:55.278432 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:55 crc kubenswrapper[4985]: I0127 08:54:55.278443 4985 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:55Z","lastTransitionTime":"2026-01-27T08:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:55 crc kubenswrapper[4985]: I0127 08:54:55.281621 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cscdv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c870945-eecc-4954-a91b-d02cef8f98e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k24x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k24x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cscdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:55Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:55 crc 
kubenswrapper[4985]: I0127 08:54:55.382196 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:55 crc kubenswrapper[4985]: I0127 08:54:55.382270 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:55 crc kubenswrapper[4985]: I0127 08:54:55.382287 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:55 crc kubenswrapper[4985]: I0127 08:54:55.382314 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:55 crc kubenswrapper[4985]: I0127 08:54:55.382333 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:55Z","lastTransitionTime":"2026-01-27T08:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:55 crc kubenswrapper[4985]: I0127 08:54:55.448009 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 09:16:15.300759086 +0000 UTC Jan 27 08:54:55 crc kubenswrapper[4985]: I0127 08:54:55.485811 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:55 crc kubenswrapper[4985]: I0127 08:54:55.485873 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:55 crc kubenswrapper[4985]: I0127 08:54:55.485883 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:55 crc kubenswrapper[4985]: I0127 08:54:55.485905 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:55 crc kubenswrapper[4985]: I0127 08:54:55.485917 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:55Z","lastTransitionTime":"2026-01-27T08:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:55 crc kubenswrapper[4985]: I0127 08:54:55.589480 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:55 crc kubenswrapper[4985]: I0127 08:54:55.589560 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:55 crc kubenswrapper[4985]: I0127 08:54:55.589573 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:55 crc kubenswrapper[4985]: I0127 08:54:55.589593 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:55 crc kubenswrapper[4985]: I0127 08:54:55.589605 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:55Z","lastTransitionTime":"2026-01-27T08:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:55 crc kubenswrapper[4985]: I0127 08:54:55.692415 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:55 crc kubenswrapper[4985]: I0127 08:54:55.692469 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:55 crc kubenswrapper[4985]: I0127 08:54:55.692484 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:55 crc kubenswrapper[4985]: I0127 08:54:55.692505 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:55 crc kubenswrapper[4985]: I0127 08:54:55.692542 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:55Z","lastTransitionTime":"2026-01-27T08:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:55 crc kubenswrapper[4985]: I0127 08:54:55.795942 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:55 crc kubenswrapper[4985]: I0127 08:54:55.796025 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:55 crc kubenswrapper[4985]: I0127 08:54:55.796049 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:55 crc kubenswrapper[4985]: I0127 08:54:55.796083 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:55 crc kubenswrapper[4985]: I0127 08:54:55.796111 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:55Z","lastTransitionTime":"2026-01-27T08:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:55 crc kubenswrapper[4985]: I0127 08:54:55.899108 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:55 crc kubenswrapper[4985]: I0127 08:54:55.899147 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:55 crc kubenswrapper[4985]: I0127 08:54:55.899156 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:55 crc kubenswrapper[4985]: I0127 08:54:55.899200 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:55 crc kubenswrapper[4985]: I0127 08:54:55.899211 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:55Z","lastTransitionTime":"2026-01-27T08:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:55 crc kubenswrapper[4985]: I0127 08:54:55.973549 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kqdf4_c6239c91-d93d-4db8-ac4b-d44ddbc7c100/ovnkube-controller/3.log" Jan 27 08:54:55 crc kubenswrapper[4985]: I0127 08:54:55.974288 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kqdf4_c6239c91-d93d-4db8-ac4b-d44ddbc7c100/ovnkube-controller/2.log" Jan 27 08:54:55 crc kubenswrapper[4985]: I0127 08:54:55.978216 4985 generic.go:334] "Generic (PLEG): container finished" podID="c6239c91-d93d-4db8-ac4b-d44ddbc7c100" containerID="bef202d5ab4adb68897c4c8da62be022e95cd949fa91b3f80e3df449f0d72181" exitCode=1 Jan 27 08:54:55 crc kubenswrapper[4985]: I0127 08:54:55.978278 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" event={"ID":"c6239c91-d93d-4db8-ac4b-d44ddbc7c100","Type":"ContainerDied","Data":"bef202d5ab4adb68897c4c8da62be022e95cd949fa91b3f80e3df449f0d72181"} Jan 27 08:54:55 crc kubenswrapper[4985]: I0127 08:54:55.978334 4985 scope.go:117] "RemoveContainer" containerID="0db1aeddec065e7b000a8494a25dcd6d16757437a78af2c71dd554c15c74a826" Jan 27 08:54:55 crc kubenswrapper[4985]: I0127 08:54:55.979773 4985 scope.go:117] "RemoveContainer" containerID="bef202d5ab4adb68897c4c8da62be022e95cd949fa91b3f80e3df449f0d72181" Jan 27 08:54:55 crc kubenswrapper[4985]: E0127 08:54:55.980085 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-kqdf4_openshift-ovn-kubernetes(c6239c91-d93d-4db8-ac4b-d44ddbc7c100)\"" pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" podUID="c6239c91-d93d-4db8-ac4b-d44ddbc7c100" Jan 27 08:54:56 crc kubenswrapper[4985]: I0127 08:54:56.001599 4985 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:56 crc kubenswrapper[4985]: I0127 08:54:56.001920 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:56 crc kubenswrapper[4985]: I0127 08:54:56.002100 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:56 crc kubenswrapper[4985]: I0127 08:54:56.002194 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:56 crc kubenswrapper[4985]: I0127 08:54:56.002285 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:56Z","lastTransitionTime":"2026-01-27T08:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:56 crc kubenswrapper[4985]: I0127 08:54:56.010172 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7e08bcf-6937-4593-9736-170df821dd88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1dab42ffc04840ad2205b50c568ece39850188ccd9f97d8186c7f0b86e06805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7f7c69cc8bc134cfbf3552f10eb7abd8b5df115fef6ce86a2658259a9635bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0450f271c01ac528419de0165dd9b88b8d5913062a7cd5affce7050842f91a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b33bda4f575f8f14a3439f00a39eda2ee61f7b38500f5372c17d014df1098535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c502ce0c2374fb1b79cd8dc7e7c9eba4a67f998fee012827524d775eccbe4de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d423f866a41d6c8a926fbabe4f0a69519e10a9448502c6363b35cb550ba904d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d423f866a41d6c8a926fbabe4f0a69519e10a9448502c6363b35cb550ba904d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://917f905b67f494d8d44f5a00e53ae72cb9c2b307f7e7da17af7fd20db4ed8704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://917f905b67f494d8d44f5a00e53ae72cb9c2b307f7e7da17af7fd20db4ed8704\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8692ba1b6e917a57164d549dba0fc26ad49b14b9bc8a39772daddfcd5f70a008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8692ba1b6e917a57164d549dba0fc26ad49b14b9bc8a39772daddfcd5f70a008\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-27T08:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:56Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:56 crc kubenswrapper[4985]: I0127 08:54:56.028607 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"797a9d27-06cc-44aa-811f-881bdf2d1e7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a5ce9f15fd3505c744967be012b7eed6d909724e9b71ba07d7e9d68eb40cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88c257e53b27d6dda5999a3053f9c62b54331bf034225c118dddfed685549827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87667099919fbe74e54396b3e8b538627769f2401d318326eb9a1d6a88bda640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3ad4e68bbb4b8338f534dc026ca9f1fe9fb161b29fee8945f1789a66965dea2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://c3ad4e68bbb4b8338f534dc026ca9f1fe9fb161b29fee8945f1789a66965dea2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:56Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:56 crc kubenswrapper[4985]: I0127 08:54:56.046658 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5z8px" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7997cb84-9997-4cf4-8794-2eb145a5c324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://520da40d
4496e0fd47e4e091e9eba349e1bd3bf2f97898f6aac53a1b1b8925f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkwj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5z8px\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:56Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:56 crc kubenswrapper[4985]: I0127 08:54:56.063469 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rfnvj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea1cd1b8-a185-461d-9302-aa03be205225\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67af065c12addbef44849906b964718074de1f0d7a0b87a028bf989ec28f82ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3459a9dccba77fc189ecac1d0186520dfbd4ce6ea40d9e5b2fdcf2ddabd7875d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3459a9dccba77fc189ecac1d0186520dfbd4ce6ea40d9e5b2fdcf2ddabd7875d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d313b9f0d95603fdc152e96acf5d788d7792df3413c3b8bc25aad20bb702c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d313b9f0d95603fdc152e96acf5d788d7792df3413c3b8bc25aad20bb702c5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:04Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4035a5eff34b30f1fab897a8d8b6026437a1ab1f521f062241a79d070b94bdc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4035a5eff34b30f1fab897a8d8b6026437a1ab1f521f062241a79d070b94bdc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6da7
75953e78b30b1f606e26608dbedae8e08fb12eba87f65356bc439079f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6da775953e78b30b1f606e26608dbedae8e08fb12eba87f65356bc439079f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b32797fe9e64f0e4f0332fa40261fb0feb75b6dc783ff78b0aea5ccb0d4a9ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b32797fe9e64f0e4f0332fa40261fb0feb75b6dc783ff78b0aea5ccb0d4a9ee5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:08Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcaa036060edc039a0c9621176b80823cb9913176b1a1e6b06aa033a8e479b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcaa036060edc039a0c9621176b80823cb9913176b1a1e6b06aa033a8e479b96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rfnvj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:56Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:56 crc kubenswrapper[4985]: I0127 08:54:56.078019 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s74hq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad423d26-ea00-4a86-8eed-bba6433ce382\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://031493bfd9eba63a4627b6a0ec45bc556e8a6cae213a84f7b158e2bede2da5a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hg5xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42a6371f6e7f2b3811af6ec717f15eff6a85c8c39caf011e1173a4fcaf20f29a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hg5xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-s74hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-27T08:54:56Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:56 crc kubenswrapper[4985]: I0127 08:54:56.099687 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a4e626ecb5f5c1f869a0271e66541b76d4ab763322e2d626c4d86af64bccad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for 
pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:56Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:56 crc kubenswrapper[4985]: I0127 08:54:56.104920 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:56 crc kubenswrapper[4985]: I0127 08:54:56.104957 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:56 crc kubenswrapper[4985]: I0127 08:54:56.104965 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:56 crc kubenswrapper[4985]: I0127 08:54:56.104983 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:56 crc kubenswrapper[4985]: I0127 08:54:56.104994 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:56Z","lastTransitionTime":"2026-01-27T08:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:56 crc kubenswrapper[4985]: I0127 08:54:56.118207 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:56Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:56 crc kubenswrapper[4985]: I0127 08:54:56.137774 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cqdrf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ddda14a-730e-4c1f-afea-07c95221ba04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://611086eedd8a7318bff583bd65a81b3d4dd59b8be78744d6b5280bcbf9bd74b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c411afefb82d30542592e4959d2495036c0408459ba2a16aff75223ad0d29287\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T08:54:49Z\\\",\\\"message\\\":\\\"2026-01-27T08:54:04+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_51a03737-5946-4b3b-8d44-7172f998ced1\\\\n2026-01-27T08:54:04+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_51a03737-5946-4b3b-8d44-7172f998ced1 to /host/opt/cni/bin/\\\\n2026-01-27T08:54:04Z [verbose] multus-daemon started\\\\n2026-01-27T08:54:04Z [verbose] 
Readiness Indicator file check\\\\n2026-01-27T08:54:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-258cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cqdrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:56Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:56 crc kubenswrapper[4985]: I0127 08:54:56.156570 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9155a9e-2bdf-4f6f-baa7-0d7c6baac37d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f776820a00f7ca6b6867a188ee633debccb2bc22aaff8c659e452c2667933e6\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba179c4dde0de76f4a8dbfb8ceb98d2e0dbead5e955bd71171914fd913894412\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dd58dd369dc5654aa8ab2d34c77a69607b51e97ae36947b88caf6a2de7c5fbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-s
yncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd67389a909eaadfb147d17b98a2147bb96aea606ba994568389cb5be9ee6f93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b491f71d942363b5d2bab13a5219b0eb9b7f7617c0978bfe012946aa241a13c1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"message\\\":\\\" 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 08:53:55.866201 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 08:53:55.866247 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 08:53:55.866252 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 08:53:55.866257 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0127 08:53:55.866955 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" 
certName=\\\\\\\"serving-cert::/tmp/serving-cert-231041040/tls.crt::/tmp/serving-cert-231041040/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769504019\\\\\\\\\\\\\\\" (2026-01-27 08:53:39 +0000 UTC to 2026-02-26 08:53:40 +0000 UTC (now=2026-01-27 08:53:55.866923826 +0000 UTC))\\\\\\\"\\\\nI0127 08:53:55.867136 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769504030\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769504030\\\\\\\\\\\\\\\" (2026-01-27 07:53:50 +0000 UTC to 2027-01-27 07:53:50 +0000 UTC (now=2026-01-27 08:53:55.867111136 +0000 UTC))\\\\\\\"\\\\nI0127 08:53:55.867161 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 08:53:55.867183 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0127 08:53:55.867215 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-231041040/tls.crt::/tmp/serving-cert-231041040/tls.key\\\\\\\"\\\\nI0127 08:53:55.867343 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0127 08:53:55.868835 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b09da7bae5a0f15ab2c5463373193603d1412170828b99490007cee7b067df63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:56Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:56 crc kubenswrapper[4985]: I0127 08:54:56.170795 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:56Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:56 crc kubenswrapper[4985]: I0127 08:54:56.193048 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://740b7fd6437a51492236c929096639016d34d684d2742404a428355b3d9ee51e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e53f9d50e5099f9c55a0bf8a63c31bf7687c7b9df334251189af1c4c02df5dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b176f2daca0d26fc9e3d30a8547b5526c90221ce2932c01c876988e8606b4dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dbb8096184a88c573060b14b7b5cac9a5abdc5af97ad7efdc50ba216e77522c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f164f612d1435a7f2a6677cb810bc836a44faa77952909e4290e31806ae631ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bfc598056438a398f1ea20e047af950aa776a7e43bdb694728c1c138dff6440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bef202d5ab4adb68897c4c8da62be022e95cd949fa91b3f80e3df449f0d72181\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0db1aeddec065e7b000a8494a25dcd6d16757437a78af2c71dd554c15c74a826\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T08:54:27Z\\\",\\\"message\\\":\\\"oved *v1.Node event handler 7\\\\nI0127 08:54:27.563980 6617 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0127 08:54:27.564011 6617 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 08:54:27.564037 6617 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0127 08:54:27.564061 6617 handler.go:208] Removed *v1.Pod event 
handler 3\\\\nI0127 08:54:27.564190 6617 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0127 08:54:27.564242 6617 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 08:54:27.564306 6617 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0127 08:54:27.564311 6617 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 08:54:27.564322 6617 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 08:54:27.564351 6617 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 08:54:27.564376 6617 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 08:54:27.564383 6617 factory.go:656] Stopping watch factory\\\\nI0127 08:54:27.564403 6617 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 08:54:27.564427 6617 ovnkube.go:599] Stopped ovnkube\\\\nI0127 08:54:27.564493 6617 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 08:54:27.564598 6617 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bef202d5ab4adb68897c4c8da62be022e95cd949fa91b3f80e3df449f0d72181\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T08:54:55Z\\\",\\\"message\\\":\\\"server-manager-metrics for network=default\\\\nI0127 08:54:55.394238 6962 obj_retry.go:303] Retry object setup: *v1.Pod openshift-etcd/etcd-crc\\\\nI0127 08:54:55.394223 6962 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-lp9n5\\\\nI0127 08:54:55.394254 6962 obj_retry.go:365] Adding new object: *v1.Pod openshift-etcd/etcd-crc\\\\nF0127 08:54:55.394259 6962 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:55Z is after 2025-08-24T17:21:41Z]\\\\nI0127 08:54:55.394268 6962 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-lp9n5 in no\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\
\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a1f1e90a882fa697c344cb87db77d2f0b02b33920f6555ca549ad51b1fb5c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kub
e-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c1c716f751dd0e139554e4bef4fc33694f4afa5482605f651326882af8d99c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c1c716f751dd0e139554e4bef4fc33694f4afa5482605f651326882af8d99c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kqdf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:56Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:56 crc kubenswrapper[4985]: I0127 08:54:56.208159 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 08:54:56 crc kubenswrapper[4985]: I0127 08:54:56.208214 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:56 crc kubenswrapper[4985]: I0127 08:54:56.208224 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:56 crc kubenswrapper[4985]: I0127 08:54:56.208241 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:56 crc kubenswrapper[4985]: I0127 08:54:56.208253 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:56Z","lastTransitionTime":"2026-01-27T08:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:56 crc kubenswrapper[4985]: I0127 08:54:56.209076 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cscdv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c870945-eecc-4954-a91b-d02cef8f98e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k24x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k24x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cscdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:56Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:56 crc 
kubenswrapper[4985]: I0127 08:54:56.223832 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46f88416-c54a-4ab5-a4d3-07f71bae9c33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d59e5d6d903bfaaedef392fdd94df2da1e4d965ddeb6b25d64b225725b39dd0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c207bc405194647411332c492e34b0c95d3321cf626a57a4a750fbcea34f54ec\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8ce06c7792357df5655ff0be13f13c8b06bb1a2eb27c4bbbff11b94a62f29e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7554f25de6e4649eafd52356c5c255d0a2dc73b0dd6c9a9bcea3ee383bef17db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:56Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:56 crc kubenswrapper[4985]: I0127 08:54:56.241860 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b46c56d522e6b6c9c18b7bd9fc104d6657644d0b41fdcad15ac9fb802ca5fd0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9506e391bd8c293440c55dc71a239b25d6d6b3431ad22c5ca699baa4d09c325b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:56Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:56 crc kubenswrapper[4985]: I0127 08:54:56.256339 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:56Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:56 crc kubenswrapper[4985]: I0127 08:54:56.272567 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed8d2575835b3ab3d3197bbfbc027e2de0e2aefe10c971838bf4b72f804c4582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T08:54:56Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:56 crc kubenswrapper[4985]: I0127 08:54:56.287311 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c066dd2f-48d4-4f4f-935d-0e772678e610\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1beb432862ae348b405b1a51f1a187fd469f9df0605fa9e8a39b1c9a34ae8e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vtb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d6574971ded4a24364f08396c4f0523ddfd74f76d0bb386072ffb86c812d7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vtb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lp9n5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:56Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:56 crc kubenswrapper[4985]: I0127 08:54:56.298766 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlccz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ba17902-809a-4efc-9a8c-6f9b611c2af9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00cd6621bcfc77aeb9ecc4d6894b33cd3ef1f975b3c593e0eff812ef51978f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:56Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:56 crc kubenswrapper[4985]: I0127 08:54:56.311724 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:56 crc kubenswrapper[4985]: I0127 08:54:56.311777 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:56 crc kubenswrapper[4985]: I0127 08:54:56.311791 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:56 crc kubenswrapper[4985]: I0127 08:54:56.311811 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:56 crc kubenswrapper[4985]: I0127 08:54:56.311825 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:56Z","lastTransitionTime":"2026-01-27T08:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:56 crc kubenswrapper[4985]: I0127 08:54:56.415986 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:56 crc kubenswrapper[4985]: I0127 08:54:56.416029 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:56 crc kubenswrapper[4985]: I0127 08:54:56.416041 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:56 crc kubenswrapper[4985]: I0127 08:54:56.416059 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:56 crc kubenswrapper[4985]: I0127 08:54:56.416072 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:56Z","lastTransitionTime":"2026-01-27T08:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:56 crc kubenswrapper[4985]: I0127 08:54:56.448731 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 09:50:26.522266051 +0000 UTC Jan 27 08:54:56 crc kubenswrapper[4985]: I0127 08:54:56.451217 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cscdv" Jan 27 08:54:56 crc kubenswrapper[4985]: I0127 08:54:56.451276 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 08:54:56 crc kubenswrapper[4985]: I0127 08:54:56.451413 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 08:54:56 crc kubenswrapper[4985]: E0127 08:54:56.451603 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cscdv" podUID="5c870945-eecc-4954-a91b-d02cef8f98e2" Jan 27 08:54:56 crc kubenswrapper[4985]: I0127 08:54:56.451948 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 08:54:56 crc kubenswrapper[4985]: E0127 08:54:56.452097 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 08:54:56 crc kubenswrapper[4985]: E0127 08:54:56.452171 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 08:54:56 crc kubenswrapper[4985]: E0127 08:54:56.452282 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 08:54:56 crc kubenswrapper[4985]: I0127 08:54:56.479159 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://740b7fd6437a51492236c929096639016d34d684d2742404a428355b3d9ee51e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e53f9d50e5099f9c55a0bf8a63c31bf7687c7b9df334251189af1c4c02df5dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b176f2daca0d26fc9e3d30a8547b5526c90221ce2932c01c876988e8606b4dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dbb8096184a88c573060b14b7b5cac9a5abdc5af97ad7efdc50ba216e77522c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f164f612d1435a7f2a6677cb810bc836a44faa77952909e4290e31806ae631ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bfc598056438a398f1ea20e047af950aa776a7e43bdb694728c1c138dff6440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bef202d5ab4adb68897c4c8da62be022e95cd949fa91b3f80e3df449f0d72181\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0db1aeddec065e7b000a8494a25dcd6d16757437a78af2c71dd554c15c74a826\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T08:54:27Z\\\",\\\"message\\\":\\\"oved *v1.Node event handler 7\\\\nI0127 08:54:27.563980 6617 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0127 08:54:27.564011 6617 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 08:54:27.564037 6617 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0127 08:54:27.564061 6617 handler.go:208] Removed *v1.Pod event 
handler 3\\\\nI0127 08:54:27.564190 6617 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0127 08:54:27.564242 6617 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 08:54:27.564306 6617 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0127 08:54:27.564311 6617 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 08:54:27.564322 6617 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 08:54:27.564351 6617 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 08:54:27.564376 6617 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 08:54:27.564383 6617 factory.go:656] Stopping watch factory\\\\nI0127 08:54:27.564403 6617 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 08:54:27.564427 6617 ovnkube.go:599] Stopped ovnkube\\\\nI0127 08:54:27.564493 6617 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 08:54:27.564598 6617 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bef202d5ab4adb68897c4c8da62be022e95cd949fa91b3f80e3df449f0d72181\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T08:54:55Z\\\",\\\"message\\\":\\\"server-manager-metrics for network=default\\\\nI0127 08:54:55.394238 6962 obj_retry.go:303] Retry object setup: *v1.Pod openshift-etcd/etcd-crc\\\\nI0127 08:54:55.394223 6962 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-lp9n5\\\\nI0127 08:54:55.394254 6962 obj_retry.go:365] Adding new object: *v1.Pod openshift-etcd/etcd-crc\\\\nF0127 08:54:55.394259 6962 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:55Z is after 2025-08-24T17:21:41Z]\\\\nI0127 08:54:55.394268 6962 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-lp9n5 in no\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\
\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a1f1e90a882fa697c344cb87db77d2f0b02b33920f6555ca549ad51b1fb5c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kub
e-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c1c716f751dd0e139554e4bef4fc33694f4afa5482605f651326882af8d99c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c1c716f751dd0e139554e4bef4fc33694f4afa5482605f651326882af8d99c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kqdf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:56Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:56 crc kubenswrapper[4985]: I0127 08:54:56.491216 4985 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-cscdv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c870945-eecc-4954-a91b-d02cef8f98e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k24x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k24x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cscdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:56Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:56 crc 
kubenswrapper[4985]: I0127 08:54:56.511625 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9155a9e-2bdf-4f6f-baa7-0d7c6baac37d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f776820a00f7ca6b6867a188ee633debccb2bc22aaff8c659e452c2667933e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba179c4dde0de7
6f4a8dbfb8ceb98d2e0dbead5e955bd71171914fd913894412\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dd58dd369dc5654aa8ab2d34c77a69607b51e97ae36947b88caf6a2de7c5fbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd67389a909eaadfb147d17b98a2147bb96aea606ba994568389cb5be9ee6f93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://b491f71d942363b5d2bab13a5219b0eb9b7f7617c0978bfe012946aa241a13c1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"message\\\":\\\" 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 08:53:55.866201 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 08:53:55.866247 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 08:53:55.866252 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 08:53:55.866257 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0127 08:53:55.866955 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-231041040/tls.crt::/tmp/serving-cert-231041040/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769504019\\\\\\\\\\\\\\\" (2026-01-27 08:53:39 +0000 UTC to 2026-02-26 08:53:40 +0000 UTC (now=2026-01-27 08:53:55.866923826 +0000 UTC))\\\\\\\"\\\\nI0127 08:53:55.867136 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769504030\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769504030\\\\\\\\\\\\\\\" (2026-01-27 07:53:50 +0000 UTC to 2027-01-27 07:53:50 +0000 UTC (now=2026-01-27 08:53:55.867111136 +0000 UTC))\\\\\\\"\\\\nI0127 08:53:55.867161 1 
secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 08:53:55.867183 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0127 08:53:55.867215 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-231041040/tls.crt::/tmp/serving-cert-231041040/tls.key\\\\\\\"\\\\nI0127 08:53:55.867343 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0127 08:53:55.868835 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b09da7bae5a0f15ab2c5463373193603d1412170828b99490007cee7b067df63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc3
5825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:56Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:56 crc kubenswrapper[4985]: I0127 08:54:56.519584 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:56 crc kubenswrapper[4985]: I0127 08:54:56.519660 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:56 crc kubenswrapper[4985]: I0127 08:54:56.519732 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:56 crc kubenswrapper[4985]: I0127 08:54:56.519772 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:56 crc kubenswrapper[4985]: I0127 
08:54:56.519799 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:56Z","lastTransitionTime":"2026-01-27T08:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:56 crc kubenswrapper[4985]: I0127 08:54:56.533843 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:56Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:56 crc kubenswrapper[4985]: I0127 08:54:56.561976 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:56Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:56 crc kubenswrapper[4985]: I0127 08:54:56.581294 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed8d2575835b3ab3d3197bbfbc027e2de0e2aefe10c971838bf4b72f804c4582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T08:54:56Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:56 crc kubenswrapper[4985]: I0127 08:54:56.598275 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c066dd2f-48d4-4f4f-935d-0e772678e610\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1beb432862ae348b405b1a51f1a187fd469f9df0605fa9e8a39b1c9a34ae8e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vtb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d6574971ded4a24364f08396c4f0523ddfd74f76d0bb386072ffb86c812d7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vtb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lp9n5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:56Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:56 crc kubenswrapper[4985]: I0127 08:54:56.614175 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlccz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ba17902-809a-4efc-9a8c-6f9b611c2af9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00cd6621bcfc77aeb9ecc4d6894b33cd3ef1f975b3c593e0eff812ef51978f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:56Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:56 crc kubenswrapper[4985]: I0127 08:54:56.623115 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:56 crc kubenswrapper[4985]: I0127 08:54:56.623173 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:56 crc kubenswrapper[4985]: I0127 08:54:56.623187 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:56 crc kubenswrapper[4985]: I0127 08:54:56.623206 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:56 crc kubenswrapper[4985]: I0127 08:54:56.623219 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:56Z","lastTransitionTime":"2026-01-27T08:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:56 crc kubenswrapper[4985]: I0127 08:54:56.634212 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46f88416-c54a-4ab5-a4d3-07f71bae9c33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d59e5d6d903bfaaedef392fdd94df2da1e4d965ddeb6b25d64b225725b39dd0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c207bc40519
4647411332c492e34b0c95d3321cf626a57a4a750fbcea34f54ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8ce06c7792357df5655ff0be13f13c8b06bb1a2eb27c4bbbff11b94a62f29e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7554f25de6e4649eafd52356c5c255d0a2dc73b0dd6c9a9bcea3ee383bef17db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:56Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:56 crc kubenswrapper[4985]: I0127 08:54:56.652394 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b46c56d522e6b6c9c18b7bd9fc104d6657644d0b41fdcad15ac9fb802ca5fd0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9506e391bd8c293440c55dc71a239b25d6d6b3431ad22c5ca699baa4d09c325b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:56Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:56 crc kubenswrapper[4985]: I0127 08:54:56.667648 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5z8px" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7997cb84-9997-4cf4-8794-2eb145a5c324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://520da40d4496e0fd47e4e091e9eba349e1bd3bf2f97898f6aac53a1b1b8925f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkwj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5z8px\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:56Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:56 crc kubenswrapper[4985]: I0127 08:54:56.693312 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rfnvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea1cd1b8-a185-461d-9302-aa03be205225\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67af065c12addbef44849906b964718074de1f0d7a0b87a028bf989ec28f82ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3459a9dccba77fc189ecac1d0186520dfbd4ce6ea40d9e5b2fdcf2ddabd7875d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3459a9dccba77fc189ecac1d0186520dfbd4ce6ea40d9e5b2fdcf2ddabd7875d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d313b9f0d95603fdc152e96acf5d788d7792df3413c3b8bc25aad20bb702c5d\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d313b9f0d95603fdc152e96acf5d788d7792df3413c3b8bc25aad20bb702c5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4035a5eff34b30f1fab897a8d8b6026437a1ab1f521f062241a79d070b94bdc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4035a5eff34b30f1fab897a8d8b6026437a1ab1f521f062241a79d070b94bdc1\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6da775953e78b30b1f606e26608dbedae8e08fb12eba87f65356bc439079f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6da775953e78b30b1f606e26608dbedae8e08fb12eba87f65356bc439079f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b32797fe9e64f0e4f0332fa40261fb0f
eb75b6dc783ff78b0aea5ccb0d4a9ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b32797fe9e64f0e4f0332fa40261fb0feb75b6dc783ff78b0aea5ccb0d4a9ee5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcaa036060edc039a0c9621176b80823cb9913176b1a1e6b06aa033a8e479b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcaa036060edc039a0c9621176b80823cb9913176b1a1e6b06aa033a8e479b96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-01-27T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rfnvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:56Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:56 crc kubenswrapper[4985]: I0127 08:54:56.716906 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s74hq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad423d26-ea00-4a86-8eed-bba6433ce382\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://031493bfd9eba63a4627b6a0ec45bc556e8a6cae213a84f7b158e2bede2da5a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hg5xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42a6371f6e7f2b3811af6ec717f15eff6a85c
8c39caf011e1173a4fcaf20f29a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hg5xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-s74hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:56Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:56 crc kubenswrapper[4985]: I0127 08:54:56.726194 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:56 crc kubenswrapper[4985]: I0127 08:54:56.726253 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:56 crc kubenswrapper[4985]: I0127 08:54:56.726267 4985 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:56 crc kubenswrapper[4985]: I0127 08:54:56.726290 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:56 crc kubenswrapper[4985]: I0127 08:54:56.726304 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:56Z","lastTransitionTime":"2026-01-27T08:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:56 crc kubenswrapper[4985]: I0127 08:54:56.754334 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7e08bcf-6937-4593-9736-170df821dd88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1dab42ffc04840ad2205b50c568ece39850188ccd9f97d8186c7f0b86e06805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e
33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7f7c69cc8bc134cfbf3552f10eb7abd8b5df115fef6ce86a2658259a9635bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0450f271c01ac528419de0165dd9b88b8d5913062a7cd5affce7050842f91a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866
be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b33bda4f575f8f14a3439f00a39eda2ee61f7b38500f5372c17d014df1098535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c502ce0c2374fb1b79cd8dc7e7c9eba4a67f998fee012827524d775eccbe4de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/stati
c-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d423f866a41d6c8a926fbabe4f0a69519e10a9448502c6363b35cb550ba904d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d423f866a41d6c8a926fbabe4f0a69519e10a9448502c6363b35cb550ba904d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://917f905b67f494d8d44f5a00e53ae72cb9c2b307f7e7da17af7fd20db4ed8704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://917f905b67f494d8d44f5a00e53ae72cb9c2b307f7e7da17af7fd20db4ed8704\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8692ba1b6e917a57164d549dba0fc26ad49b14b9bc8a
39772daddfcd5f70a008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8692ba1b6e917a57164d549dba0fc26ad49b14b9bc8a39772daddfcd5f70a008\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:56Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:56 crc kubenswrapper[4985]: I0127 08:54:56.777197 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"797a9d27-06cc-44aa-811f-881bdf2d1e7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a5ce9f15fd3505c744967be012b7eed6d909724e9b71ba07d7e9d68eb40cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88c257e53b27d6dda5999a3053f9c62b54331bf034225c118dddfed685549827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87667099919fbe74e54396b3e8b538627769f2401d318326eb9a1d6a88bda640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3ad4e68bbb4b8338f534dc026ca9f1fe9fb161b29fee8945f1789a66965dea2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://c3ad4e68bbb4b8338f534dc026ca9f1fe9fb161b29fee8945f1789a66965dea2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:56Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:56 crc kubenswrapper[4985]: I0127 08:54:56.795178 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:56Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:56 crc kubenswrapper[4985]: I0127 08:54:56.817176 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cqdrf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ddda14a-730e-4c1f-afea-07c95221ba04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://611086eedd8a7318bff583bd65a81b3d4dd59b8be78744d6b5280bcbf9bd74b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c411afefb82d30542592e4959d2495036c0408459ba2a16aff75223ad0d29287\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T08:54:49Z\\\",\\\"message\\\":\\\"2026-01-27T08:54:04+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_51a03737-5946-4b3b-8d44-7172f998ced1\\\\n2026-01-27T08:54:04+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_51a03737-5946-4b3b-8d44-7172f998ced1 to /host/opt/cni/bin/\\\\n2026-01-27T08:54:04Z [verbose] multus-daemon started\\\\n2026-01-27T08:54:04Z [verbose] 
Readiness Indicator file check\\\\n2026-01-27T08:54:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-258cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cqdrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:56Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:56 crc kubenswrapper[4985]: I0127 08:54:56.829454 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:56 crc kubenswrapper[4985]: I0127 08:54:56.829543 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:56 crc kubenswrapper[4985]: I0127 08:54:56.829558 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:56 crc kubenswrapper[4985]: I0127 08:54:56.829581 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:56 crc kubenswrapper[4985]: I0127 08:54:56.829597 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:56Z","lastTransitionTime":"2026-01-27T08:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:56 crc kubenswrapper[4985]: I0127 08:54:56.842109 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a4e626ecb5f5c1f869a0271e66541b76d4ab763322e2d626c4d86af64bccad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:56Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:56 crc kubenswrapper[4985]: I0127 08:54:56.933185 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:56 crc kubenswrapper[4985]: I0127 08:54:56.933893 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:56 crc kubenswrapper[4985]: I0127 08:54:56.934061 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:56 crc kubenswrapper[4985]: I0127 08:54:56.934224 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:56 crc kubenswrapper[4985]: I0127 08:54:56.934381 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:56Z","lastTransitionTime":"2026-01-27T08:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:56 crc kubenswrapper[4985]: I0127 08:54:56.985746 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kqdf4_c6239c91-d93d-4db8-ac4b-d44ddbc7c100/ovnkube-controller/3.log" Jan 27 08:54:56 crc kubenswrapper[4985]: I0127 08:54:56.991080 4985 scope.go:117] "RemoveContainer" containerID="bef202d5ab4adb68897c4c8da62be022e95cd949fa91b3f80e3df449f0d72181" Jan 27 08:54:56 crc kubenswrapper[4985]: E0127 08:54:56.991301 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-kqdf4_openshift-ovn-kubernetes(c6239c91-d93d-4db8-ac4b-d44ddbc7c100)\"" pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" podUID="c6239c91-d93d-4db8-ac4b-d44ddbc7c100" Jan 27 08:54:57 crc kubenswrapper[4985]: I0127 08:54:57.007484 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"797a9d27-06cc-44aa-811f-881bdf2d1e7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a5ce9f15fd3505c744967be012b7eed6d909724e9b71ba07d7e9d68eb40cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88c257e53b27d6dda5999a3053f9c62b54331bf034225c118dddfed685549827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87667099919fbe74e54396b3e8b538627769f2401d318326eb9a1d6a88bda640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3ad4e68bbb4b8338f534dc026ca9f1fe9fb161b29fee8945f1789a66965dea2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://c3ad4e68bbb4b8338f534dc026ca9f1fe9fb161b29fee8945f1789a66965dea2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:57Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:57 crc kubenswrapper[4985]: I0127 08:54:57.022007 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5z8px" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7997cb84-9997-4cf4-8794-2eb145a5c324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://520da40d
4496e0fd47e4e091e9eba349e1bd3bf2f97898f6aac53a1b1b8925f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkwj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5z8px\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:57Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:57 crc kubenswrapper[4985]: I0127 08:54:57.042730 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:57 crc kubenswrapper[4985]: I0127 08:54:57.043145 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:57 crc kubenswrapper[4985]: I0127 08:54:57.043299 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:57 crc 
kubenswrapper[4985]: I0127 08:54:57.043438 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:57 crc kubenswrapper[4985]: I0127 08:54:57.043630 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:57Z","lastTransitionTime":"2026-01-27T08:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:57 crc kubenswrapper[4985]: I0127 08:54:57.048234 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rfnvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea1cd1b8-a185-461d-9302-aa03be205225\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67af065c12addbef44849906b964718074de1f0d7a0b87a028bf989ec28f82ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3459a9dccba77fc189ecac1d0186520dfbd4ce6ea40d9e5b2fdcf2ddabd7875d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3459a9dccba77fc189ecac1d0186520dfbd4ce6ea40d9e5b2fdcf2ddabd7875d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d313b9f0d95603fdc152e96acf5d788d7792df3413c3b8bc25aad20bb702c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d313b9f0d95603fdc152e96acf5d788d7792df3413c3b8bc25aad20bb702c5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4035a5eff34b30f1fab897a8d8b6026437a1ab1f521f062241a79d070b94bdc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4035a5eff34b30f1fab897a8d8b6026437a1ab1f521f062241a79d070b94bdc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6da775953e78b30b1f606e26608dbedae8e08fb12eba87f65356bc439079f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6da775953e78b30b1f606e26608dbedae8e08fb12eba87f65356bc439079f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b32797fe9e64f0e4f0332fa40261fb0feb75b6dc783ff78b0aea5ccb0d4a9ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b32797fe9e64f0e4f0332fa40261fb0feb75b6dc783ff78b0aea5ccb0d4a9ee5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcaa036060edc039a0c9621176b80823cb9913176b1a1e6b06aa033a8e479b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc
aa036060edc039a0c9621176b80823cb9913176b1a1e6b06aa033a8e479b96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfgfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rfnvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:57Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:57 crc kubenswrapper[4985]: I0127 08:54:57.067762 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s74hq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad423d26-ea00-4a86-8eed-bba6433ce382\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://031493bfd9eba63a4627b6a0ec45bc556e8a6cae213a84f7b158e2bede2da5a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hg5xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42a6371f6e7f2b3811af6ec717f15eff6a85c
8c39caf011e1173a4fcaf20f29a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hg5xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-s74hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:57Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:57 crc kubenswrapper[4985]: I0127 08:54:57.100322 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7e08bcf-6937-4593-9736-170df821dd88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1dab42ffc04840ad2205b50c568ece39850188ccd9f97d8186c7f0b86e06805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7f7c69cc8bc134cfbf3552f10eb7abd8b5df115fef6ce86a2658259a9635bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0450f271c01ac528419de0165dd9b88b8d5913062a7cd5affce7050842f91a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b33bda4f575f8f14a3439f00a39eda2ee61f7b38500f5372c17d014df1098535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c502ce0c2374fb1b79cd8dc7e7c9eba4a67f998fee012827524d775eccbe4de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d423f866a41d6c8a926fbabe4f0a69519e10a9448502c6363b35cb550ba904d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d423f866a41d6c8a926fbabe4f0a69519e10a9448502c6363b35cb550ba904d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T08:53:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://917f905b67f494d8d44f5a00e53ae72cb9c2b307f7e7da17af7fd20db4ed8704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://917f905b67f494d8d44f5a00e53ae72cb9c2b307f7e7da17af7fd20db4ed8704\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8692ba1b6e917a57164d549dba0fc26ad49b14b9bc8a39772daddfcd5f70a008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8692ba1b6e917a57164d549dba0fc26ad49b14b9bc8a39772daddfcd5f70a008\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:57Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:57 crc kubenswrapper[4985]: I0127 08:54:57.128932 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a4e626ecb5f5c1f869a0271e66541b76d4ab763322e2d626c4d86af64bccad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:57Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:57 crc kubenswrapper[4985]: I0127 08:54:57.148022 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:57 crc kubenswrapper[4985]: I0127 08:54:57.148118 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:57 crc kubenswrapper[4985]: I0127 08:54:57.148144 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:57 crc kubenswrapper[4985]: I0127 08:54:57.148179 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:57 crc kubenswrapper[4985]: I0127 08:54:57.148241 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:57Z","lastTransitionTime":"2026-01-27T08:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:57 crc kubenswrapper[4985]: I0127 08:54:57.150591 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:57Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:57 crc kubenswrapper[4985]: I0127 08:54:57.172397 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cqdrf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ddda14a-730e-4c1f-afea-07c95221ba04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://611086eedd8a7318bff583bd65a81b3d4dd59b8be78744d6b5280bcbf9bd74b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c411afefb82d30542592e4959d2495036c0408459ba2a16aff75223ad0d29287\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T08:54:49Z\\\",\\\"message\\\":\\\"2026-01-27T08:54:04+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_51a03737-5946-4b3b-8d44-7172f998ced1\\\\n2026-01-27T08:54:04+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_51a03737-5946-4b3b-8d44-7172f998ced1 to /host/opt/cni/bin/\\\\n2026-01-27T08:54:04Z [verbose] multus-daemon started\\\\n2026-01-27T08:54:04Z [verbose] 
Readiness Indicator file check\\\\n2026-01-27T08:54:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-258cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cqdrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:57Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:57 crc kubenswrapper[4985]: I0127 08:54:57.190940 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:57Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:57 crc kubenswrapper[4985]: I0127 08:54:57.217593 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://740b7fd6437a51492236c929096639016d34d684d2742404a428355b3d9ee51e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e53f9d50e5099f9c55a0bf8a63c31bf7687c7b9df334251189af1c4c02df5dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b176f2daca0d26fc9e3d30a8547b5526c90221ce2932c01c876988e8606b4dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dbb8096184a88c573060b14b7b5cac9a5abdc5af97ad7efdc50ba216e77522c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f164f612d1435a7f2a6677cb810bc836a44faa77952909e4290e31806ae631ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bfc598056438a398f1ea20e047af950aa776a7e43bdb694728c1c138dff6440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bef202d5ab4adb68897c4c8da62be022e95cd949fa91b3f80e3df449f0d72181\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bef202d5ab4adb68897c4c8da62be022e95cd949fa91b3f80e3df449f0d72181\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T08:54:55Z\\\",\\\"message\\\":\\\"server-manager-metrics for network=default\\\\nI0127 08:54:55.394238 6962 obj_retry.go:303] Retry object setup: *v1.Pod openshift-etcd/etcd-crc\\\\nI0127 08:54:55.394223 6962 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-lp9n5\\\\nI0127 08:54:55.394254 6962 obj_retry.go:365] Adding new object: 
*v1.Pod openshift-etcd/etcd-crc\\\\nF0127 08:54:55.394259 6962 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:55Z is after 2025-08-24T17:21:41Z]\\\\nI0127 08:54:55.394268 6962 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-lp9n5 in no\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-kqdf4_openshift-ovn-kubernetes(c6239c91-d93d-4db8-ac4b-d44ddbc7c100)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a1f1e90a882fa697c344cb87db77d2f0b02b33920f6555ca549ad51b1fb5c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c1c716f751dd0e139554e4bef4fc33694f4afa5482605f651326882af8d99c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c1c716f751dd0e139
554e4bef4fc33694f4afa5482605f651326882af8d99c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kqdf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:57Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:57 crc kubenswrapper[4985]: I0127 08:54:57.234215 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cscdv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c870945-eecc-4954-a91b-d02cef8f98e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:17Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k24x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k24x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cscdv\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:57Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:57 crc kubenswrapper[4985]: I0127 08:54:57.250738 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:57 crc kubenswrapper[4985]: I0127 08:54:57.251025 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:57 crc kubenswrapper[4985]: I0127 08:54:57.251174 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:57 crc kubenswrapper[4985]: I0127 08:54:57.251281 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:57 crc kubenswrapper[4985]: I0127 08:54:57.251381 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:57Z","lastTransitionTime":"2026-01-27T08:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:57 crc kubenswrapper[4985]: I0127 08:54:57.251772 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9155a9e-2bdf-4f6f-baa7-0d7c6baac37d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f776820a00f7ca6b6867a188ee633debccb2bc22aaff8c659e452c2667933e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba179c4dde0de76f4a8dbfb8ceb98d2e0dbead5e955bd71171914fd913894412\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dd58dd369dc5654aa8ab2d34c77a69607b51e97ae36947b88caf6a2de7c5fbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd67389a909eaadfb147d17b98a2147bb96aea606ba994568389cb5be9ee6f93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b491f71d942363b5d2bab13a5219b0eb9b7f7617c0978bfe012946aa241a13c1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"message\\\":\\\" 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 08:53:55.866201 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 08:53:55.866247 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 08:53:55.866252 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 08:53:55.866257 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0127 08:53:55.866955 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-231041040/tls.crt::/tmp/serving-cert-231041040/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769504019\\\\\\\\\\\\\\\" (2026-01-27 08:53:39 +0000 UTC to 2026-02-26 08:53:40 +0000 UTC (now=2026-01-27 08:53:55.866923826 +0000 UTC))\\\\\\\"\\\\nI0127 08:53:55.867136 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769504030\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769504030\\\\\\\\\\\\\\\" (2026-01-27 07:53:50 +0000 UTC to 2027-01-27 07:53:50 +0000 UTC (now=2026-01-27 
08:53:55.867111136 +0000 UTC))\\\\\\\"\\\\nI0127 08:53:55.867161 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 08:53:55.867183 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0127 08:53:55.867215 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-231041040/tls.crt::/tmp/serving-cert-231041040/tls.key\\\\\\\"\\\\nI0127 08:53:55.867343 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0127 08:53:55.868835 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b09da7bae5a0f15ab2c5463373193603d1412170828b99490007cee7b067df63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739\\\",\\\"image\
\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:57Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:57 crc kubenswrapper[4985]: I0127 08:54:57.266343 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b46c56d522e6b6c9c18b7bd9fc104d6657644d0b41fdcad15ac9fb802ca5fd0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9506e391bd8c293440c55dc71a239b25d6d6b3431ad22c5ca699baa4d09c325b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:57Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:57 crc kubenswrapper[4985]: I0127 08:54:57.280185 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:57Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:57 crc kubenswrapper[4985]: I0127 08:54:57.297622 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed8d2575835b3ab3d3197bbfbc027e2de0e2aefe10c971838bf4b72f804c4582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T08:54:57Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:57 crc kubenswrapper[4985]: I0127 08:54:57.315304 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c066dd2f-48d4-4f4f-935d-0e772678e610\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1beb432862ae348b405b1a51f1a187fd469f9df0605fa9e8a39b1c9a34ae8e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vtb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d6574971ded4a24364f08396c4f0523ddfd74f76d0bb386072ffb86c812d7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vtb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lp9n5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:57Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:57 crc kubenswrapper[4985]: I0127 08:54:57.330962 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlccz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ba17902-809a-4efc-9a8c-6f9b611c2af9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00cd6621bcfc77aeb9ecc4d6894b33cd3ef1f975b3c593e0eff812ef51978f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:54:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:57Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:57 crc kubenswrapper[4985]: I0127 08:54:57.348838 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46f88416-c54a-4ab5-a4d3-07f71bae9c33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d59e5d6d903bfaaedef392fdd94df2da1e4d965ddeb6b25d64b225725b39dd0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c207bc405194647411332c492e34b0c95d3321cf626a57a4a750fbcea34f54ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8ce06c7792357df5655ff0be13f13c8b06bb1a2eb27c4bbbff11b94a62f29e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7554f25de6e4649eafd52356c5c255d0a2dc73b0dd6c9a9bcea3ee383bef17db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:54:57Z is after 2025-08-24T17:21:41Z" Jan 27 08:54:57 crc kubenswrapper[4985]: I0127 08:54:57.354976 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:57 crc kubenswrapper[4985]: I0127 08:54:57.355046 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 08:54:57 crc kubenswrapper[4985]: I0127 08:54:57.355061 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:57 crc kubenswrapper[4985]: I0127 08:54:57.355087 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:57 crc kubenswrapper[4985]: I0127 08:54:57.355100 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:57Z","lastTransitionTime":"2026-01-27T08:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:57 crc kubenswrapper[4985]: I0127 08:54:57.449221 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 23:57:44.299962404 +0000 UTC Jan 27 08:54:57 crc kubenswrapper[4985]: I0127 08:54:57.458388 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:57 crc kubenswrapper[4985]: I0127 08:54:57.458439 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:57 crc kubenswrapper[4985]: I0127 08:54:57.458452 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:57 crc kubenswrapper[4985]: I0127 08:54:57.458473 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:57 crc kubenswrapper[4985]: I0127 08:54:57.458488 4985 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:57Z","lastTransitionTime":"2026-01-27T08:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:57 crc kubenswrapper[4985]: I0127 08:54:57.562346 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:57 crc kubenswrapper[4985]: I0127 08:54:57.562398 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:57 crc kubenswrapper[4985]: I0127 08:54:57.562421 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:57 crc kubenswrapper[4985]: I0127 08:54:57.562447 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:57 crc kubenswrapper[4985]: I0127 08:54:57.562467 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:57Z","lastTransitionTime":"2026-01-27T08:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:57 crc kubenswrapper[4985]: I0127 08:54:57.665662 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:57 crc kubenswrapper[4985]: I0127 08:54:57.665744 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:57 crc kubenswrapper[4985]: I0127 08:54:57.665768 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:57 crc kubenswrapper[4985]: I0127 08:54:57.665805 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:57 crc kubenswrapper[4985]: I0127 08:54:57.665831 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:57Z","lastTransitionTime":"2026-01-27T08:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:57 crc kubenswrapper[4985]: I0127 08:54:57.769742 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:57 crc kubenswrapper[4985]: I0127 08:54:57.769821 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:57 crc kubenswrapper[4985]: I0127 08:54:57.769849 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:57 crc kubenswrapper[4985]: I0127 08:54:57.769884 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:57 crc kubenswrapper[4985]: I0127 08:54:57.769909 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:57Z","lastTransitionTime":"2026-01-27T08:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:57 crc kubenswrapper[4985]: I0127 08:54:57.872842 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:57 crc kubenswrapper[4985]: I0127 08:54:57.872898 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:57 crc kubenswrapper[4985]: I0127 08:54:57.872910 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:57 crc kubenswrapper[4985]: I0127 08:54:57.872929 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:57 crc kubenswrapper[4985]: I0127 08:54:57.872941 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:57Z","lastTransitionTime":"2026-01-27T08:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:57 crc kubenswrapper[4985]: I0127 08:54:57.977376 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:57 crc kubenswrapper[4985]: I0127 08:54:57.977449 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:57 crc kubenswrapper[4985]: I0127 08:54:57.977474 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:57 crc kubenswrapper[4985]: I0127 08:54:57.977500 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:57 crc kubenswrapper[4985]: I0127 08:54:57.977543 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:57Z","lastTransitionTime":"2026-01-27T08:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:58 crc kubenswrapper[4985]: I0127 08:54:58.080935 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:58 crc kubenswrapper[4985]: I0127 08:54:58.081026 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:58 crc kubenswrapper[4985]: I0127 08:54:58.081044 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:58 crc kubenswrapper[4985]: I0127 08:54:58.081068 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:58 crc kubenswrapper[4985]: I0127 08:54:58.081095 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:58Z","lastTransitionTime":"2026-01-27T08:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:58 crc kubenswrapper[4985]: I0127 08:54:58.183790 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:58 crc kubenswrapper[4985]: I0127 08:54:58.183830 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:58 crc kubenswrapper[4985]: I0127 08:54:58.183841 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:58 crc kubenswrapper[4985]: I0127 08:54:58.183860 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:58 crc kubenswrapper[4985]: I0127 08:54:58.183873 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:58Z","lastTransitionTime":"2026-01-27T08:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:58 crc kubenswrapper[4985]: I0127 08:54:58.287596 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:58 crc kubenswrapper[4985]: I0127 08:54:58.287655 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:58 crc kubenswrapper[4985]: I0127 08:54:58.287668 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:58 crc kubenswrapper[4985]: I0127 08:54:58.287687 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:58 crc kubenswrapper[4985]: I0127 08:54:58.287701 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:58Z","lastTransitionTime":"2026-01-27T08:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:58 crc kubenswrapper[4985]: I0127 08:54:58.391014 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:58 crc kubenswrapper[4985]: I0127 08:54:58.391061 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:58 crc kubenswrapper[4985]: I0127 08:54:58.391073 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:58 crc kubenswrapper[4985]: I0127 08:54:58.391092 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:58 crc kubenswrapper[4985]: I0127 08:54:58.391105 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:58Z","lastTransitionTime":"2026-01-27T08:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:54:58 crc kubenswrapper[4985]: I0127 08:54:58.449771 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 08:32:05.042499644 +0000 UTC Jan 27 08:54:58 crc kubenswrapper[4985]: I0127 08:54:58.450947 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 08:54:58 crc kubenswrapper[4985]: I0127 08:54:58.450997 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-cscdv" Jan 27 08:54:58 crc kubenswrapper[4985]: E0127 08:54:58.451228 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 08:54:58 crc kubenswrapper[4985]: I0127 08:54:58.451437 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 08:54:58 crc kubenswrapper[4985]: I0127 08:54:58.451581 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 08:54:58 crc kubenswrapper[4985]: E0127 08:54:58.451712 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 08:54:58 crc kubenswrapper[4985]: E0127 08:54:58.452102 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 08:54:58 crc kubenswrapper[4985]: E0127 08:54:58.452347 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cscdv" podUID="5c870945-eecc-4954-a91b-d02cef8f98e2" Jan 27 08:54:58 crc kubenswrapper[4985]: I0127 08:54:58.494649 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:58 crc kubenswrapper[4985]: I0127 08:54:58.494707 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:58 crc kubenswrapper[4985]: I0127 08:54:58.494735 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:58 crc kubenswrapper[4985]: I0127 08:54:58.494761 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:58 crc kubenswrapper[4985]: I0127 08:54:58.494781 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:58Z","lastTransitionTime":"2026-01-27T08:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:58 crc kubenswrapper[4985]: I0127 08:54:58.597747 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:58 crc kubenswrapper[4985]: I0127 08:54:58.597797 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:58 crc kubenswrapper[4985]: I0127 08:54:58.597814 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:58 crc kubenswrapper[4985]: I0127 08:54:58.597837 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:58 crc kubenswrapper[4985]: I0127 08:54:58.597854 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:58Z","lastTransitionTime":"2026-01-27T08:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:58 crc kubenswrapper[4985]: I0127 08:54:58.700849 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:58 crc kubenswrapper[4985]: I0127 08:54:58.701206 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:58 crc kubenswrapper[4985]: I0127 08:54:58.701318 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:58 crc kubenswrapper[4985]: I0127 08:54:58.701416 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:58 crc kubenswrapper[4985]: I0127 08:54:58.701594 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:58Z","lastTransitionTime":"2026-01-27T08:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:58 crc kubenswrapper[4985]: I0127 08:54:58.804323 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:58 crc kubenswrapper[4985]: I0127 08:54:58.804362 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:58 crc kubenswrapper[4985]: I0127 08:54:58.804375 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:58 crc kubenswrapper[4985]: I0127 08:54:58.804393 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:58 crc kubenswrapper[4985]: I0127 08:54:58.804405 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:58Z","lastTransitionTime":"2026-01-27T08:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:58 crc kubenswrapper[4985]: I0127 08:54:58.906997 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:58 crc kubenswrapper[4985]: I0127 08:54:58.907032 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:58 crc kubenswrapper[4985]: I0127 08:54:58.907041 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:58 crc kubenswrapper[4985]: I0127 08:54:58.907056 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:58 crc kubenswrapper[4985]: I0127 08:54:58.907065 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:58Z","lastTransitionTime":"2026-01-27T08:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:59 crc kubenswrapper[4985]: I0127 08:54:59.010181 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:59 crc kubenswrapper[4985]: I0127 08:54:59.010226 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:59 crc kubenswrapper[4985]: I0127 08:54:59.010237 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:59 crc kubenswrapper[4985]: I0127 08:54:59.010256 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:59 crc kubenswrapper[4985]: I0127 08:54:59.010267 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:59Z","lastTransitionTime":"2026-01-27T08:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:59 crc kubenswrapper[4985]: I0127 08:54:59.113168 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:59 crc kubenswrapper[4985]: I0127 08:54:59.113654 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:59 crc kubenswrapper[4985]: I0127 08:54:59.113846 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:59 crc kubenswrapper[4985]: I0127 08:54:59.113981 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:59 crc kubenswrapper[4985]: I0127 08:54:59.114105 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:59Z","lastTransitionTime":"2026-01-27T08:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:59 crc kubenswrapper[4985]: I0127 08:54:59.217402 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:59 crc kubenswrapper[4985]: I0127 08:54:59.217452 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:59 crc kubenswrapper[4985]: I0127 08:54:59.217463 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:59 crc kubenswrapper[4985]: I0127 08:54:59.217479 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:59 crc kubenswrapper[4985]: I0127 08:54:59.217489 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:59Z","lastTransitionTime":"2026-01-27T08:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:59 crc kubenswrapper[4985]: I0127 08:54:59.320792 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:59 crc kubenswrapper[4985]: I0127 08:54:59.320835 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:59 crc kubenswrapper[4985]: I0127 08:54:59.320843 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:59 crc kubenswrapper[4985]: I0127 08:54:59.320860 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:59 crc kubenswrapper[4985]: I0127 08:54:59.320872 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:59Z","lastTransitionTime":"2026-01-27T08:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:59 crc kubenswrapper[4985]: I0127 08:54:59.424013 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:59 crc kubenswrapper[4985]: I0127 08:54:59.424063 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:59 crc kubenswrapper[4985]: I0127 08:54:59.424075 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:59 crc kubenswrapper[4985]: I0127 08:54:59.424093 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:59 crc kubenswrapper[4985]: I0127 08:54:59.424104 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:59Z","lastTransitionTime":"2026-01-27T08:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:59 crc kubenswrapper[4985]: I0127 08:54:59.450845 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 01:45:14.448253622 +0000 UTC Jan 27 08:54:59 crc kubenswrapper[4985]: I0127 08:54:59.527168 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:59 crc kubenswrapper[4985]: I0127 08:54:59.527701 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:59 crc kubenswrapper[4985]: I0127 08:54:59.527718 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:59 crc kubenswrapper[4985]: I0127 08:54:59.527738 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:59 crc kubenswrapper[4985]: I0127 08:54:59.527751 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:59Z","lastTransitionTime":"2026-01-27T08:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:59 crc kubenswrapper[4985]: I0127 08:54:59.631402 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:59 crc kubenswrapper[4985]: I0127 08:54:59.631794 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:59 crc kubenswrapper[4985]: I0127 08:54:59.631888 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:59 crc kubenswrapper[4985]: I0127 08:54:59.631991 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:59 crc kubenswrapper[4985]: I0127 08:54:59.632078 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:59Z","lastTransitionTime":"2026-01-27T08:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:59 crc kubenswrapper[4985]: I0127 08:54:59.734827 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:59 crc kubenswrapper[4985]: I0127 08:54:59.734877 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:59 crc kubenswrapper[4985]: I0127 08:54:59.734889 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:59 crc kubenswrapper[4985]: I0127 08:54:59.734909 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:59 crc kubenswrapper[4985]: I0127 08:54:59.734927 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:59Z","lastTransitionTime":"2026-01-27T08:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:59 crc kubenswrapper[4985]: I0127 08:54:59.838467 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:59 crc kubenswrapper[4985]: I0127 08:54:59.838526 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:59 crc kubenswrapper[4985]: I0127 08:54:59.838543 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:59 crc kubenswrapper[4985]: I0127 08:54:59.838564 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:59 crc kubenswrapper[4985]: I0127 08:54:59.838584 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:59Z","lastTransitionTime":"2026-01-27T08:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:54:59 crc kubenswrapper[4985]: I0127 08:54:59.941943 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:54:59 crc kubenswrapper[4985]: I0127 08:54:59.942020 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:54:59 crc kubenswrapper[4985]: I0127 08:54:59.942032 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:54:59 crc kubenswrapper[4985]: I0127 08:54:59.942049 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:54:59 crc kubenswrapper[4985]: I0127 08:54:59.942062 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:54:59Z","lastTransitionTime":"2026-01-27T08:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:00 crc kubenswrapper[4985]: I0127 08:55:00.044848 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:00 crc kubenswrapper[4985]: I0127 08:55:00.045185 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:00 crc kubenswrapper[4985]: I0127 08:55:00.045263 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:00 crc kubenswrapper[4985]: I0127 08:55:00.045360 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:00 crc kubenswrapper[4985]: I0127 08:55:00.045451 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:00Z","lastTransitionTime":"2026-01-27T08:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:00 crc kubenswrapper[4985]: I0127 08:55:00.149268 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:00 crc kubenswrapper[4985]: I0127 08:55:00.150112 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:00 crc kubenswrapper[4985]: I0127 08:55:00.150589 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:00 crc kubenswrapper[4985]: I0127 08:55:00.151082 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:00 crc kubenswrapper[4985]: I0127 08:55:00.151252 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:00Z","lastTransitionTime":"2026-01-27T08:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:00 crc kubenswrapper[4985]: I0127 08:55:00.255202 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:00 crc kubenswrapper[4985]: I0127 08:55:00.255260 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:00 crc kubenswrapper[4985]: I0127 08:55:00.255278 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:00 crc kubenswrapper[4985]: I0127 08:55:00.255304 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:00 crc kubenswrapper[4985]: I0127 08:55:00.255322 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:00Z","lastTransitionTime":"2026-01-27T08:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:00 crc kubenswrapper[4985]: I0127 08:55:00.357948 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:00 crc kubenswrapper[4985]: I0127 08:55:00.358020 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:00 crc kubenswrapper[4985]: I0127 08:55:00.358043 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:00 crc kubenswrapper[4985]: I0127 08:55:00.358072 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:00 crc kubenswrapper[4985]: I0127 08:55:00.358098 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:00Z","lastTransitionTime":"2026-01-27T08:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:00 crc kubenswrapper[4985]: I0127 08:55:00.440301 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 08:55:00 crc kubenswrapper[4985]: I0127 08:55:00.440461 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 08:55:00 crc kubenswrapper[4985]: I0127 08:55:00.440684 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 08:55:00 crc kubenswrapper[4985]: E0127 08:55:00.440864 4985 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 08:55:00 crc kubenswrapper[4985]: E0127 08:55:00.440948 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 08:56:04.440923259 +0000 UTC m=+148.732018130 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 08:55:00 crc kubenswrapper[4985]: E0127 08:55:00.441230 4985 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 08:55:00 crc kubenswrapper[4985]: E0127 08:55:00.441271 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 08:56:04.44125642 +0000 UTC m=+148.732351291 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 08:55:00 crc kubenswrapper[4985]: E0127 08:55:00.441659 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 08:56:04.44160765 +0000 UTC m=+148.732702531 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 08:55:00 crc kubenswrapper[4985]: I0127 08:55:00.451230 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 09:18:49.753053637 +0000 UTC Jan 27 08:55:00 crc kubenswrapper[4985]: I0127 08:55:00.452391 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 08:55:00 crc kubenswrapper[4985]: I0127 08:55:00.452435 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 08:55:00 crc kubenswrapper[4985]: I0127 08:55:00.452490 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cscdv" Jan 27 08:55:00 crc kubenswrapper[4985]: I0127 08:55:00.452898 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 08:55:00 crc kubenswrapper[4985]: E0127 08:55:00.452880 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 08:55:00 crc kubenswrapper[4985]: E0127 08:55:00.453084 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 08:55:00 crc kubenswrapper[4985]: E0127 08:55:00.453226 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 08:55:00 crc kubenswrapper[4985]: E0127 08:55:00.453364 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cscdv" podUID="5c870945-eecc-4954-a91b-d02cef8f98e2" Jan 27 08:55:00 crc kubenswrapper[4985]: I0127 08:55:00.461001 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:00 crc kubenswrapper[4985]: I0127 08:55:00.461069 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:00 crc kubenswrapper[4985]: I0127 08:55:00.461096 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:00 crc kubenswrapper[4985]: I0127 08:55:00.461128 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:00 crc kubenswrapper[4985]: I0127 08:55:00.461153 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:00Z","lastTransitionTime":"2026-01-27T08:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:00 crc kubenswrapper[4985]: I0127 08:55:00.542302 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 08:55:00 crc kubenswrapper[4985]: I0127 08:55:00.542370 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 08:55:00 crc kubenswrapper[4985]: E0127 08:55:00.542645 4985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 08:55:00 crc kubenswrapper[4985]: E0127 08:55:00.542727 4985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 08:55:00 crc kubenswrapper[4985]: E0127 08:55:00.542746 4985 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 08:55:00 crc kubenswrapper[4985]: E0127 08:55:00.542666 4985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 08:55:00 crc 
kubenswrapper[4985]: E0127 08:55:00.542795 4985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 08:55:00 crc kubenswrapper[4985]: E0127 08:55:00.542817 4985 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 08:55:00 crc kubenswrapper[4985]: E0127 08:55:00.542827 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 08:56:04.542801204 +0000 UTC m=+148.833896055 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 08:55:00 crc kubenswrapper[4985]: E0127 08:55:00.542932 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 08:56:04.542907467 +0000 UTC m=+148.834002338 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 08:55:00 crc kubenswrapper[4985]: I0127 08:55:00.564503 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:00 crc kubenswrapper[4985]: I0127 08:55:00.564587 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:00 crc kubenswrapper[4985]: I0127 08:55:00.564600 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:00 crc kubenswrapper[4985]: I0127 08:55:00.564624 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:00 crc kubenswrapper[4985]: I0127 08:55:00.564637 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:00Z","lastTransitionTime":"2026-01-27T08:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:00 crc kubenswrapper[4985]: I0127 08:55:00.667865 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:00 crc kubenswrapper[4985]: I0127 08:55:00.667930 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:00 crc kubenswrapper[4985]: I0127 08:55:00.667943 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:00 crc kubenswrapper[4985]: I0127 08:55:00.667970 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:00 crc kubenswrapper[4985]: I0127 08:55:00.667985 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:00Z","lastTransitionTime":"2026-01-27T08:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:00 crc kubenswrapper[4985]: I0127 08:55:00.771779 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:00 crc kubenswrapper[4985]: I0127 08:55:00.771853 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:00 crc kubenswrapper[4985]: I0127 08:55:00.771872 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:00 crc kubenswrapper[4985]: I0127 08:55:00.771903 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:00 crc kubenswrapper[4985]: I0127 08:55:00.771923 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:00Z","lastTransitionTime":"2026-01-27T08:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:00 crc kubenswrapper[4985]: I0127 08:55:00.927949 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:00 crc kubenswrapper[4985]: I0127 08:55:00.928014 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:00 crc kubenswrapper[4985]: I0127 08:55:00.928026 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:00 crc kubenswrapper[4985]: I0127 08:55:00.928055 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:00 crc kubenswrapper[4985]: I0127 08:55:00.928068 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:00Z","lastTransitionTime":"2026-01-27T08:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:01 crc kubenswrapper[4985]: I0127 08:55:01.030536 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:01 crc kubenswrapper[4985]: I0127 08:55:01.030574 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:01 crc kubenswrapper[4985]: I0127 08:55:01.030584 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:01 crc kubenswrapper[4985]: I0127 08:55:01.030601 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:01 crc kubenswrapper[4985]: I0127 08:55:01.030614 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:01Z","lastTransitionTime":"2026-01-27T08:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:01 crc kubenswrapper[4985]: I0127 08:55:01.134120 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:01 crc kubenswrapper[4985]: I0127 08:55:01.134198 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:01 crc kubenswrapper[4985]: I0127 08:55:01.134223 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:01 crc kubenswrapper[4985]: I0127 08:55:01.134257 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:01 crc kubenswrapper[4985]: I0127 08:55:01.134291 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:01Z","lastTransitionTime":"2026-01-27T08:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:01 crc kubenswrapper[4985]: I0127 08:55:01.238096 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:01 crc kubenswrapper[4985]: I0127 08:55:01.238167 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:01 crc kubenswrapper[4985]: I0127 08:55:01.238195 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:01 crc kubenswrapper[4985]: I0127 08:55:01.238230 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:01 crc kubenswrapper[4985]: I0127 08:55:01.238253 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:01Z","lastTransitionTime":"2026-01-27T08:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:01 crc kubenswrapper[4985]: I0127 08:55:01.341285 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:01 crc kubenswrapper[4985]: I0127 08:55:01.341338 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:01 crc kubenswrapper[4985]: I0127 08:55:01.341353 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:01 crc kubenswrapper[4985]: I0127 08:55:01.341423 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:01 crc kubenswrapper[4985]: I0127 08:55:01.341443 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:01Z","lastTransitionTime":"2026-01-27T08:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:01 crc kubenswrapper[4985]: I0127 08:55:01.444937 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:01 crc kubenswrapper[4985]: I0127 08:55:01.445322 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:01 crc kubenswrapper[4985]: I0127 08:55:01.445405 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:01 crc kubenswrapper[4985]: I0127 08:55:01.445556 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:01 crc kubenswrapper[4985]: I0127 08:55:01.445640 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:01Z","lastTransitionTime":"2026-01-27T08:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:01 crc kubenswrapper[4985]: I0127 08:55:01.451626 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 14:28:28.272556685 +0000 UTC Jan 27 08:55:01 crc kubenswrapper[4985]: I0127 08:55:01.547419 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:01 crc kubenswrapper[4985]: I0127 08:55:01.547467 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:01 crc kubenswrapper[4985]: I0127 08:55:01.547479 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:01 crc kubenswrapper[4985]: I0127 08:55:01.547499 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:01 crc kubenswrapper[4985]: I0127 08:55:01.547536 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:01Z","lastTransitionTime":"2026-01-27T08:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:01 crc kubenswrapper[4985]: I0127 08:55:01.650668 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:01 crc kubenswrapper[4985]: I0127 08:55:01.650740 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:01 crc kubenswrapper[4985]: I0127 08:55:01.650758 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:01 crc kubenswrapper[4985]: I0127 08:55:01.650782 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:01 crc kubenswrapper[4985]: I0127 08:55:01.650794 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:01Z","lastTransitionTime":"2026-01-27T08:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:01 crc kubenswrapper[4985]: I0127 08:55:01.753865 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:01 crc kubenswrapper[4985]: I0127 08:55:01.754236 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:01 crc kubenswrapper[4985]: I0127 08:55:01.754325 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:01 crc kubenswrapper[4985]: I0127 08:55:01.754433 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:01 crc kubenswrapper[4985]: I0127 08:55:01.754551 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:01Z","lastTransitionTime":"2026-01-27T08:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:01 crc kubenswrapper[4985]: I0127 08:55:01.858440 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:01 crc kubenswrapper[4985]: I0127 08:55:01.858534 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:01 crc kubenswrapper[4985]: I0127 08:55:01.858556 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:01 crc kubenswrapper[4985]: I0127 08:55:01.858582 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:01 crc kubenswrapper[4985]: I0127 08:55:01.858603 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:01Z","lastTransitionTime":"2026-01-27T08:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:01 crc kubenswrapper[4985]: I0127 08:55:01.961661 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:01 crc kubenswrapper[4985]: I0127 08:55:01.961788 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:01 crc kubenswrapper[4985]: I0127 08:55:01.961806 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:01 crc kubenswrapper[4985]: I0127 08:55:01.961832 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:01 crc kubenswrapper[4985]: I0127 08:55:01.961851 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:01Z","lastTransitionTime":"2026-01-27T08:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:02 crc kubenswrapper[4985]: I0127 08:55:02.065230 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:02 crc kubenswrapper[4985]: I0127 08:55:02.065302 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:02 crc kubenswrapper[4985]: I0127 08:55:02.065326 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:02 crc kubenswrapper[4985]: I0127 08:55:02.065357 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:02 crc kubenswrapper[4985]: I0127 08:55:02.065383 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:02Z","lastTransitionTime":"2026-01-27T08:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:02 crc kubenswrapper[4985]: I0127 08:55:02.169373 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:02 crc kubenswrapper[4985]: I0127 08:55:02.169479 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:02 crc kubenswrapper[4985]: I0127 08:55:02.169500 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:02 crc kubenswrapper[4985]: I0127 08:55:02.169558 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:02 crc kubenswrapper[4985]: I0127 08:55:02.169577 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:02Z","lastTransitionTime":"2026-01-27T08:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:02 crc kubenswrapper[4985]: I0127 08:55:02.274088 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:02 crc kubenswrapper[4985]: I0127 08:55:02.274144 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:02 crc kubenswrapper[4985]: I0127 08:55:02.274162 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:02 crc kubenswrapper[4985]: I0127 08:55:02.274188 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:02 crc kubenswrapper[4985]: I0127 08:55:02.274209 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:02Z","lastTransitionTime":"2026-01-27T08:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:02 crc kubenswrapper[4985]: I0127 08:55:02.378283 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:02 crc kubenswrapper[4985]: I0127 08:55:02.378729 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:02 crc kubenswrapper[4985]: I0127 08:55:02.378957 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:02 crc kubenswrapper[4985]: I0127 08:55:02.379187 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:02 crc kubenswrapper[4985]: I0127 08:55:02.379412 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:02Z","lastTransitionTime":"2026-01-27T08:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:55:02 crc kubenswrapper[4985]: I0127 08:55:02.451867 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 07:20:42.775603816 +0000 UTC Jan 27 08:55:02 crc kubenswrapper[4985]: I0127 08:55:02.452678 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 08:55:02 crc kubenswrapper[4985]: I0127 08:55:02.452803 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-cscdv" Jan 27 08:55:02 crc kubenswrapper[4985]: E0127 08:55:02.453166 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 08:55:02 crc kubenswrapper[4985]: E0127 08:55:02.453235 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cscdv" podUID="5c870945-eecc-4954-a91b-d02cef8f98e2" Jan 27 08:55:02 crc kubenswrapper[4985]: I0127 08:55:02.453550 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 08:55:02 crc kubenswrapper[4985]: E0127 08:55:02.453693 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 08:55:02 crc kubenswrapper[4985]: I0127 08:55:02.453771 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 08:55:02 crc kubenswrapper[4985]: E0127 08:55:02.453876 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 08:55:02 crc kubenswrapper[4985]: I0127 08:55:02.471003 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 27 08:55:02 crc kubenswrapper[4985]: I0127 08:55:02.483899 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:02 crc kubenswrapper[4985]: I0127 08:55:02.484001 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:02 crc kubenswrapper[4985]: I0127 08:55:02.484021 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:02 crc kubenswrapper[4985]: I0127 08:55:02.484086 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:02 crc kubenswrapper[4985]: I0127 08:55:02.484109 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:02Z","lastTransitionTime":"2026-01-27T08:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:02 crc kubenswrapper[4985]: I0127 08:55:02.588238 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:02 crc kubenswrapper[4985]: I0127 08:55:02.588339 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:02 crc kubenswrapper[4985]: I0127 08:55:02.588434 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:02 crc kubenswrapper[4985]: I0127 08:55:02.588462 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:02 crc kubenswrapper[4985]: I0127 08:55:02.588483 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:02Z","lastTransitionTime":"2026-01-27T08:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:02 crc kubenswrapper[4985]: I0127 08:55:02.692565 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:02 crc kubenswrapper[4985]: I0127 08:55:02.692633 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:02 crc kubenswrapper[4985]: I0127 08:55:02.692653 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:02 crc kubenswrapper[4985]: I0127 08:55:02.692679 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:02 crc kubenswrapper[4985]: I0127 08:55:02.692701 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:02Z","lastTransitionTime":"2026-01-27T08:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:02 crc kubenswrapper[4985]: I0127 08:55:02.795804 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:02 crc kubenswrapper[4985]: I0127 08:55:02.795865 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:02 crc kubenswrapper[4985]: I0127 08:55:02.795883 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:02 crc kubenswrapper[4985]: I0127 08:55:02.795908 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:02 crc kubenswrapper[4985]: I0127 08:55:02.795927 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:02Z","lastTransitionTime":"2026-01-27T08:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:02 crc kubenswrapper[4985]: I0127 08:55:02.898344 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:02 crc kubenswrapper[4985]: I0127 08:55:02.898457 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:02 crc kubenswrapper[4985]: I0127 08:55:02.898488 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:02 crc kubenswrapper[4985]: I0127 08:55:02.898547 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:02 crc kubenswrapper[4985]: I0127 08:55:02.898567 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:02Z","lastTransitionTime":"2026-01-27T08:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:03 crc kubenswrapper[4985]: I0127 08:55:03.001698 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:03 crc kubenswrapper[4985]: I0127 08:55:03.002070 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:03 crc kubenswrapper[4985]: I0127 08:55:03.002254 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:03 crc kubenswrapper[4985]: I0127 08:55:03.002406 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:03 crc kubenswrapper[4985]: I0127 08:55:03.002579 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:03Z","lastTransitionTime":"2026-01-27T08:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:03 crc kubenswrapper[4985]: I0127 08:55:03.106136 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:03 crc kubenswrapper[4985]: I0127 08:55:03.106195 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:03 crc kubenswrapper[4985]: I0127 08:55:03.106219 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:03 crc kubenswrapper[4985]: I0127 08:55:03.106250 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:03 crc kubenswrapper[4985]: I0127 08:55:03.106276 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:03Z","lastTransitionTime":"2026-01-27T08:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:03 crc kubenswrapper[4985]: I0127 08:55:03.210128 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:03 crc kubenswrapper[4985]: I0127 08:55:03.210203 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:03 crc kubenswrapper[4985]: I0127 08:55:03.210221 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:03 crc kubenswrapper[4985]: I0127 08:55:03.210255 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:03 crc kubenswrapper[4985]: I0127 08:55:03.210277 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:03Z","lastTransitionTime":"2026-01-27T08:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:03 crc kubenswrapper[4985]: I0127 08:55:03.313707 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:03 crc kubenswrapper[4985]: I0127 08:55:03.314161 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:03 crc kubenswrapper[4985]: I0127 08:55:03.314436 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:03 crc kubenswrapper[4985]: I0127 08:55:03.314671 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:03 crc kubenswrapper[4985]: I0127 08:55:03.314870 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:03Z","lastTransitionTime":"2026-01-27T08:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:03 crc kubenswrapper[4985]: I0127 08:55:03.420741 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:03 crc kubenswrapper[4985]: I0127 08:55:03.420809 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:03 crc kubenswrapper[4985]: I0127 08:55:03.420834 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:03 crc kubenswrapper[4985]: I0127 08:55:03.420863 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:03 crc kubenswrapper[4985]: I0127 08:55:03.420887 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:03Z","lastTransitionTime":"2026-01-27T08:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:03 crc kubenswrapper[4985]: I0127 08:55:03.452496 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 19:23:08.869551441 +0000 UTC Jan 27 08:55:03 crc kubenswrapper[4985]: I0127 08:55:03.523884 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:03 crc kubenswrapper[4985]: I0127 08:55:03.523952 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:03 crc kubenswrapper[4985]: I0127 08:55:03.523969 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:03 crc kubenswrapper[4985]: I0127 08:55:03.523996 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:03 crc kubenswrapper[4985]: I0127 08:55:03.524015 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:03Z","lastTransitionTime":"2026-01-27T08:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:03 crc kubenswrapper[4985]: I0127 08:55:03.627624 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:03 crc kubenswrapper[4985]: I0127 08:55:03.627695 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:03 crc kubenswrapper[4985]: I0127 08:55:03.627711 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:03 crc kubenswrapper[4985]: I0127 08:55:03.627739 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:03 crc kubenswrapper[4985]: I0127 08:55:03.627767 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:03Z","lastTransitionTime":"2026-01-27T08:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:03 crc kubenswrapper[4985]: I0127 08:55:03.730957 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:03 crc kubenswrapper[4985]: I0127 08:55:03.730995 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:03 crc kubenswrapper[4985]: I0127 08:55:03.731006 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:03 crc kubenswrapper[4985]: I0127 08:55:03.731023 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:03 crc kubenswrapper[4985]: I0127 08:55:03.731035 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:03Z","lastTransitionTime":"2026-01-27T08:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:03 crc kubenswrapper[4985]: I0127 08:55:03.835026 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:03 crc kubenswrapper[4985]: I0127 08:55:03.835413 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:03 crc kubenswrapper[4985]: I0127 08:55:03.835619 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:03 crc kubenswrapper[4985]: I0127 08:55:03.836021 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:03 crc kubenswrapper[4985]: I0127 08:55:03.836249 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:03Z","lastTransitionTime":"2026-01-27T08:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:03 crc kubenswrapper[4985]: I0127 08:55:03.939858 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:03 crc kubenswrapper[4985]: I0127 08:55:03.940234 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:03 crc kubenswrapper[4985]: I0127 08:55:03.940332 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:03 crc kubenswrapper[4985]: I0127 08:55:03.940436 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:03 crc kubenswrapper[4985]: I0127 08:55:03.940540 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:03Z","lastTransitionTime":"2026-01-27T08:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:04 crc kubenswrapper[4985]: I0127 08:55:04.051166 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:04 crc kubenswrapper[4985]: I0127 08:55:04.051244 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:04 crc kubenswrapper[4985]: I0127 08:55:04.051261 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:04 crc kubenswrapper[4985]: I0127 08:55:04.051285 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:04 crc kubenswrapper[4985]: I0127 08:55:04.051301 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:04Z","lastTransitionTime":"2026-01-27T08:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:04 crc kubenswrapper[4985]: I0127 08:55:04.103073 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:04 crc kubenswrapper[4985]: I0127 08:55:04.103130 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:04 crc kubenswrapper[4985]: I0127 08:55:04.103144 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:04 crc kubenswrapper[4985]: I0127 08:55:04.103164 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:04 crc kubenswrapper[4985]: I0127 08:55:04.103176 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:04Z","lastTransitionTime":"2026-01-27T08:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:04 crc kubenswrapper[4985]: E0127 08:55:04.122558 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:55:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:55:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:55:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:55:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:55:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:55:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:55:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:55:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"095ded87-0bbb-47a5-b76f-f5bb300a00ab\\\",\\\"systemUUID\\\":\\\"66a0621c-9cbd-4c42-8f6a-941d6ebd53fb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:55:04Z is after 2025-08-24T17:21:41Z" Jan 27 08:55:04 crc kubenswrapper[4985]: I0127 08:55:04.129146 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:04 crc kubenswrapper[4985]: I0127 08:55:04.129222 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:04 crc kubenswrapper[4985]: I0127 08:55:04.129324 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:04 crc kubenswrapper[4985]: I0127 08:55:04.129400 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:04 crc kubenswrapper[4985]: I0127 08:55:04.129432 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:04Z","lastTransitionTime":"2026-01-27T08:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:04 crc kubenswrapper[4985]: E0127 08:55:04.149477 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:55:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:55:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:55:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:55:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:55:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:55:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:55:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:55:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"095ded87-0bbb-47a5-b76f-f5bb300a00ab\\\",\\\"systemUUID\\\":\\\"66a0621c-9cbd-4c42-8f6a-941d6ebd53fb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:55:04Z is after 2025-08-24T17:21:41Z" Jan 27 08:55:04 crc kubenswrapper[4985]: I0127 08:55:04.154594 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:04 crc kubenswrapper[4985]: I0127 08:55:04.154662 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:04 crc kubenswrapper[4985]: I0127 08:55:04.154680 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:04 crc kubenswrapper[4985]: I0127 08:55:04.154709 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:04 crc kubenswrapper[4985]: I0127 08:55:04.154727 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:04Z","lastTransitionTime":"2026-01-27T08:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:04 crc kubenswrapper[4985]: E0127 08:55:04.175149 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:55:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:55:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:55:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:55:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:55:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:55:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:55:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:55:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"095ded87-0bbb-47a5-b76f-f5bb300a00ab\\\",\\\"systemUUID\\\":\\\"66a0621c-9cbd-4c42-8f6a-941d6ebd53fb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:55:04Z is after 2025-08-24T17:21:41Z" Jan 27 08:55:04 crc kubenswrapper[4985]: I0127 08:55:04.180445 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:04 crc kubenswrapper[4985]: I0127 08:55:04.180494 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:04 crc kubenswrapper[4985]: I0127 08:55:04.180525 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:04 crc kubenswrapper[4985]: I0127 08:55:04.180547 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:04 crc kubenswrapper[4985]: I0127 08:55:04.180559 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:04Z","lastTransitionTime":"2026-01-27T08:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:04 crc kubenswrapper[4985]: E0127 08:55:04.197831 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:55:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:55:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:55:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:55:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:55:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:55:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:55:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:55:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"095ded87-0bbb-47a5-b76f-f5bb300a00ab\\\",\\\"systemUUID\\\":\\\"66a0621c-9cbd-4c42-8f6a-941d6ebd53fb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:55:04Z is after 2025-08-24T17:21:41Z" Jan 27 08:55:04 crc kubenswrapper[4985]: I0127 08:55:04.202304 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:04 crc kubenswrapper[4985]: I0127 08:55:04.202344 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:04 crc kubenswrapper[4985]: I0127 08:55:04.202359 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:04 crc kubenswrapper[4985]: I0127 08:55:04.202380 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:04 crc kubenswrapper[4985]: I0127 08:55:04.202400 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:04Z","lastTransitionTime":"2026-01-27T08:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:04 crc kubenswrapper[4985]: E0127 08:55:04.217872 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:55:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:55:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:55:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:55:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:55:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:55:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T08:55:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T08:55:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"095ded87-0bbb-47a5-b76f-f5bb300a00ab\\\",\\\"systemUUID\\\":\\\"66a0621c-9cbd-4c42-8f6a-941d6ebd53fb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:55:04Z is after 2025-08-24T17:21:41Z" Jan 27 08:55:04 crc kubenswrapper[4985]: E0127 08:55:04.218029 4985 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 08:55:04 crc kubenswrapper[4985]: I0127 08:55:04.220105 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:04 crc kubenswrapper[4985]: I0127 08:55:04.220204 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:04 crc kubenswrapper[4985]: I0127 08:55:04.220226 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:04 crc kubenswrapper[4985]: I0127 08:55:04.220846 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:04 crc kubenswrapper[4985]: I0127 08:55:04.221107 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:04Z","lastTransitionTime":"2026-01-27T08:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:04 crc kubenswrapper[4985]: I0127 08:55:04.323796 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:04 crc kubenswrapper[4985]: I0127 08:55:04.323870 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:04 crc kubenswrapper[4985]: I0127 08:55:04.323893 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:04 crc kubenswrapper[4985]: I0127 08:55:04.323921 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:04 crc kubenswrapper[4985]: I0127 08:55:04.323939 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:04Z","lastTransitionTime":"2026-01-27T08:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:04 crc kubenswrapper[4985]: I0127 08:55:04.426735 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:04 crc kubenswrapper[4985]: I0127 08:55:04.426806 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:04 crc kubenswrapper[4985]: I0127 08:55:04.426821 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:04 crc kubenswrapper[4985]: I0127 08:55:04.426844 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:04 crc kubenswrapper[4985]: I0127 08:55:04.426861 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:04Z","lastTransitionTime":"2026-01-27T08:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:55:04 crc kubenswrapper[4985]: I0127 08:55:04.451968 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 08:55:04 crc kubenswrapper[4985]: E0127 08:55:04.452194 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 08:55:04 crc kubenswrapper[4985]: I0127 08:55:04.452447 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 08:55:04 crc kubenswrapper[4985]: I0127 08:55:04.452506 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cscdv" Jan 27 08:55:04 crc kubenswrapper[4985]: I0127 08:55:04.452672 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 00:35:05.196717895 +0000 UTC Jan 27 08:55:04 crc kubenswrapper[4985]: E0127 08:55:04.452644 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 08:55:04 crc kubenswrapper[4985]: I0127 08:55:04.452617 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 08:55:04 crc kubenswrapper[4985]: E0127 08:55:04.452946 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cscdv" podUID="5c870945-eecc-4954-a91b-d02cef8f98e2" Jan 27 08:55:04 crc kubenswrapper[4985]: E0127 08:55:04.453127 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 08:55:04 crc kubenswrapper[4985]: I0127 08:55:04.530553 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:04 crc kubenswrapper[4985]: I0127 08:55:04.530982 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:04 crc kubenswrapper[4985]: I0127 08:55:04.531094 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:04 crc kubenswrapper[4985]: I0127 08:55:04.531200 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:04 crc kubenswrapper[4985]: I0127 08:55:04.531269 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:04Z","lastTransitionTime":"2026-01-27T08:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:04 crc kubenswrapper[4985]: I0127 08:55:04.635767 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:04 crc kubenswrapper[4985]: I0127 08:55:04.636270 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:04 crc kubenswrapper[4985]: I0127 08:55:04.636393 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:04 crc kubenswrapper[4985]: I0127 08:55:04.636568 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:04 crc kubenswrapper[4985]: I0127 08:55:04.636685 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:04Z","lastTransitionTime":"2026-01-27T08:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:04 crc kubenswrapper[4985]: I0127 08:55:04.740165 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:04 crc kubenswrapper[4985]: I0127 08:55:04.740640 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:04 crc kubenswrapper[4985]: I0127 08:55:04.740678 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:04 crc kubenswrapper[4985]: I0127 08:55:04.740702 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:04 crc kubenswrapper[4985]: I0127 08:55:04.740720 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:04Z","lastTransitionTime":"2026-01-27T08:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:04 crc kubenswrapper[4985]: I0127 08:55:04.844163 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:04 crc kubenswrapper[4985]: I0127 08:55:04.844227 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:04 crc kubenswrapper[4985]: I0127 08:55:04.844240 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:04 crc kubenswrapper[4985]: I0127 08:55:04.844262 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:04 crc kubenswrapper[4985]: I0127 08:55:04.844284 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:04Z","lastTransitionTime":"2026-01-27T08:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:04 crc kubenswrapper[4985]: I0127 08:55:04.947232 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:04 crc kubenswrapper[4985]: I0127 08:55:04.947281 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:04 crc kubenswrapper[4985]: I0127 08:55:04.947293 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:04 crc kubenswrapper[4985]: I0127 08:55:04.947312 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:04 crc kubenswrapper[4985]: I0127 08:55:04.947324 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:04Z","lastTransitionTime":"2026-01-27T08:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:05 crc kubenswrapper[4985]: I0127 08:55:05.049718 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:05 crc kubenswrapper[4985]: I0127 08:55:05.049763 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:05 crc kubenswrapper[4985]: I0127 08:55:05.049775 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:05 crc kubenswrapper[4985]: I0127 08:55:05.049793 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:05 crc kubenswrapper[4985]: I0127 08:55:05.049805 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:05Z","lastTransitionTime":"2026-01-27T08:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:05 crc kubenswrapper[4985]: I0127 08:55:05.152480 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:05 crc kubenswrapper[4985]: I0127 08:55:05.152553 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:05 crc kubenswrapper[4985]: I0127 08:55:05.152568 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:05 crc kubenswrapper[4985]: I0127 08:55:05.152589 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:05 crc kubenswrapper[4985]: I0127 08:55:05.152618 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:05Z","lastTransitionTime":"2026-01-27T08:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:05 crc kubenswrapper[4985]: I0127 08:55:05.256119 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:05 crc kubenswrapper[4985]: I0127 08:55:05.256157 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:05 crc kubenswrapper[4985]: I0127 08:55:05.256168 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:05 crc kubenswrapper[4985]: I0127 08:55:05.256189 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:05 crc kubenswrapper[4985]: I0127 08:55:05.256203 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:05Z","lastTransitionTime":"2026-01-27T08:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:05 crc kubenswrapper[4985]: I0127 08:55:05.360172 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:05 crc kubenswrapper[4985]: I0127 08:55:05.360234 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:05 crc kubenswrapper[4985]: I0127 08:55:05.360248 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:05 crc kubenswrapper[4985]: I0127 08:55:05.360272 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:05 crc kubenswrapper[4985]: I0127 08:55:05.360287 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:05Z","lastTransitionTime":"2026-01-27T08:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:05 crc kubenswrapper[4985]: I0127 08:55:05.452894 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 07:45:46.865281509 +0000 UTC Jan 27 08:55:05 crc kubenswrapper[4985]: I0127 08:55:05.463541 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:05 crc kubenswrapper[4985]: I0127 08:55:05.463622 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:05 crc kubenswrapper[4985]: I0127 08:55:05.463676 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:05 crc kubenswrapper[4985]: I0127 08:55:05.463696 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:05 crc kubenswrapper[4985]: I0127 08:55:05.463737 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:05Z","lastTransitionTime":"2026-01-27T08:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:05 crc kubenswrapper[4985]: I0127 08:55:05.567009 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:05 crc kubenswrapper[4985]: I0127 08:55:05.567064 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:05 crc kubenswrapper[4985]: I0127 08:55:05.567078 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:05 crc kubenswrapper[4985]: I0127 08:55:05.567104 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:05 crc kubenswrapper[4985]: I0127 08:55:05.567121 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:05Z","lastTransitionTime":"2026-01-27T08:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:05 crc kubenswrapper[4985]: I0127 08:55:05.669710 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:05 crc kubenswrapper[4985]: I0127 08:55:05.669750 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:05 crc kubenswrapper[4985]: I0127 08:55:05.669758 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:05 crc kubenswrapper[4985]: I0127 08:55:05.669774 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:05 crc kubenswrapper[4985]: I0127 08:55:05.669784 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:05Z","lastTransitionTime":"2026-01-27T08:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:05 crc kubenswrapper[4985]: I0127 08:55:05.772770 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:05 crc kubenswrapper[4985]: I0127 08:55:05.772832 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:05 crc kubenswrapper[4985]: I0127 08:55:05.772844 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:05 crc kubenswrapper[4985]: I0127 08:55:05.772867 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:05 crc kubenswrapper[4985]: I0127 08:55:05.772885 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:05Z","lastTransitionTime":"2026-01-27T08:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:05 crc kubenswrapper[4985]: I0127 08:55:05.878115 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:05 crc kubenswrapper[4985]: I0127 08:55:05.878217 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:05 crc kubenswrapper[4985]: I0127 08:55:05.878231 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:05 crc kubenswrapper[4985]: I0127 08:55:05.878254 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:05 crc kubenswrapper[4985]: I0127 08:55:05.878268 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:05Z","lastTransitionTime":"2026-01-27T08:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:05 crc kubenswrapper[4985]: I0127 08:55:05.981947 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:05 crc kubenswrapper[4985]: I0127 08:55:05.982034 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:05 crc kubenswrapper[4985]: I0127 08:55:05.982053 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:05 crc kubenswrapper[4985]: I0127 08:55:05.982077 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:05 crc kubenswrapper[4985]: I0127 08:55:05.982095 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:05Z","lastTransitionTime":"2026-01-27T08:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:06 crc kubenswrapper[4985]: I0127 08:55:06.085995 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:06 crc kubenswrapper[4985]: I0127 08:55:06.086071 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:06 crc kubenswrapper[4985]: I0127 08:55:06.086093 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:06 crc kubenswrapper[4985]: I0127 08:55:06.086123 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:06 crc kubenswrapper[4985]: I0127 08:55:06.086143 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:06Z","lastTransitionTime":"2026-01-27T08:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:06 crc kubenswrapper[4985]: I0127 08:55:06.189416 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:06 crc kubenswrapper[4985]: I0127 08:55:06.189475 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:06 crc kubenswrapper[4985]: I0127 08:55:06.189497 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:06 crc kubenswrapper[4985]: I0127 08:55:06.189558 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:06 crc kubenswrapper[4985]: I0127 08:55:06.189583 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:06Z","lastTransitionTime":"2026-01-27T08:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:06 crc kubenswrapper[4985]: I0127 08:55:06.293263 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:06 crc kubenswrapper[4985]: I0127 08:55:06.293324 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:06 crc kubenswrapper[4985]: I0127 08:55:06.293342 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:06 crc kubenswrapper[4985]: I0127 08:55:06.293365 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:06 crc kubenswrapper[4985]: I0127 08:55:06.293381 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:06Z","lastTransitionTime":"2026-01-27T08:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:06 crc kubenswrapper[4985]: I0127 08:55:06.398348 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:06 crc kubenswrapper[4985]: I0127 08:55:06.398439 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:06 crc kubenswrapper[4985]: I0127 08:55:06.398462 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:06 crc kubenswrapper[4985]: I0127 08:55:06.398489 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:06 crc kubenswrapper[4985]: I0127 08:55:06.398508 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:06Z","lastTransitionTime":"2026-01-27T08:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:55:06 crc kubenswrapper[4985]: I0127 08:55:06.451799 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 08:55:06 crc kubenswrapper[4985]: I0127 08:55:06.451929 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 08:55:06 crc kubenswrapper[4985]: E0127 08:55:06.452723 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 08:55:06 crc kubenswrapper[4985]: I0127 08:55:06.452010 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cscdv" Jan 27 08:55:06 crc kubenswrapper[4985]: I0127 08:55:06.451952 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 08:55:06 crc kubenswrapper[4985]: E0127 08:55:06.452893 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cscdv" podUID="5c870945-eecc-4954-a91b-d02cef8f98e2" Jan 27 08:55:06 crc kubenswrapper[4985]: E0127 08:55:06.453090 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 08:55:06 crc kubenswrapper[4985]: I0127 08:55:06.453123 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 04:06:09.682933732 +0000 UTC Jan 27 08:55:06 crc kubenswrapper[4985]: E0127 08:55:06.453548 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 08:55:06 crc kubenswrapper[4985]: I0127 08:55:06.474974 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9155a9e-2bdf-4f6f-baa7-0d7c6baac37d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f776820a00f7ca6b6867a188ee633debccb2bc22aaff8c659e452c2667933e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba179c4dde0de76f4a8dbfb8ceb98d2e0dbead5e955bd71171914fd913894412\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dd58dd369dc5654aa8ab2d34c77a69607b51e97ae36947b88caf6a2de7c5fbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd67389a909eaadfb147d17b98a2147bb96aea606ba994568389cb5be9ee6f93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b491f71d942363b5d2bab13a5219b0eb9b7f7617c0978bfe012946aa241a13c1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T08:53:55Z\\\"
,\\\"message\\\":\\\" 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 08:53:55.866201 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 08:53:55.866247 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 08:53:55.866252 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 08:53:55.866257 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0127 08:53:55.866955 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-231041040/tls.crt::/tmp/serving-cert-231041040/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769504019\\\\\\\\\\\\\\\" (2026-01-27 08:53:39 +0000 UTC to 2026-02-26 08:53:40 +0000 UTC (now=2026-01-27 08:53:55.866923826 +0000 UTC))\\\\\\\"\\\\nI0127 08:53:55.867136 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769504030\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769504030\\\\\\\\\\\\\\\" (2026-01-27 07:53:50 +0000 UTC to 2027-01-27 07:53:50 +0000 UTC (now=2026-01-27 08:53:55.867111136 +0000 UTC))\\\\\\\"\\\\nI0127 08:53:55.867161 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 08:53:55.867183 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0127 
08:53:55.867215 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-231041040/tls.crt::/tmp/serving-cert-231041040/tls.key\\\\\\\"\\\\nI0127 08:53:55.867343 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0127 08:53:55.868835 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b09da7bae5a0f15ab2c5463373193603d1412170828b99490007cee7b067df63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T08:53:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b
335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T08:53:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T08:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T08:53:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:55:06Z is after 2025-08-24T17:21:41Z" Jan 27 08:55:06 crc kubenswrapper[4985]: I0127 08:55:06.495522 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T08:53:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T08:55:06Z is after 2025-08-24T17:21:41Z" Jan 27 08:55:06 crc kubenswrapper[4985]: I0127 08:55:06.501839 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:06 crc kubenswrapper[4985]: I0127 08:55:06.501892 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 08:55:06 crc kubenswrapper[4985]: I0127 08:55:06.501902 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:06 crc kubenswrapper[4985]: I0127 08:55:06.501923 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:06 crc kubenswrapper[4985]: I0127 08:55:06.501935 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:06Z","lastTransitionTime":"2026-01-27T08:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:55:06 crc kubenswrapper[4985]: I0127 08:55:06.589545 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=71.589484116 podStartE2EDuration="1m11.589484116s" podCreationTimestamp="2026-01-27 08:53:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:55:06.588458045 +0000 UTC m=+90.879552896" watchObservedRunningTime="2026-01-27 08:55:06.589484116 +0000 UTC m=+90.880578997" Jan 27 08:55:06 crc kubenswrapper[4985]: I0127 08:55:06.605688 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:06 crc kubenswrapper[4985]: I0127 08:55:06.605766 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:06 crc kubenswrapper[4985]: I0127 08:55:06.605783 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:06 crc 
kubenswrapper[4985]: I0127 08:55:06.605815 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:06 crc kubenswrapper[4985]: I0127 08:55:06.605836 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:06Z","lastTransitionTime":"2026-01-27T08:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:55:06 crc kubenswrapper[4985]: I0127 08:55:06.658892 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podStartSLOduration=64.658865346 podStartE2EDuration="1m4.658865346s" podCreationTimestamp="2026-01-27 08:54:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:55:06.658764733 +0000 UTC m=+90.949859574" watchObservedRunningTime="2026-01-27 08:55:06.658865346 +0000 UTC m=+90.949960187" Jan 27 08:55:06 crc kubenswrapper[4985]: I0127 08:55:06.675397 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-dlccz" podStartSLOduration=64.675371352 podStartE2EDuration="1m4.675371352s" podCreationTimestamp="2026-01-27 08:54:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:55:06.675009211 +0000 UTC m=+90.966104072" watchObservedRunningTime="2026-01-27 08:55:06.675371352 +0000 UTC m=+90.966466193" Jan 27 08:55:06 crc kubenswrapper[4985]: I0127 08:55:06.698703 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" 
podStartSLOduration=69.69868416 podStartE2EDuration="1m9.69868416s" podCreationTimestamp="2026-01-27 08:53:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:55:06.697937398 +0000 UTC m=+90.989032259" watchObservedRunningTime="2026-01-27 08:55:06.69868416 +0000 UTC m=+90.989779001" Jan 27 08:55:06 crc kubenswrapper[4985]: I0127 08:55:06.708640 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:06 crc kubenswrapper[4985]: I0127 08:55:06.708693 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:06 crc kubenswrapper[4985]: I0127 08:55:06.708707 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:06 crc kubenswrapper[4985]: I0127 08:55:06.708731 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:06 crc kubenswrapper[4985]: I0127 08:55:06.708742 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:06Z","lastTransitionTime":"2026-01-27T08:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:06 crc kubenswrapper[4985]: I0127 08:55:06.712333 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=37.712313249 podStartE2EDuration="37.712313249s" podCreationTimestamp="2026-01-27 08:54:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:55:06.712291558 +0000 UTC m=+91.003386439" watchObservedRunningTime="2026-01-27 08:55:06.712313249 +0000 UTC m=+91.003408090" Jan 27 08:55:06 crc kubenswrapper[4985]: I0127 08:55:06.744036 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-5z8px" podStartSLOduration=64.744012699 podStartE2EDuration="1m4.744012699s" podCreationTimestamp="2026-01-27 08:54:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:55:06.72499099 +0000 UTC m=+91.016085831" watchObservedRunningTime="2026-01-27 08:55:06.744012699 +0000 UTC m=+91.035107540" Jan 27 08:55:06 crc kubenswrapper[4985]: I0127 08:55:06.744557 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-rfnvj" podStartSLOduration=64.744548636 podStartE2EDuration="1m4.744548636s" podCreationTimestamp="2026-01-27 08:54:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:55:06.742736302 +0000 UTC m=+91.033831163" watchObservedRunningTime="2026-01-27 08:55:06.744548636 +0000 UTC m=+91.035643497" Jan 27 08:55:06 crc kubenswrapper[4985]: I0127 08:55:06.758766 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s74hq" 
podStartSLOduration=64.758732122 podStartE2EDuration="1m4.758732122s" podCreationTimestamp="2026-01-27 08:54:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:55:06.75868844 +0000 UTC m=+91.049783301" watchObservedRunningTime="2026-01-27 08:55:06.758732122 +0000 UTC m=+91.049826993" Jan 27 08:55:06 crc kubenswrapper[4985]: I0127 08:55:06.772321 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=4.772281237 podStartE2EDuration="4.772281237s" podCreationTimestamp="2026-01-27 08:55:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:55:06.770550966 +0000 UTC m=+91.061645817" watchObservedRunningTime="2026-01-27 08:55:06.772281237 +0000 UTC m=+91.063376078" Jan 27 08:55:06 crc kubenswrapper[4985]: I0127 08:55:06.812069 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:06 crc kubenswrapper[4985]: I0127 08:55:06.812133 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:06 crc kubenswrapper[4985]: I0127 08:55:06.812146 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:06 crc kubenswrapper[4985]: I0127 08:55:06.812164 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:06 crc kubenswrapper[4985]: I0127 08:55:06.812175 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:06Z","lastTransitionTime":"2026-01-27T08:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:55:06 crc kubenswrapper[4985]: I0127 08:55:06.915013 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:06 crc kubenswrapper[4985]: I0127 08:55:06.915054 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:06 crc kubenswrapper[4985]: I0127 08:55:06.915064 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:06 crc kubenswrapper[4985]: I0127 08:55:06.915086 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:06 crc kubenswrapper[4985]: I0127 08:55:06.915100 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:06Z","lastTransitionTime":"2026-01-27T08:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:07 crc kubenswrapper[4985]: I0127 08:55:07.018552 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:07 crc kubenswrapper[4985]: I0127 08:55:07.018617 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:07 crc kubenswrapper[4985]: I0127 08:55:07.018629 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:07 crc kubenswrapper[4985]: I0127 08:55:07.018654 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:07 crc kubenswrapper[4985]: I0127 08:55:07.018668 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:07Z","lastTransitionTime":"2026-01-27T08:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:07 crc kubenswrapper[4985]: I0127 08:55:07.122270 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:07 crc kubenswrapper[4985]: I0127 08:55:07.122348 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:07 crc kubenswrapper[4985]: I0127 08:55:07.122373 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:07 crc kubenswrapper[4985]: I0127 08:55:07.122408 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:07 crc kubenswrapper[4985]: I0127 08:55:07.122436 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:07Z","lastTransitionTime":"2026-01-27T08:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:07 crc kubenswrapper[4985]: I0127 08:55:07.226134 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:07 crc kubenswrapper[4985]: I0127 08:55:07.226487 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:07 crc kubenswrapper[4985]: I0127 08:55:07.226631 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:07 crc kubenswrapper[4985]: I0127 08:55:07.226733 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:07 crc kubenswrapper[4985]: I0127 08:55:07.226820 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:07Z","lastTransitionTime":"2026-01-27T08:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:07 crc kubenswrapper[4985]: I0127 08:55:07.330152 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:07 crc kubenswrapper[4985]: I0127 08:55:07.330240 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:07 crc kubenswrapper[4985]: I0127 08:55:07.330260 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:07 crc kubenswrapper[4985]: I0127 08:55:07.330287 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:07 crc kubenswrapper[4985]: I0127 08:55:07.330305 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:07Z","lastTransitionTime":"2026-01-27T08:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:07 crc kubenswrapper[4985]: I0127 08:55:07.433225 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:07 crc kubenswrapper[4985]: I0127 08:55:07.433254 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:07 crc kubenswrapper[4985]: I0127 08:55:07.433262 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:07 crc kubenswrapper[4985]: I0127 08:55:07.433277 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:07 crc kubenswrapper[4985]: I0127 08:55:07.433286 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:07Z","lastTransitionTime":"2026-01-27T08:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:07 crc kubenswrapper[4985]: I0127 08:55:07.453588 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 18:20:28.795332365 +0000 UTC Jan 27 08:55:07 crc kubenswrapper[4985]: I0127 08:55:07.536444 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:07 crc kubenswrapper[4985]: I0127 08:55:07.536477 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:07 crc kubenswrapper[4985]: I0127 08:55:07.536485 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:07 crc kubenswrapper[4985]: I0127 08:55:07.536500 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:07 crc kubenswrapper[4985]: I0127 08:55:07.536531 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:07Z","lastTransitionTime":"2026-01-27T08:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:07 crc kubenswrapper[4985]: I0127 08:55:07.640046 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:07 crc kubenswrapper[4985]: I0127 08:55:07.640119 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:07 crc kubenswrapper[4985]: I0127 08:55:07.640140 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:07 crc kubenswrapper[4985]: I0127 08:55:07.640170 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:07 crc kubenswrapper[4985]: I0127 08:55:07.640191 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:07Z","lastTransitionTime":"2026-01-27T08:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:07 crc kubenswrapper[4985]: I0127 08:55:07.743358 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:07 crc kubenswrapper[4985]: I0127 08:55:07.743402 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:07 crc kubenswrapper[4985]: I0127 08:55:07.743412 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:07 crc kubenswrapper[4985]: I0127 08:55:07.743435 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:07 crc kubenswrapper[4985]: I0127 08:55:07.743448 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:07Z","lastTransitionTime":"2026-01-27T08:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:07 crc kubenswrapper[4985]: I0127 08:55:07.846746 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:07 crc kubenswrapper[4985]: I0127 08:55:07.846798 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:07 crc kubenswrapper[4985]: I0127 08:55:07.846814 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:07 crc kubenswrapper[4985]: I0127 08:55:07.846834 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:07 crc kubenswrapper[4985]: I0127 08:55:07.846848 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:07Z","lastTransitionTime":"2026-01-27T08:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:07 crc kubenswrapper[4985]: I0127 08:55:07.950046 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:07 crc kubenswrapper[4985]: I0127 08:55:07.950125 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:07 crc kubenswrapper[4985]: I0127 08:55:07.950134 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:07 crc kubenswrapper[4985]: I0127 08:55:07.950154 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:07 crc kubenswrapper[4985]: I0127 08:55:07.950165 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:07Z","lastTransitionTime":"2026-01-27T08:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:08 crc kubenswrapper[4985]: I0127 08:55:08.053756 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:08 crc kubenswrapper[4985]: I0127 08:55:08.053823 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:08 crc kubenswrapper[4985]: I0127 08:55:08.053842 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:08 crc kubenswrapper[4985]: I0127 08:55:08.053868 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:08 crc kubenswrapper[4985]: I0127 08:55:08.053888 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:08Z","lastTransitionTime":"2026-01-27T08:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:08 crc kubenswrapper[4985]: I0127 08:55:08.156577 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:08 crc kubenswrapper[4985]: I0127 08:55:08.156664 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:08 crc kubenswrapper[4985]: I0127 08:55:08.156689 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:08 crc kubenswrapper[4985]: I0127 08:55:08.156719 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:08 crc kubenswrapper[4985]: I0127 08:55:08.156743 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:08Z","lastTransitionTime":"2026-01-27T08:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:08 crc kubenswrapper[4985]: I0127 08:55:08.260329 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:08 crc kubenswrapper[4985]: I0127 08:55:08.260383 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:08 crc kubenswrapper[4985]: I0127 08:55:08.260394 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:08 crc kubenswrapper[4985]: I0127 08:55:08.260414 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:08 crc kubenswrapper[4985]: I0127 08:55:08.260450 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:08Z","lastTransitionTime":"2026-01-27T08:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:08 crc kubenswrapper[4985]: I0127 08:55:08.363108 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:08 crc kubenswrapper[4985]: I0127 08:55:08.363166 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:08 crc kubenswrapper[4985]: I0127 08:55:08.363180 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:08 crc kubenswrapper[4985]: I0127 08:55:08.363200 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:08 crc kubenswrapper[4985]: I0127 08:55:08.363218 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:08Z","lastTransitionTime":"2026-01-27T08:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:55:08 crc kubenswrapper[4985]: I0127 08:55:08.453540 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 08:55:08 crc kubenswrapper[4985]: I0127 08:55:08.453662 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 08:55:08 crc kubenswrapper[4985]: E0127 08:55:08.453691 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 08:55:08 crc kubenswrapper[4985]: I0127 08:55:08.453742 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 20:06:46.447580273 +0000 UTC Jan 27 08:55:08 crc kubenswrapper[4985]: I0127 08:55:08.453758 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cscdv" Jan 27 08:55:08 crc kubenswrapper[4985]: E0127 08:55:08.453858 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 08:55:08 crc kubenswrapper[4985]: I0127 08:55:08.453888 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 08:55:08 crc kubenswrapper[4985]: E0127 08:55:08.454092 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cscdv" podUID="5c870945-eecc-4954-a91b-d02cef8f98e2" Jan 27 08:55:08 crc kubenswrapper[4985]: E0127 08:55:08.454127 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 08:55:08 crc kubenswrapper[4985]: I0127 08:55:08.466570 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:08 crc kubenswrapper[4985]: I0127 08:55:08.466676 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:08 crc kubenswrapper[4985]: I0127 08:55:08.466702 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:08 crc kubenswrapper[4985]: I0127 08:55:08.466731 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:08 crc kubenswrapper[4985]: I0127 08:55:08.466752 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:08Z","lastTransitionTime":"2026-01-27T08:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:08 crc kubenswrapper[4985]: I0127 08:55:08.570381 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:08 crc kubenswrapper[4985]: I0127 08:55:08.570446 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:08 crc kubenswrapper[4985]: I0127 08:55:08.570463 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:08 crc kubenswrapper[4985]: I0127 08:55:08.570488 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:08 crc kubenswrapper[4985]: I0127 08:55:08.570535 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:08Z","lastTransitionTime":"2026-01-27T08:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:08 crc kubenswrapper[4985]: I0127 08:55:08.673650 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:08 crc kubenswrapper[4985]: I0127 08:55:08.673726 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:08 crc kubenswrapper[4985]: I0127 08:55:08.673748 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:08 crc kubenswrapper[4985]: I0127 08:55:08.673781 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:08 crc kubenswrapper[4985]: I0127 08:55:08.673805 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:08Z","lastTransitionTime":"2026-01-27T08:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:08 crc kubenswrapper[4985]: I0127 08:55:08.776983 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:08 crc kubenswrapper[4985]: I0127 08:55:08.777042 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:08 crc kubenswrapper[4985]: I0127 08:55:08.777054 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:08 crc kubenswrapper[4985]: I0127 08:55:08.777076 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:08 crc kubenswrapper[4985]: I0127 08:55:08.777090 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:08Z","lastTransitionTime":"2026-01-27T08:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:08 crc kubenswrapper[4985]: I0127 08:55:08.880575 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:08 crc kubenswrapper[4985]: I0127 08:55:08.880625 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:08 crc kubenswrapper[4985]: I0127 08:55:08.880636 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:08 crc kubenswrapper[4985]: I0127 08:55:08.880658 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:08 crc kubenswrapper[4985]: I0127 08:55:08.880668 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:08Z","lastTransitionTime":"2026-01-27T08:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:08 crc kubenswrapper[4985]: I0127 08:55:08.984221 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:08 crc kubenswrapper[4985]: I0127 08:55:08.984311 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:08 crc kubenswrapper[4985]: I0127 08:55:08.984328 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:08 crc kubenswrapper[4985]: I0127 08:55:08.984354 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:08 crc kubenswrapper[4985]: I0127 08:55:08.984372 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:08Z","lastTransitionTime":"2026-01-27T08:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:09 crc kubenswrapper[4985]: I0127 08:55:09.087243 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:09 crc kubenswrapper[4985]: I0127 08:55:09.087288 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:09 crc kubenswrapper[4985]: I0127 08:55:09.087313 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:09 crc kubenswrapper[4985]: I0127 08:55:09.087331 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:09 crc kubenswrapper[4985]: I0127 08:55:09.087340 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:09Z","lastTransitionTime":"2026-01-27T08:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:09 crc kubenswrapper[4985]: I0127 08:55:09.191140 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:09 crc kubenswrapper[4985]: I0127 08:55:09.191222 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:09 crc kubenswrapper[4985]: I0127 08:55:09.191248 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:09 crc kubenswrapper[4985]: I0127 08:55:09.191281 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:09 crc kubenswrapper[4985]: I0127 08:55:09.191304 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:09Z","lastTransitionTime":"2026-01-27T08:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:09 crc kubenswrapper[4985]: I0127 08:55:09.294994 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:09 crc kubenswrapper[4985]: I0127 08:55:09.295037 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:09 crc kubenswrapper[4985]: I0127 08:55:09.295046 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:09 crc kubenswrapper[4985]: I0127 08:55:09.295062 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:09 crc kubenswrapper[4985]: I0127 08:55:09.295072 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:09Z","lastTransitionTime":"2026-01-27T08:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:09 crc kubenswrapper[4985]: I0127 08:55:09.398588 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:09 crc kubenswrapper[4985]: I0127 08:55:09.398660 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:09 crc kubenswrapper[4985]: I0127 08:55:09.398682 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:09 crc kubenswrapper[4985]: I0127 08:55:09.398714 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:09 crc kubenswrapper[4985]: I0127 08:55:09.398739 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:09Z","lastTransitionTime":"2026-01-27T08:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:09 crc kubenswrapper[4985]: I0127 08:55:09.453944 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 04:18:51.592578576 +0000 UTC Jan 27 08:55:09 crc kubenswrapper[4985]: I0127 08:55:09.502282 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:09 crc kubenswrapper[4985]: I0127 08:55:09.502664 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:09 crc kubenswrapper[4985]: I0127 08:55:09.502763 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:09 crc kubenswrapper[4985]: I0127 08:55:09.502982 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:09 crc kubenswrapper[4985]: I0127 08:55:09.503378 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:09Z","lastTransitionTime":"2026-01-27T08:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:09 crc kubenswrapper[4985]: I0127 08:55:09.607725 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:09 crc kubenswrapper[4985]: I0127 08:55:09.607794 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:09 crc kubenswrapper[4985]: I0127 08:55:09.607806 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:09 crc kubenswrapper[4985]: I0127 08:55:09.607826 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:09 crc kubenswrapper[4985]: I0127 08:55:09.607839 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:09Z","lastTransitionTime":"2026-01-27T08:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:09 crc kubenswrapper[4985]: I0127 08:55:09.712146 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:09 crc kubenswrapper[4985]: I0127 08:55:09.712195 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:09 crc kubenswrapper[4985]: I0127 08:55:09.712210 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:09 crc kubenswrapper[4985]: I0127 08:55:09.712230 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:09 crc kubenswrapper[4985]: I0127 08:55:09.712245 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:09Z","lastTransitionTime":"2026-01-27T08:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:09 crc kubenswrapper[4985]: I0127 08:55:09.815672 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:09 crc kubenswrapper[4985]: I0127 08:55:09.815748 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:09 crc kubenswrapper[4985]: I0127 08:55:09.815772 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:09 crc kubenswrapper[4985]: I0127 08:55:09.815803 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:09 crc kubenswrapper[4985]: I0127 08:55:09.815830 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:09Z","lastTransitionTime":"2026-01-27T08:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:09 crc kubenswrapper[4985]: I0127 08:55:09.920203 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:09 crc kubenswrapper[4985]: I0127 08:55:09.920257 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:09 crc kubenswrapper[4985]: I0127 08:55:09.920270 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:09 crc kubenswrapper[4985]: I0127 08:55:09.920290 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:09 crc kubenswrapper[4985]: I0127 08:55:09.920306 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:09Z","lastTransitionTime":"2026-01-27T08:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:10 crc kubenswrapper[4985]: I0127 08:55:10.023917 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:10 crc kubenswrapper[4985]: I0127 08:55:10.023996 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:10 crc kubenswrapper[4985]: I0127 08:55:10.024020 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:10 crc kubenswrapper[4985]: I0127 08:55:10.024054 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:10 crc kubenswrapper[4985]: I0127 08:55:10.024076 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:10Z","lastTransitionTime":"2026-01-27T08:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:10 crc kubenswrapper[4985]: I0127 08:55:10.127353 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:10 crc kubenswrapper[4985]: I0127 08:55:10.127421 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:10 crc kubenswrapper[4985]: I0127 08:55:10.127430 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:10 crc kubenswrapper[4985]: I0127 08:55:10.127452 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:10 crc kubenswrapper[4985]: I0127 08:55:10.127465 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:10Z","lastTransitionTime":"2026-01-27T08:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:10 crc kubenswrapper[4985]: I0127 08:55:10.230891 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:10 crc kubenswrapper[4985]: I0127 08:55:10.231404 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:10 crc kubenswrapper[4985]: I0127 08:55:10.231671 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:10 crc kubenswrapper[4985]: I0127 08:55:10.231941 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:10 crc kubenswrapper[4985]: I0127 08:55:10.232140 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:10Z","lastTransitionTime":"2026-01-27T08:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:10 crc kubenswrapper[4985]: I0127 08:55:10.335839 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:10 crc kubenswrapper[4985]: I0127 08:55:10.335966 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:10 crc kubenswrapper[4985]: I0127 08:55:10.335987 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:10 crc kubenswrapper[4985]: I0127 08:55:10.336016 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:10 crc kubenswrapper[4985]: I0127 08:55:10.336037 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:10Z","lastTransitionTime":"2026-01-27T08:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:10 crc kubenswrapper[4985]: I0127 08:55:10.440215 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:10 crc kubenswrapper[4985]: I0127 08:55:10.440283 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:10 crc kubenswrapper[4985]: I0127 08:55:10.440307 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:10 crc kubenswrapper[4985]: I0127 08:55:10.440337 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:10 crc kubenswrapper[4985]: I0127 08:55:10.440358 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:10Z","lastTransitionTime":"2026-01-27T08:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:55:10 crc kubenswrapper[4985]: I0127 08:55:10.451875 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 08:55:10 crc kubenswrapper[4985]: I0127 08:55:10.451969 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 08:55:10 crc kubenswrapper[4985]: I0127 08:55:10.451892 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 08:55:10 crc kubenswrapper[4985]: I0127 08:55:10.452058 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-cscdv" Jan 27 08:55:10 crc kubenswrapper[4985]: E0127 08:55:10.452170 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 08:55:10 crc kubenswrapper[4985]: E0127 08:55:10.452414 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 08:55:10 crc kubenswrapper[4985]: E0127 08:55:10.452593 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cscdv" podUID="5c870945-eecc-4954-a91b-d02cef8f98e2" Jan 27 08:55:10 crc kubenswrapper[4985]: E0127 08:55:10.452777 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 08:55:10 crc kubenswrapper[4985]: I0127 08:55:10.454127 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 03:51:02.240505349 +0000 UTC Jan 27 08:55:10 crc kubenswrapper[4985]: I0127 08:55:10.543887 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:10 crc kubenswrapper[4985]: I0127 08:55:10.543946 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:10 crc kubenswrapper[4985]: I0127 08:55:10.543957 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:10 crc kubenswrapper[4985]: I0127 08:55:10.543977 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:10 crc kubenswrapper[4985]: I0127 08:55:10.543989 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:10Z","lastTransitionTime":"2026-01-27T08:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:10 crc kubenswrapper[4985]: I0127 08:55:10.646973 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:10 crc kubenswrapper[4985]: I0127 08:55:10.647027 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:10 crc kubenswrapper[4985]: I0127 08:55:10.647039 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:10 crc kubenswrapper[4985]: I0127 08:55:10.647063 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:10 crc kubenswrapper[4985]: I0127 08:55:10.647078 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:10Z","lastTransitionTime":"2026-01-27T08:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:10 crc kubenswrapper[4985]: I0127 08:55:10.751039 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:10 crc kubenswrapper[4985]: I0127 08:55:10.751105 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:10 crc kubenswrapper[4985]: I0127 08:55:10.751126 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:10 crc kubenswrapper[4985]: I0127 08:55:10.751156 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:10 crc kubenswrapper[4985]: I0127 08:55:10.751178 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:10Z","lastTransitionTime":"2026-01-27T08:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:10 crc kubenswrapper[4985]: I0127 08:55:10.854249 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:10 crc kubenswrapper[4985]: I0127 08:55:10.854306 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:10 crc kubenswrapper[4985]: I0127 08:55:10.854324 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:10 crc kubenswrapper[4985]: I0127 08:55:10.854349 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:10 crc kubenswrapper[4985]: I0127 08:55:10.854368 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:10Z","lastTransitionTime":"2026-01-27T08:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:10 crc kubenswrapper[4985]: I0127 08:55:10.956739 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:10 crc kubenswrapper[4985]: I0127 08:55:10.956844 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:10 crc kubenswrapper[4985]: I0127 08:55:10.956867 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:10 crc kubenswrapper[4985]: I0127 08:55:10.956901 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:10 crc kubenswrapper[4985]: I0127 08:55:10.956924 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:10Z","lastTransitionTime":"2026-01-27T08:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:11 crc kubenswrapper[4985]: I0127 08:55:11.060008 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:11 crc kubenswrapper[4985]: I0127 08:55:11.060085 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:11 crc kubenswrapper[4985]: I0127 08:55:11.060098 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:11 crc kubenswrapper[4985]: I0127 08:55:11.060121 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:11 crc kubenswrapper[4985]: I0127 08:55:11.060134 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:11Z","lastTransitionTime":"2026-01-27T08:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:11 crc kubenswrapper[4985]: I0127 08:55:11.163055 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:11 crc kubenswrapper[4985]: I0127 08:55:11.163140 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:11 crc kubenswrapper[4985]: I0127 08:55:11.163162 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:11 crc kubenswrapper[4985]: I0127 08:55:11.163192 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:11 crc kubenswrapper[4985]: I0127 08:55:11.163217 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:11Z","lastTransitionTime":"2026-01-27T08:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:11 crc kubenswrapper[4985]: I0127 08:55:11.266322 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:11 crc kubenswrapper[4985]: I0127 08:55:11.266394 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:11 crc kubenswrapper[4985]: I0127 08:55:11.266418 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:11 crc kubenswrapper[4985]: I0127 08:55:11.266449 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:11 crc kubenswrapper[4985]: I0127 08:55:11.266471 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:11Z","lastTransitionTime":"2026-01-27T08:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:11 crc kubenswrapper[4985]: I0127 08:55:11.368990 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:11 crc kubenswrapper[4985]: I0127 08:55:11.369048 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:11 crc kubenswrapper[4985]: I0127 08:55:11.369065 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:11 crc kubenswrapper[4985]: I0127 08:55:11.369094 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:11 crc kubenswrapper[4985]: I0127 08:55:11.369110 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:11Z","lastTransitionTime":"2026-01-27T08:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:11 crc kubenswrapper[4985]: I0127 08:55:11.452263 4985 scope.go:117] "RemoveContainer" containerID="bef202d5ab4adb68897c4c8da62be022e95cd949fa91b3f80e3df449f0d72181" Jan 27 08:55:11 crc kubenswrapper[4985]: E0127 08:55:11.452575 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-kqdf4_openshift-ovn-kubernetes(c6239c91-d93d-4db8-ac4b-d44ddbc7c100)\"" pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" podUID="c6239c91-d93d-4db8-ac4b-d44ddbc7c100" Jan 27 08:55:11 crc kubenswrapper[4985]: I0127 08:55:11.454840 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 22:46:23.612169924 +0000 UTC Jan 27 08:55:11 crc kubenswrapper[4985]: I0127 08:55:11.471997 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:11 crc kubenswrapper[4985]: I0127 08:55:11.472065 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:11 crc kubenswrapper[4985]: I0127 08:55:11.472078 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:11 crc kubenswrapper[4985]: I0127 08:55:11.472100 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:11 crc kubenswrapper[4985]: I0127 08:55:11.472117 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:11Z","lastTransitionTime":"2026-01-27T08:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:55:11 crc kubenswrapper[4985]: I0127 08:55:11.575207 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:11 crc kubenswrapper[4985]: I0127 08:55:11.575271 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:11 crc kubenswrapper[4985]: I0127 08:55:11.575282 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:11 crc kubenswrapper[4985]: I0127 08:55:11.575305 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:11 crc kubenswrapper[4985]: I0127 08:55:11.575328 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:11Z","lastTransitionTime":"2026-01-27T08:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:11 crc kubenswrapper[4985]: I0127 08:55:11.679028 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:11 crc kubenswrapper[4985]: I0127 08:55:11.679083 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:11 crc kubenswrapper[4985]: I0127 08:55:11.679095 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:11 crc kubenswrapper[4985]: I0127 08:55:11.679114 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:11 crc kubenswrapper[4985]: I0127 08:55:11.679128 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:11Z","lastTransitionTime":"2026-01-27T08:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:11 crc kubenswrapper[4985]: I0127 08:55:11.781716 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:11 crc kubenswrapper[4985]: I0127 08:55:11.781762 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:11 crc kubenswrapper[4985]: I0127 08:55:11.781770 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:11 crc kubenswrapper[4985]: I0127 08:55:11.781786 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:11 crc kubenswrapper[4985]: I0127 08:55:11.781798 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:11Z","lastTransitionTime":"2026-01-27T08:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:11 crc kubenswrapper[4985]: I0127 08:55:11.885764 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:11 crc kubenswrapper[4985]: I0127 08:55:11.886167 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:11 crc kubenswrapper[4985]: I0127 08:55:11.886264 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:11 crc kubenswrapper[4985]: I0127 08:55:11.886408 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:11 crc kubenswrapper[4985]: I0127 08:55:11.886503 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:11Z","lastTransitionTime":"2026-01-27T08:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:11 crc kubenswrapper[4985]: I0127 08:55:11.990085 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:11 crc kubenswrapper[4985]: I0127 08:55:11.990368 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:11 crc kubenswrapper[4985]: I0127 08:55:11.990430 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:11 crc kubenswrapper[4985]: I0127 08:55:11.990503 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:11 crc kubenswrapper[4985]: I0127 08:55:11.990589 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:11Z","lastTransitionTime":"2026-01-27T08:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:12 crc kubenswrapper[4985]: I0127 08:55:12.093751 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:12 crc kubenswrapper[4985]: I0127 08:55:12.093840 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:12 crc kubenswrapper[4985]: I0127 08:55:12.093863 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:12 crc kubenswrapper[4985]: I0127 08:55:12.093893 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:12 crc kubenswrapper[4985]: I0127 08:55:12.093914 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:12Z","lastTransitionTime":"2026-01-27T08:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:12 crc kubenswrapper[4985]: I0127 08:55:12.196444 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:12 crc kubenswrapper[4985]: I0127 08:55:12.196499 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:12 crc kubenswrapper[4985]: I0127 08:55:12.196545 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:12 crc kubenswrapper[4985]: I0127 08:55:12.196568 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:12 crc kubenswrapper[4985]: I0127 08:55:12.196583 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:12Z","lastTransitionTime":"2026-01-27T08:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:12 crc kubenswrapper[4985]: I0127 08:55:12.300623 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:12 crc kubenswrapper[4985]: I0127 08:55:12.300689 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:12 crc kubenswrapper[4985]: I0127 08:55:12.300704 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:12 crc kubenswrapper[4985]: I0127 08:55:12.300726 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:12 crc kubenswrapper[4985]: I0127 08:55:12.300742 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:12Z","lastTransitionTime":"2026-01-27T08:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:12 crc kubenswrapper[4985]: I0127 08:55:12.403373 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:12 crc kubenswrapper[4985]: I0127 08:55:12.403436 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:12 crc kubenswrapper[4985]: I0127 08:55:12.403446 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:12 crc kubenswrapper[4985]: I0127 08:55:12.403466 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:12 crc kubenswrapper[4985]: I0127 08:55:12.403476 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:12Z","lastTransitionTime":"2026-01-27T08:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 08:55:12 crc kubenswrapper[4985]: I0127 08:55:12.451840 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cscdv" Jan 27 08:55:12 crc kubenswrapper[4985]: I0127 08:55:12.451929 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 08:55:12 crc kubenswrapper[4985]: I0127 08:55:12.451983 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 08:55:12 crc kubenswrapper[4985]: I0127 08:55:12.451955 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 08:55:12 crc kubenswrapper[4985]: E0127 08:55:12.452062 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cscdv" podUID="5c870945-eecc-4954-a91b-d02cef8f98e2" Jan 27 08:55:12 crc kubenswrapper[4985]: E0127 08:55:12.452191 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 08:55:12 crc kubenswrapper[4985]: E0127 08:55:12.452265 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 08:55:12 crc kubenswrapper[4985]: E0127 08:55:12.452402 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 08:55:12 crc kubenswrapper[4985]: I0127 08:55:12.454951 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 19:27:24.38578979 +0000 UTC Jan 27 08:55:12 crc kubenswrapper[4985]: I0127 08:55:12.506354 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:12 crc kubenswrapper[4985]: I0127 08:55:12.506399 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:12 crc kubenswrapper[4985]: I0127 08:55:12.506410 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:12 crc kubenswrapper[4985]: I0127 08:55:12.506426 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:12 crc kubenswrapper[4985]: I0127 08:55:12.506438 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:12Z","lastTransitionTime":"2026-01-27T08:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:12 crc kubenswrapper[4985]: I0127 08:55:12.609570 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:12 crc kubenswrapper[4985]: I0127 08:55:12.609601 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:12 crc kubenswrapper[4985]: I0127 08:55:12.609610 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:12 crc kubenswrapper[4985]: I0127 08:55:12.609625 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:12 crc kubenswrapper[4985]: I0127 08:55:12.609644 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:12Z","lastTransitionTime":"2026-01-27T08:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:12 crc kubenswrapper[4985]: I0127 08:55:12.711958 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:12 crc kubenswrapper[4985]: I0127 08:55:12.711997 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:12 crc kubenswrapper[4985]: I0127 08:55:12.712005 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:12 crc kubenswrapper[4985]: I0127 08:55:12.712020 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:12 crc kubenswrapper[4985]: I0127 08:55:12.712030 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:12Z","lastTransitionTime":"2026-01-27T08:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:12 crc kubenswrapper[4985]: I0127 08:55:12.814959 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:12 crc kubenswrapper[4985]: I0127 08:55:12.815025 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:12 crc kubenswrapper[4985]: I0127 08:55:12.815045 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:12 crc kubenswrapper[4985]: I0127 08:55:12.815069 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:12 crc kubenswrapper[4985]: I0127 08:55:12.815083 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:12Z","lastTransitionTime":"2026-01-27T08:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:12 crc kubenswrapper[4985]: I0127 08:55:12.918205 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:12 crc kubenswrapper[4985]: I0127 08:55:12.918303 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:12 crc kubenswrapper[4985]: I0127 08:55:12.918326 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:12 crc kubenswrapper[4985]: I0127 08:55:12.918376 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:12 crc kubenswrapper[4985]: I0127 08:55:12.918406 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:12Z","lastTransitionTime":"2026-01-27T08:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:13 crc kubenswrapper[4985]: I0127 08:55:13.021836 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:13 crc kubenswrapper[4985]: I0127 08:55:13.021902 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:13 crc kubenswrapper[4985]: I0127 08:55:13.021921 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:13 crc kubenswrapper[4985]: I0127 08:55:13.021953 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:13 crc kubenswrapper[4985]: I0127 08:55:13.021972 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:13Z","lastTransitionTime":"2026-01-27T08:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:13 crc kubenswrapper[4985]: I0127 08:55:13.125091 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:13 crc kubenswrapper[4985]: I0127 08:55:13.125132 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:13 crc kubenswrapper[4985]: I0127 08:55:13.125145 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:13 crc kubenswrapper[4985]: I0127 08:55:13.125163 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:13 crc kubenswrapper[4985]: I0127 08:55:13.125173 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:13Z","lastTransitionTime":"2026-01-27T08:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:13 crc kubenswrapper[4985]: I0127 08:55:13.227958 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:13 crc kubenswrapper[4985]: I0127 08:55:13.228008 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:13 crc kubenswrapper[4985]: I0127 08:55:13.228019 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:13 crc kubenswrapper[4985]: I0127 08:55:13.228038 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:13 crc kubenswrapper[4985]: I0127 08:55:13.228050 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:13Z","lastTransitionTime":"2026-01-27T08:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:13 crc kubenswrapper[4985]: I0127 08:55:13.331663 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:13 crc kubenswrapper[4985]: I0127 08:55:13.331718 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:13 crc kubenswrapper[4985]: I0127 08:55:13.331750 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:13 crc kubenswrapper[4985]: I0127 08:55:13.331774 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:13 crc kubenswrapper[4985]: I0127 08:55:13.331788 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:13Z","lastTransitionTime":"2026-01-27T08:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:13 crc kubenswrapper[4985]: I0127 08:55:13.435332 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:13 crc kubenswrapper[4985]: I0127 08:55:13.435857 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:13 crc kubenswrapper[4985]: I0127 08:55:13.435955 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:13 crc kubenswrapper[4985]: I0127 08:55:13.436059 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:13 crc kubenswrapper[4985]: I0127 08:55:13.436175 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:13Z","lastTransitionTime":"2026-01-27T08:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:13 crc kubenswrapper[4985]: I0127 08:55:13.455557 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 12:51:54.05097813 +0000 UTC Jan 27 08:55:13 crc kubenswrapper[4985]: I0127 08:55:13.540388 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:13 crc kubenswrapper[4985]: I0127 08:55:13.540991 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:13 crc kubenswrapper[4985]: I0127 08:55:13.541154 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:13 crc kubenswrapper[4985]: I0127 08:55:13.541329 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:13 crc kubenswrapper[4985]: I0127 08:55:13.541496 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:13Z","lastTransitionTime":"2026-01-27T08:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:13 crc kubenswrapper[4985]: I0127 08:55:13.645856 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:13 crc kubenswrapper[4985]: I0127 08:55:13.646402 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:13 crc kubenswrapper[4985]: I0127 08:55:13.646417 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:13 crc kubenswrapper[4985]: I0127 08:55:13.646437 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:13 crc kubenswrapper[4985]: I0127 08:55:13.646452 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:13Z","lastTransitionTime":"2026-01-27T08:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:13 crc kubenswrapper[4985]: I0127 08:55:13.749127 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:13 crc kubenswrapper[4985]: I0127 08:55:13.749178 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:13 crc kubenswrapper[4985]: I0127 08:55:13.749208 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:13 crc kubenswrapper[4985]: I0127 08:55:13.749224 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:13 crc kubenswrapper[4985]: I0127 08:55:13.749234 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:13Z","lastTransitionTime":"2026-01-27T08:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:13 crc kubenswrapper[4985]: I0127 08:55:13.852827 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:13 crc kubenswrapper[4985]: I0127 08:55:13.852889 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:13 crc kubenswrapper[4985]: I0127 08:55:13.852927 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:13 crc kubenswrapper[4985]: I0127 08:55:13.852955 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:13 crc kubenswrapper[4985]: I0127 08:55:13.852974 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:13Z","lastTransitionTime":"2026-01-27T08:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:13 crc kubenswrapper[4985]: I0127 08:55:13.955884 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:13 crc kubenswrapper[4985]: I0127 08:55:13.955966 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:13 crc kubenswrapper[4985]: I0127 08:55:13.955980 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:13 crc kubenswrapper[4985]: I0127 08:55:13.956001 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:13 crc kubenswrapper[4985]: I0127 08:55:13.956014 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:13Z","lastTransitionTime":"2026-01-27T08:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:14 crc kubenswrapper[4985]: I0127 08:55:14.058407 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:14 crc kubenswrapper[4985]: I0127 08:55:14.058483 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:14 crc kubenswrapper[4985]: I0127 08:55:14.058505 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:14 crc kubenswrapper[4985]: I0127 08:55:14.058566 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:14 crc kubenswrapper[4985]: I0127 08:55:14.058586 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:14Z","lastTransitionTime":"2026-01-27T08:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:14 crc kubenswrapper[4985]: I0127 08:55:14.161430 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:14 crc kubenswrapper[4985]: I0127 08:55:14.161495 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:14 crc kubenswrapper[4985]: I0127 08:55:14.161513 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:14 crc kubenswrapper[4985]: I0127 08:55:14.161595 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:14 crc kubenswrapper[4985]: I0127 08:55:14.161617 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:14Z","lastTransitionTime":"2026-01-27T08:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:14 crc kubenswrapper[4985]: I0127 08:55:14.264060 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:14 crc kubenswrapper[4985]: I0127 08:55:14.264102 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:14 crc kubenswrapper[4985]: I0127 08:55:14.264114 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:14 crc kubenswrapper[4985]: I0127 08:55:14.264130 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:14 crc kubenswrapper[4985]: I0127 08:55:14.264139 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:14Z","lastTransitionTime":"2026-01-27T08:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:14 crc kubenswrapper[4985]: I0127 08:55:14.368019 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:14 crc kubenswrapper[4985]: I0127 08:55:14.368116 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:14 crc kubenswrapper[4985]: I0127 08:55:14.368141 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:14 crc kubenswrapper[4985]: I0127 08:55:14.368174 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:14 crc kubenswrapper[4985]: I0127 08:55:14.368197 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:14Z","lastTransitionTime":"2026-01-27T08:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:14 crc kubenswrapper[4985]: I0127 08:55:14.376738 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 08:55:14 crc kubenswrapper[4985]: I0127 08:55:14.376786 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 08:55:14 crc kubenswrapper[4985]: I0127 08:55:14.376801 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 08:55:14 crc kubenswrapper[4985]: I0127 08:55:14.376820 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 08:55:14 crc kubenswrapper[4985]: I0127 08:55:14.376836 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T08:55:14Z","lastTransitionTime":"2026-01-27T08:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 08:55:14 crc kubenswrapper[4985]: I0127 08:55:14.430236 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-cqdrf" podStartSLOduration=72.430216424 podStartE2EDuration="1m12.430216424s" podCreationTimestamp="2026-01-27 08:54:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:55:06.823471063 +0000 UTC m=+91.114565914" watchObservedRunningTime="2026-01-27 08:55:14.430216424 +0000 UTC m=+98.721311265" Jan 27 08:55:14 crc kubenswrapper[4985]: I0127 08:55:14.430481 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-rzqbj"] Jan 27 08:55:14 crc kubenswrapper[4985]: I0127 08:55:14.430924 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rzqbj" Jan 27 08:55:14 crc kubenswrapper[4985]: I0127 08:55:14.433490 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 27 08:55:14 crc kubenswrapper[4985]: I0127 08:55:14.433587 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 27 08:55:14 crc kubenswrapper[4985]: I0127 08:55:14.434804 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 27 08:55:14 crc kubenswrapper[4985]: I0127 08:55:14.435465 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 27 08:55:14 crc kubenswrapper[4985]: I0127 08:55:14.453892 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 08:55:14 crc kubenswrapper[4985]: E0127 08:55:14.454045 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 08:55:14 crc kubenswrapper[4985]: I0127 08:55:14.454281 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 08:55:14 crc kubenswrapper[4985]: E0127 08:55:14.454360 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 08:55:14 crc kubenswrapper[4985]: I0127 08:55:14.454503 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cscdv" Jan 27 08:55:14 crc kubenswrapper[4985]: I0127 08:55:14.455918 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 07:26:45.096721684 +0000 UTC Jan 27 08:55:14 crc kubenswrapper[4985]: I0127 08:55:14.455975 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 27 08:55:14 crc kubenswrapper[4985]: I0127 08:55:14.461963 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 08:55:14 crc kubenswrapper[4985]: E0127 08:55:14.462604 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cscdv" podUID="5c870945-eecc-4954-a91b-d02cef8f98e2" Jan 27 08:55:14 crc kubenswrapper[4985]: E0127 08:55:14.462974 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 08:55:14 crc kubenswrapper[4985]: I0127 08:55:14.467449 4985 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 27 08:55:14 crc kubenswrapper[4985]: I0127 08:55:14.474554 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=78.474513283 podStartE2EDuration="1m18.474513283s" podCreationTimestamp="2026-01-27 08:53:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:55:14.470048198 +0000 UTC m=+98.761143049" watchObservedRunningTime="2026-01-27 08:55:14.474513283 +0000 UTC m=+98.765608124" Jan 27 08:55:14 crc kubenswrapper[4985]: I0127 08:55:14.491130 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/aa0bf362-c3f7-4fce-9ef0-b148106036fb-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-rzqbj\" (UID: \"aa0bf362-c3f7-4fce-9ef0-b148106036fb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rzqbj" Jan 27 08:55:14 crc kubenswrapper[4985]: I0127 08:55:14.491214 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aa0bf362-c3f7-4fce-9ef0-b148106036fb-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-rzqbj\" (UID: \"aa0bf362-c3f7-4fce-9ef0-b148106036fb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rzqbj" Jan 27 08:55:14 crc kubenswrapper[4985]: I0127 08:55:14.491300 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/aa0bf362-c3f7-4fce-9ef0-b148106036fb-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-rzqbj\" (UID: \"aa0bf362-c3f7-4fce-9ef0-b148106036fb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rzqbj" Jan 27 08:55:14 crc kubenswrapper[4985]: I0127 08:55:14.491322 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/aa0bf362-c3f7-4fce-9ef0-b148106036fb-service-ca\") pod \"cluster-version-operator-5c965bbfc6-rzqbj\" (UID: \"aa0bf362-c3f7-4fce-9ef0-b148106036fb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rzqbj" Jan 27 08:55:14 crc kubenswrapper[4985]: I0127 08:55:14.491386 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aa0bf362-c3f7-4fce-9ef0-b148106036fb-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-rzqbj\" (UID: 
\"aa0bf362-c3f7-4fce-9ef0-b148106036fb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rzqbj" Jan 27 08:55:14 crc kubenswrapper[4985]: I0127 08:55:14.592475 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/aa0bf362-c3f7-4fce-9ef0-b148106036fb-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-rzqbj\" (UID: \"aa0bf362-c3f7-4fce-9ef0-b148106036fb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rzqbj" Jan 27 08:55:14 crc kubenswrapper[4985]: I0127 08:55:14.592854 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/aa0bf362-c3f7-4fce-9ef0-b148106036fb-service-ca\") pod \"cluster-version-operator-5c965bbfc6-rzqbj\" (UID: \"aa0bf362-c3f7-4fce-9ef0-b148106036fb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rzqbj" Jan 27 08:55:14 crc kubenswrapper[4985]: I0127 08:55:14.592957 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aa0bf362-c3f7-4fce-9ef0-b148106036fb-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-rzqbj\" (UID: \"aa0bf362-c3f7-4fce-9ef0-b148106036fb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rzqbj" Jan 27 08:55:14 crc kubenswrapper[4985]: I0127 08:55:14.592720 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/aa0bf362-c3f7-4fce-9ef0-b148106036fb-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-rzqbj\" (UID: \"aa0bf362-c3f7-4fce-9ef0-b148106036fb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rzqbj" Jan 27 08:55:14 crc kubenswrapper[4985]: I0127 08:55:14.593306 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: 
\"kubernetes.io/host-path/aa0bf362-c3f7-4fce-9ef0-b148106036fb-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-rzqbj\" (UID: \"aa0bf362-c3f7-4fce-9ef0-b148106036fb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rzqbj" Jan 27 08:55:14 crc kubenswrapper[4985]: I0127 08:55:14.593399 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aa0bf362-c3f7-4fce-9ef0-b148106036fb-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-rzqbj\" (UID: \"aa0bf362-c3f7-4fce-9ef0-b148106036fb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rzqbj" Jan 27 08:55:14 crc kubenswrapper[4985]: I0127 08:55:14.593424 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/aa0bf362-c3f7-4fce-9ef0-b148106036fb-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-rzqbj\" (UID: \"aa0bf362-c3f7-4fce-9ef0-b148106036fb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rzqbj" Jan 27 08:55:14 crc kubenswrapper[4985]: I0127 08:55:14.594903 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/aa0bf362-c3f7-4fce-9ef0-b148106036fb-service-ca\") pod \"cluster-version-operator-5c965bbfc6-rzqbj\" (UID: \"aa0bf362-c3f7-4fce-9ef0-b148106036fb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rzqbj" Jan 27 08:55:14 crc kubenswrapper[4985]: I0127 08:55:14.601933 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aa0bf362-c3f7-4fce-9ef0-b148106036fb-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-rzqbj\" (UID: \"aa0bf362-c3f7-4fce-9ef0-b148106036fb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rzqbj" Jan 27 08:55:14 crc 
kubenswrapper[4985]: I0127 08:55:14.616599 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aa0bf362-c3f7-4fce-9ef0-b148106036fb-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-rzqbj\" (UID: \"aa0bf362-c3f7-4fce-9ef0-b148106036fb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rzqbj" Jan 27 08:55:14 crc kubenswrapper[4985]: I0127 08:55:14.756956 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rzqbj" Jan 27 08:55:14 crc kubenswrapper[4985]: W0127 08:55:14.774895 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa0bf362_c3f7_4fce_9ef0_b148106036fb.slice/crio-d85a5cc846cb6a6db1d07e5ac237f5d18705e44e7d0fea1c6a85c2ba5174c155 WatchSource:0}: Error finding container d85a5cc846cb6a6db1d07e5ac237f5d18705e44e7d0fea1c6a85c2ba5174c155: Status 404 returned error can't find the container with id d85a5cc846cb6a6db1d07e5ac237f5d18705e44e7d0fea1c6a85c2ba5174c155 Jan 27 08:55:15 crc kubenswrapper[4985]: I0127 08:55:15.070306 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rzqbj" event={"ID":"aa0bf362-c3f7-4fce-9ef0-b148106036fb","Type":"ContainerStarted","Data":"71b681d1f1982c0984e91504b0246f15cb388cdbcf0fb63ccbed24cc1f5bae74"} Jan 27 08:55:15 crc kubenswrapper[4985]: I0127 08:55:15.070380 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rzqbj" event={"ID":"aa0bf362-c3f7-4fce-9ef0-b148106036fb","Type":"ContainerStarted","Data":"d85a5cc846cb6a6db1d07e5ac237f5d18705e44e7d0fea1c6a85c2ba5174c155"} Jan 27 08:55:15 crc kubenswrapper[4985]: I0127 08:55:15.087228 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rzqbj" podStartSLOduration=73.087204785 podStartE2EDuration="1m13.087204785s" podCreationTimestamp="2026-01-27 08:54:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:55:15.087074022 +0000 UTC m=+99.378168883" watchObservedRunningTime="2026-01-27 08:55:15.087204785 +0000 UTC m=+99.378299626" Jan 27 08:55:16 crc kubenswrapper[4985]: I0127 08:55:16.451661 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 08:55:16 crc kubenswrapper[4985]: I0127 08:55:16.451757 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 08:55:16 crc kubenswrapper[4985]: I0127 08:55:16.454283 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 08:55:16 crc kubenswrapper[4985]: I0127 08:55:16.454454 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cscdv" Jan 27 08:55:16 crc kubenswrapper[4985]: E0127 08:55:16.454595 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 08:55:16 crc kubenswrapper[4985]: E0127 08:55:16.454638 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cscdv" podUID="5c870945-eecc-4954-a91b-d02cef8f98e2" Jan 27 08:55:16 crc kubenswrapper[4985]: E0127 08:55:16.454724 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 08:55:16 crc kubenswrapper[4985]: E0127 08:55:16.454772 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 08:55:18 crc kubenswrapper[4985]: I0127 08:55:18.451184 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cscdv" Jan 27 08:55:18 crc kubenswrapper[4985]: I0127 08:55:18.451241 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 08:55:18 crc kubenswrapper[4985]: I0127 08:55:18.451220 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 08:55:18 crc kubenswrapper[4985]: E0127 08:55:18.451381 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cscdv" podUID="5c870945-eecc-4954-a91b-d02cef8f98e2" Jan 27 08:55:18 crc kubenswrapper[4985]: E0127 08:55:18.451534 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 08:55:18 crc kubenswrapper[4985]: E0127 08:55:18.451707 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 08:55:18 crc kubenswrapper[4985]: I0127 08:55:18.452057 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 08:55:18 crc kubenswrapper[4985]: E0127 08:55:18.452396 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 08:55:20 crc kubenswrapper[4985]: I0127 08:55:20.451426 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 08:55:20 crc kubenswrapper[4985]: I0127 08:55:20.451793 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cscdv" Jan 27 08:55:20 crc kubenswrapper[4985]: I0127 08:55:20.452689 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 08:55:20 crc kubenswrapper[4985]: E0127 08:55:20.453303 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cscdv" podUID="5c870945-eecc-4954-a91b-d02cef8f98e2" Jan 27 08:55:20 crc kubenswrapper[4985]: I0127 08:55:20.452689 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 08:55:20 crc kubenswrapper[4985]: E0127 08:55:20.453825 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 08:55:20 crc kubenswrapper[4985]: E0127 08:55:20.453956 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 08:55:20 crc kubenswrapper[4985]: E0127 08:55:20.454032 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 08:55:21 crc kubenswrapper[4985]: I0127 08:55:21.269977 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5c870945-eecc-4954-a91b-d02cef8f98e2-metrics-certs\") pod \"network-metrics-daemon-cscdv\" (UID: \"5c870945-eecc-4954-a91b-d02cef8f98e2\") " pod="openshift-multus/network-metrics-daemon-cscdv" Jan 27 08:55:21 crc kubenswrapper[4985]: E0127 08:55:21.270127 4985 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 08:55:21 crc kubenswrapper[4985]: E0127 08:55:21.270539 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5c870945-eecc-4954-a91b-d02cef8f98e2-metrics-certs podName:5c870945-eecc-4954-a91b-d02cef8f98e2 nodeName:}" failed. No retries permitted until 2026-01-27 08:56:25.270505415 +0000 UTC m=+169.561600246 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5c870945-eecc-4954-a91b-d02cef8f98e2-metrics-certs") pod "network-metrics-daemon-cscdv" (UID: "5c870945-eecc-4954-a91b-d02cef8f98e2") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 08:55:22 crc kubenswrapper[4985]: I0127 08:55:22.451721 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 08:55:22 crc kubenswrapper[4985]: I0127 08:55:22.451721 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 08:55:22 crc kubenswrapper[4985]: E0127 08:55:22.451864 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 08:55:22 crc kubenswrapper[4985]: I0127 08:55:22.452231 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cscdv" Jan 27 08:55:22 crc kubenswrapper[4985]: I0127 08:55:22.452285 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 08:55:22 crc kubenswrapper[4985]: E0127 08:55:22.452374 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cscdv" podUID="5c870945-eecc-4954-a91b-d02cef8f98e2" Jan 27 08:55:22 crc kubenswrapper[4985]: E0127 08:55:22.452644 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 08:55:22 crc kubenswrapper[4985]: I0127 08:55:22.452754 4985 scope.go:117] "RemoveContainer" containerID="bef202d5ab4adb68897c4c8da62be022e95cd949fa91b3f80e3df449f0d72181" Jan 27 08:55:22 crc kubenswrapper[4985]: E0127 08:55:22.452876 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 08:55:22 crc kubenswrapper[4985]: E0127 08:55:22.452974 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-kqdf4_openshift-ovn-kubernetes(c6239c91-d93d-4db8-ac4b-d44ddbc7c100)\"" pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" podUID="c6239c91-d93d-4db8-ac4b-d44ddbc7c100" Jan 27 08:55:24 crc kubenswrapper[4985]: I0127 08:55:24.451820 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 08:55:24 crc kubenswrapper[4985]: I0127 08:55:24.451850 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 08:55:24 crc kubenswrapper[4985]: I0127 08:55:24.452651 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cscdv" Jan 27 08:55:24 crc kubenswrapper[4985]: I0127 08:55:24.452736 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 08:55:24 crc kubenswrapper[4985]: E0127 08:55:24.452895 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 08:55:24 crc kubenswrapper[4985]: E0127 08:55:24.452997 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 08:55:24 crc kubenswrapper[4985]: E0127 08:55:24.453137 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cscdv" podUID="5c870945-eecc-4954-a91b-d02cef8f98e2" Jan 27 08:55:24 crc kubenswrapper[4985]: E0127 08:55:24.453269 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 08:55:26 crc kubenswrapper[4985]: I0127 08:55:26.452949 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 08:55:26 crc kubenswrapper[4985]: I0127 08:55:26.452955 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 08:55:26 crc kubenswrapper[4985]: I0127 08:55:26.452972 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cscdv" Jan 27 08:55:26 crc kubenswrapper[4985]: E0127 08:55:26.454219 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 08:55:26 crc kubenswrapper[4985]: I0127 08:55:26.454280 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 08:55:26 crc kubenswrapper[4985]: E0127 08:55:26.454598 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cscdv" podUID="5c870945-eecc-4954-a91b-d02cef8f98e2" Jan 27 08:55:26 crc kubenswrapper[4985]: E0127 08:55:26.454641 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 08:55:26 crc kubenswrapper[4985]: E0127 08:55:26.454431 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 08:55:28 crc kubenswrapper[4985]: I0127 08:55:28.451258 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 08:55:28 crc kubenswrapper[4985]: I0127 08:55:28.451424 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cscdv" Jan 27 08:55:28 crc kubenswrapper[4985]: I0127 08:55:28.451552 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 08:55:28 crc kubenswrapper[4985]: E0127 08:55:28.453243 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cscdv" podUID="5c870945-eecc-4954-a91b-d02cef8f98e2" Jan 27 08:55:28 crc kubenswrapper[4985]: I0127 08:55:28.451614 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 08:55:28 crc kubenswrapper[4985]: E0127 08:55:28.453390 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 08:55:28 crc kubenswrapper[4985]: E0127 08:55:28.453573 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 08:55:28 crc kubenswrapper[4985]: E0127 08:55:28.453094 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 08:55:30 crc kubenswrapper[4985]: I0127 08:55:30.451429 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 08:55:30 crc kubenswrapper[4985]: I0127 08:55:30.451431 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 08:55:30 crc kubenswrapper[4985]: I0127 08:55:30.451474 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 08:55:30 crc kubenswrapper[4985]: E0127 08:55:30.452790 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 08:55:30 crc kubenswrapper[4985]: E0127 08:55:30.452942 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 08:55:30 crc kubenswrapper[4985]: I0127 08:55:30.452630 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cscdv" Jan 27 08:55:30 crc kubenswrapper[4985]: E0127 08:55:30.453092 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 08:55:30 crc kubenswrapper[4985]: E0127 08:55:30.453581 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cscdv" podUID="5c870945-eecc-4954-a91b-d02cef8f98e2" Jan 27 08:55:32 crc kubenswrapper[4985]: I0127 08:55:32.451276 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 08:55:32 crc kubenswrapper[4985]: I0127 08:55:32.451271 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 08:55:32 crc kubenswrapper[4985]: I0127 08:55:32.451357 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cscdv" Jan 27 08:55:32 crc kubenswrapper[4985]: I0127 08:55:32.451459 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 08:55:32 crc kubenswrapper[4985]: E0127 08:55:32.451692 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 08:55:32 crc kubenswrapper[4985]: E0127 08:55:32.451809 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 08:55:32 crc kubenswrapper[4985]: E0127 08:55:32.451934 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cscdv" podUID="5c870945-eecc-4954-a91b-d02cef8f98e2" Jan 27 08:55:32 crc kubenswrapper[4985]: E0127 08:55:32.452063 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 08:55:34 crc kubenswrapper[4985]: I0127 08:55:34.451692 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cscdv" Jan 27 08:55:34 crc kubenswrapper[4985]: I0127 08:55:34.451692 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 08:55:34 crc kubenswrapper[4985]: I0127 08:55:34.451935 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 08:55:34 crc kubenswrapper[4985]: I0127 08:55:34.451895 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 08:55:34 crc kubenswrapper[4985]: E0127 08:55:34.452648 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 08:55:34 crc kubenswrapper[4985]: E0127 08:55:34.452713 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 08:55:34 crc kubenswrapper[4985]: E0127 08:55:34.452830 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cscdv" podUID="5c870945-eecc-4954-a91b-d02cef8f98e2" Jan 27 08:55:34 crc kubenswrapper[4985]: E0127 08:55:34.453019 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 08:55:35 crc kubenswrapper[4985]: I0127 08:55:35.453210 4985 scope.go:117] "RemoveContainer" containerID="bef202d5ab4adb68897c4c8da62be022e95cd949fa91b3f80e3df449f0d72181" Jan 27 08:55:35 crc kubenswrapper[4985]: E0127 08:55:35.453460 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-kqdf4_openshift-ovn-kubernetes(c6239c91-d93d-4db8-ac4b-d44ddbc7c100)\"" pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" podUID="c6239c91-d93d-4db8-ac4b-d44ddbc7c100" Jan 27 08:55:36 crc kubenswrapper[4985]: I0127 08:55:36.146860 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-cqdrf_1ddda14a-730e-4c1f-afea-07c95221ba04/kube-multus/1.log" Jan 27 08:55:36 crc kubenswrapper[4985]: I0127 08:55:36.147609 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-cqdrf_1ddda14a-730e-4c1f-afea-07c95221ba04/kube-multus/0.log" Jan 27 08:55:36 crc kubenswrapper[4985]: I0127 08:55:36.147992 4985 generic.go:334] "Generic (PLEG): container finished" podID="1ddda14a-730e-4c1f-afea-07c95221ba04" containerID="611086eedd8a7318bff583bd65a81b3d4dd59b8be78744d6b5280bcbf9bd74b0" exitCode=1 Jan 27 08:55:36 crc kubenswrapper[4985]: I0127 08:55:36.148110 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-cqdrf" event={"ID":"1ddda14a-730e-4c1f-afea-07c95221ba04","Type":"ContainerDied","Data":"611086eedd8a7318bff583bd65a81b3d4dd59b8be78744d6b5280bcbf9bd74b0"} Jan 27 08:55:36 crc kubenswrapper[4985]: I0127 08:55:36.148198 4985 scope.go:117] "RemoveContainer" containerID="c411afefb82d30542592e4959d2495036c0408459ba2a16aff75223ad0d29287" Jan 27 08:55:36 crc kubenswrapper[4985]: I0127 08:55:36.150024 4985 scope.go:117] "RemoveContainer" containerID="611086eedd8a7318bff583bd65a81b3d4dd59b8be78744d6b5280bcbf9bd74b0" Jan 27 08:55:36 crc kubenswrapper[4985]: E0127 08:55:36.150786 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-cqdrf_openshift-multus(1ddda14a-730e-4c1f-afea-07c95221ba04)\"" pod="openshift-multus/multus-cqdrf" podUID="1ddda14a-730e-4c1f-afea-07c95221ba04" Jan 27 08:55:36 crc kubenswrapper[4985]: E0127 08:55:36.432496 4985 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 27 08:55:36 crc kubenswrapper[4985]: I0127 08:55:36.451359 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 08:55:36 crc kubenswrapper[4985]: I0127 08:55:36.451427 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cscdv" Jan 27 08:55:36 crc kubenswrapper[4985]: I0127 08:55:36.451426 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 08:55:36 crc kubenswrapper[4985]: I0127 08:55:36.454120 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 08:55:36 crc kubenswrapper[4985]: E0127 08:55:36.454119 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 08:55:36 crc kubenswrapper[4985]: E0127 08:55:36.454413 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cscdv" podUID="5c870945-eecc-4954-a91b-d02cef8f98e2" Jan 27 08:55:36 crc kubenswrapper[4985]: E0127 08:55:36.454472 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 08:55:36 crc kubenswrapper[4985]: E0127 08:55:36.454569 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 08:55:36 crc kubenswrapper[4985]: E0127 08:55:36.550187 4985 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 08:55:37 crc kubenswrapper[4985]: I0127 08:55:37.160504 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-cqdrf_1ddda14a-730e-4c1f-afea-07c95221ba04/kube-multus/1.log" Jan 27 08:55:38 crc kubenswrapper[4985]: I0127 08:55:38.451071 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 08:55:38 crc kubenswrapper[4985]: I0127 08:55:38.451192 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 08:55:38 crc kubenswrapper[4985]: E0127 08:55:38.451251 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 08:55:38 crc kubenswrapper[4985]: E0127 08:55:38.451428 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 08:55:38 crc kubenswrapper[4985]: I0127 08:55:38.451542 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cscdv" Jan 27 08:55:38 crc kubenswrapper[4985]: E0127 08:55:38.451604 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cscdv" podUID="5c870945-eecc-4954-a91b-d02cef8f98e2" Jan 27 08:55:38 crc kubenswrapper[4985]: I0127 08:55:38.451667 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 08:55:38 crc kubenswrapper[4985]: E0127 08:55:38.451741 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 08:55:40 crc kubenswrapper[4985]: I0127 08:55:40.451532 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 08:55:40 crc kubenswrapper[4985]: I0127 08:55:40.451574 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 08:55:40 crc kubenswrapper[4985]: I0127 08:55:40.451637 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 08:55:40 crc kubenswrapper[4985]: I0127 08:55:40.451542 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cscdv" Jan 27 08:55:40 crc kubenswrapper[4985]: E0127 08:55:40.451709 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 08:55:40 crc kubenswrapper[4985]: E0127 08:55:40.451871 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 08:55:40 crc kubenswrapper[4985]: E0127 08:55:40.451984 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cscdv" podUID="5c870945-eecc-4954-a91b-d02cef8f98e2" Jan 27 08:55:40 crc kubenswrapper[4985]: E0127 08:55:40.452017 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 08:55:41 crc kubenswrapper[4985]: E0127 08:55:41.551898 4985 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 08:55:42 crc kubenswrapper[4985]: I0127 08:55:42.451692 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 08:55:42 crc kubenswrapper[4985]: I0127 08:55:42.451733 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 08:55:42 crc kubenswrapper[4985]: I0127 08:55:42.451733 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 08:55:42 crc kubenswrapper[4985]: I0127 08:55:42.451761 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-cscdv" Jan 27 08:55:42 crc kubenswrapper[4985]: E0127 08:55:42.451907 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 08:55:42 crc kubenswrapper[4985]: E0127 08:55:42.452020 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 08:55:42 crc kubenswrapper[4985]: E0127 08:55:42.452140 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 08:55:42 crc kubenswrapper[4985]: E0127 08:55:42.452300 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cscdv" podUID="5c870945-eecc-4954-a91b-d02cef8f98e2" Jan 27 08:55:44 crc kubenswrapper[4985]: I0127 08:55:44.451685 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 08:55:44 crc kubenswrapper[4985]: I0127 08:55:44.451772 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 08:55:44 crc kubenswrapper[4985]: I0127 08:55:44.451799 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 08:55:44 crc kubenswrapper[4985]: I0127 08:55:44.451962 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cscdv" Jan 27 08:55:44 crc kubenswrapper[4985]: E0127 08:55:44.452731 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 08:55:44 crc kubenswrapper[4985]: E0127 08:55:44.452596 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 08:55:44 crc kubenswrapper[4985]: E0127 08:55:44.452854 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cscdv" podUID="5c870945-eecc-4954-a91b-d02cef8f98e2" Jan 27 08:55:44 crc kubenswrapper[4985]: E0127 08:55:44.452362 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 08:55:46 crc kubenswrapper[4985]: I0127 08:55:46.451824 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 08:55:46 crc kubenswrapper[4985]: I0127 08:55:46.451925 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cscdv" Jan 27 08:55:46 crc kubenswrapper[4985]: I0127 08:55:46.451945 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 08:55:46 crc kubenswrapper[4985]: I0127 08:55:46.452027 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 08:55:46 crc kubenswrapper[4985]: E0127 08:55:46.453330 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 08:55:46 crc kubenswrapper[4985]: E0127 08:55:46.453388 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 08:55:46 crc kubenswrapper[4985]: I0127 08:55:46.453462 4985 scope.go:117] "RemoveContainer" containerID="611086eedd8a7318bff583bd65a81b3d4dd59b8be78744d6b5280bcbf9bd74b0" Jan 27 08:55:46 crc kubenswrapper[4985]: E0127 08:55:46.453491 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cscdv" podUID="5c870945-eecc-4954-a91b-d02cef8f98e2" Jan 27 08:55:46 crc kubenswrapper[4985]: E0127 08:55:46.453599 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 08:55:46 crc kubenswrapper[4985]: E0127 08:55:46.552828 4985 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 08:55:47 crc kubenswrapper[4985]: I0127 08:55:47.452333 4985 scope.go:117] "RemoveContainer" containerID="bef202d5ab4adb68897c4c8da62be022e95cd949fa91b3f80e3df449f0d72181" Jan 27 08:55:48 crc kubenswrapper[4985]: I0127 08:55:48.451166 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cscdv" Jan 27 08:55:48 crc kubenswrapper[4985]: I0127 08:55:48.451208 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 08:55:48 crc kubenswrapper[4985]: E0127 08:55:48.451445 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cscdv" podUID="5c870945-eecc-4954-a91b-d02cef8f98e2" Jan 27 08:55:48 crc kubenswrapper[4985]: I0127 08:55:48.451480 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 08:55:48 crc kubenswrapper[4985]: I0127 08:55:48.451594 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 08:55:48 crc kubenswrapper[4985]: E0127 08:55:48.451683 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 08:55:48 crc kubenswrapper[4985]: E0127 08:55:48.451812 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 08:55:48 crc kubenswrapper[4985]: E0127 08:55:48.451901 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 08:55:50 crc kubenswrapper[4985]: I0127 08:55:50.451149 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 08:55:50 crc kubenswrapper[4985]: I0127 08:55:50.451283 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cscdv" Jan 27 08:55:50 crc kubenswrapper[4985]: E0127 08:55:50.451411 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 08:55:50 crc kubenswrapper[4985]: I0127 08:55:50.451501 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 08:55:50 crc kubenswrapper[4985]: I0127 08:55:50.451166 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 08:55:50 crc kubenswrapper[4985]: E0127 08:55:50.451620 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cscdv" podUID="5c870945-eecc-4954-a91b-d02cef8f98e2" Jan 27 08:55:50 crc kubenswrapper[4985]: E0127 08:55:50.451725 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 08:55:50 crc kubenswrapper[4985]: E0127 08:55:50.452006 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 08:55:51 crc kubenswrapper[4985]: E0127 08:55:51.554072 4985 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 08:55:52 crc kubenswrapper[4985]: I0127 08:55:52.451562 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 08:55:52 crc kubenswrapper[4985]: I0127 08:55:52.451576 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cscdv" Jan 27 08:55:52 crc kubenswrapper[4985]: I0127 08:55:52.451743 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 08:55:52 crc kubenswrapper[4985]: E0127 08:55:52.451765 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 08:55:52 crc kubenswrapper[4985]: I0127 08:55:52.451582 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 08:55:52 crc kubenswrapper[4985]: E0127 08:55:52.451890 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cscdv" podUID="5c870945-eecc-4954-a91b-d02cef8f98e2" Jan 27 08:55:52 crc kubenswrapper[4985]: E0127 08:55:52.451948 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 08:55:52 crc kubenswrapper[4985]: E0127 08:55:52.452027 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 08:55:53 crc kubenswrapper[4985]: I0127 08:55:53.220340 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kqdf4_c6239c91-d93d-4db8-ac4b-d44ddbc7c100/ovnkube-controller/3.log" Jan 27 08:55:53 crc kubenswrapper[4985]: I0127 08:55:53.223349 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" event={"ID":"c6239c91-d93d-4db8-ac4b-d44ddbc7c100","Type":"ContainerStarted","Data":"7571f3c4f07bff9f8b363bb8ec376bdfc098c2d355269649ea21bae387b5efcc"} Jan 27 08:55:53 crc kubenswrapper[4985]: I0127 08:55:53.225064 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" Jan 27 08:55:53 crc kubenswrapper[4985]: I0127 08:55:53.226855 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-cqdrf_1ddda14a-730e-4c1f-afea-07c95221ba04/kube-multus/1.log" Jan 27 08:55:53 crc kubenswrapper[4985]: I0127 08:55:53.226922 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-cqdrf" event={"ID":"1ddda14a-730e-4c1f-afea-07c95221ba04","Type":"ContainerStarted","Data":"2c6cceff4e44e436e1673ebf66431dd57c0d8f5b1ddc8c7a757ef3148da0526a"} Jan 27 08:55:53 crc kubenswrapper[4985]: I0127 08:55:53.284474 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" podStartSLOduration=111.284448754 podStartE2EDuration="1m51.284448754s" podCreationTimestamp="2026-01-27 08:54:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:55:53.262227689 +0000 UTC m=+137.553322540" watchObservedRunningTime="2026-01-27 08:55:53.284448754 +0000 UTC m=+137.575543595" Jan 27 08:55:53 crc kubenswrapper[4985]: I0127 08:55:53.886001 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-cscdv"] Jan 27 08:55:53 crc kubenswrapper[4985]: I0127 08:55:53.886133 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cscdv" Jan 27 08:55:53 crc kubenswrapper[4985]: E0127 08:55:53.886239 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cscdv" podUID="5c870945-eecc-4954-a91b-d02cef8f98e2" Jan 27 08:55:54 crc kubenswrapper[4985]: I0127 08:55:54.451827 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 08:55:54 crc kubenswrapper[4985]: I0127 08:55:54.451897 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 08:55:54 crc kubenswrapper[4985]: E0127 08:55:54.452006 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 08:55:54 crc kubenswrapper[4985]: E0127 08:55:54.452064 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 08:55:54 crc kubenswrapper[4985]: I0127 08:55:54.452355 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 08:55:54 crc kubenswrapper[4985]: E0127 08:55:54.452448 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 08:55:55 crc kubenswrapper[4985]: I0127 08:55:55.451887 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-cscdv" Jan 27 08:55:55 crc kubenswrapper[4985]: E0127 08:55:55.452088 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cscdv" podUID="5c870945-eecc-4954-a91b-d02cef8f98e2" Jan 27 08:55:56 crc kubenswrapper[4985]: I0127 08:55:56.451823 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 08:55:56 crc kubenswrapper[4985]: I0127 08:55:56.451911 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 08:55:56 crc kubenswrapper[4985]: I0127 08:55:56.451911 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 08:55:56 crc kubenswrapper[4985]: E0127 08:55:56.453809 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 08:55:56 crc kubenswrapper[4985]: E0127 08:55:56.454158 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 08:55:56 crc kubenswrapper[4985]: E0127 08:55:56.454383 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 08:55:57 crc kubenswrapper[4985]: I0127 08:55:57.452019 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cscdv" Jan 27 08:55:57 crc kubenswrapper[4985]: I0127 08:55:57.455915 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 27 08:55:57 crc kubenswrapper[4985]: I0127 08:55:57.457381 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 27 08:55:58 crc kubenswrapper[4985]: I0127 08:55:58.451393 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 08:55:58 crc kubenswrapper[4985]: I0127 08:55:58.451425 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 08:55:58 crc kubenswrapper[4985]: I0127 08:55:58.451446 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 08:55:58 crc kubenswrapper[4985]: I0127 08:55:58.454347 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 27 08:55:58 crc kubenswrapper[4985]: I0127 08:55:58.456454 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 27 08:55:58 crc kubenswrapper[4985]: I0127 08:55:58.456565 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 27 08:55:58 crc kubenswrapper[4985]: I0127 08:55:58.456792 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 27 08:56:04 crc kubenswrapper[4985]: I0127 08:56:04.464529 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 08:56:04 crc kubenswrapper[4985]: E0127 08:56:04.464656 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 08:58:06.464625296 +0000 UTC m=+270.755720137 (durationBeforeRetry 2m2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 08:56:04 crc kubenswrapper[4985]: I0127 08:56:04.465239 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 08:56:04 crc kubenswrapper[4985]: I0127 08:56:04.465289 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 08:56:04 crc kubenswrapper[4985]: I0127 08:56:04.468725 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 08:56:04 crc kubenswrapper[4985]: I0127 08:56:04.473279 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 08:56:04 crc kubenswrapper[4985]: I0127 08:56:04.496242 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 08:56:04 crc kubenswrapper[4985]: I0127 08:56:04.569905 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 08:56:04 crc kubenswrapper[4985]: I0127 08:56:04.569951 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 08:56:04 crc kubenswrapper[4985]: I0127 08:56:04.575158 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 08:56:04 crc kubenswrapper[4985]: I0127 08:56:04.575836 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 08:56:04 crc kubenswrapper[4985]: I0127 08:56:04.779443 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 08:56:04 crc kubenswrapper[4985]: I0127 08:56:04.789910 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 08:56:05 crc kubenswrapper[4985]: W0127 08:56:05.016971 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-419aa2c82eabdb14f4659721253083cc9699472f6d867e4197ae00142dcbbb2c WatchSource:0}: Error finding container 419aa2c82eabdb14f4659721253083cc9699472f6d867e4197ae00142dcbbb2c: Status 404 returned error can't find the container with id 419aa2c82eabdb14f4659721253083cc9699472f6d867e4197ae00142dcbbb2c Jan 27 08:56:05 crc kubenswrapper[4985]: W0127 08:56:05.042400 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-93fa6931be0d829c39ecf9848040d908156f654c06e75f100ed0ba252771a5ae WatchSource:0}: Error finding container 93fa6931be0d829c39ecf9848040d908156f654c06e75f100ed0ba252771a5ae: Status 404 returned error can't find the container with id 93fa6931be0d829c39ecf9848040d908156f654c06e75f100ed0ba252771a5ae Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.276688 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" 
event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"cc0e9f6ad6270091835786b7ac7f03c00a9c9bbc113917354854f1a765a3b0da"} Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.277234 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"419aa2c82eabdb14f4659721253083cc9699472f6d867e4197ae00142dcbbb2c"} Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.278623 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"a63d2d4a2779e1ad14f4d367e772e246146ab47fb3951536cb335c279d6dba3a"} Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.278682 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"93fa6931be0d829c39ecf9848040d908156f654c06e75f100ed0ba252771a5ae"} Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.278929 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.282346 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"0b3043dce64a27a3fef240cc2d7b73a599fe3d9e6bf03001a767cf3a01f41fa3"} Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.282381 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" 
event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"480fd5fbafa947621123fe9530a3b7faf9473da294bd8815c890b7eb9b8ade8b"} Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.626193 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.693375 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-bskcz"] Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.694231 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-bskcz" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.695630 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-x4cs4"] Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.696158 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-x4cs4" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.697778 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-t4tc7"] Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.698672 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-t4tc7" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.717970 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-jjksr"] Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.718495 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jjksr" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.720706 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-lrffm"] Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.721146 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lrffm" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.724489 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.724779 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.725645 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.725828 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.726196 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.726364 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.726482 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.726567 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 27 08:56:05 
crc kubenswrapper[4985]: I0127 08:56:05.726693 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.726708 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.726931 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-vl84l"] Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.727000 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.727224 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.727538 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.727934 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-vl84l" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.730074 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.730741 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.732555 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cjgmx"] Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.732842 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.733018 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.733070 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cjgmx" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.733232 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.733435 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.733466 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.733531 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.737400 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.737962 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.738330 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.738602 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.739278 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-t2jvg"] Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.739840 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-t2jvg" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.740476 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.740556 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.740710 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.740780 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.741093 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.746273 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.753172 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.754668 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.755172 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.755777 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 27 08:56:05 crc 
kubenswrapper[4985]: I0127 08:56:05.756364 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.756726 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.783983 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.785808 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.786042 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.786208 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-q7dv9"] Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.786746 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-5q47j"] Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.786802 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.786838 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.787254 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.787320 4985 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-route-controller-manager"/"client-ca" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.787569 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-q7dv9" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.788669 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-5q47j" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.788897 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.790095 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.790242 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.790258 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.790351 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.790448 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.790632 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.790755 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 27 
08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.790860 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.791044 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.792943 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6cf28995-1608-4130-9284-e3d638c4cf25-encryption-config\") pod \"apiserver-76f77b778f-vl84l\" (UID: \"6cf28995-1608-4130-9284-e3d638c4cf25\") " pod="openshift-apiserver/apiserver-76f77b778f-vl84l" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.792994 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e77a9fac-a804-4afa-a69a-1abcd4e81281-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-t2jvg\" (UID: \"e77a9fac-a804-4afa-a69a-1abcd4e81281\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-t2jvg" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.793024 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5bd4e7de-4244-4c33-90eb-799159106b7b-console-oauth-config\") pod \"console-f9d7485db-q7dv9\" (UID: \"5bd4e7de-4244-4c33-90eb-799159106b7b\") " pod="openshift-console/console-f9d7485db-q7dv9" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.793042 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/72fd06a7-765f-4f95-89f1-3bd8a0fa466b-client-ca\") pod 
\"controller-manager-879f6c89f-x4cs4\" (UID: \"72fd06a7-765f-4f95-89f1-3bd8a0fa466b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-x4cs4" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.793059 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdprf\" (UniqueName: \"kubernetes.io/projected/79f74589-abcf-4b67-815f-cfc142a9413f-kube-api-access-qdprf\") pod \"machine-approver-56656f9798-lrffm\" (UID: \"79f74589-abcf-4b67-815f-cfc142a9413f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lrffm" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.793077 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5e3df4d8-af39-4eb4-b2c7-5127144a44a6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-t4tc7\" (UID: \"5e3df4d8-af39-4eb4-b2c7-5127144a44a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-t4tc7" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.793094 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5bd4e7de-4244-4c33-90eb-799159106b7b-console-serving-cert\") pod \"console-f9d7485db-q7dv9\" (UID: \"5bd4e7de-4244-4c33-90eb-799159106b7b\") " pod="openshift-console/console-f9d7485db-q7dv9" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.793109 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5e3df4d8-af39-4eb4-b2c7-5127144a44a6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-t4tc7\" (UID: \"5e3df4d8-af39-4eb4-b2c7-5127144a44a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-t4tc7" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 
08:56:05.793124 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ad59ee9-aae9-4af9-bcdd-abfcc3f0f15a-config\") pod \"openshift-apiserver-operator-796bbdcf4f-cjgmx\" (UID: \"7ad59ee9-aae9-4af9-bcdd-abfcc3f0f15a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cjgmx" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.793140 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5e3df4d8-af39-4eb4-b2c7-5127144a44a6-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-t4tc7\" (UID: \"5e3df4d8-af39-4eb4-b2c7-5127144a44a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-t4tc7" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.793159 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/6cf28995-1608-4130-9284-e3d638c4cf25-audit\") pod \"apiserver-76f77b778f-vl84l\" (UID: \"6cf28995-1608-4130-9284-e3d638c4cf25\") " pod="openshift-apiserver/apiserver-76f77b778f-vl84l" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.793176 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6cf28995-1608-4130-9284-e3d638c4cf25-etcd-serving-ca\") pod \"apiserver-76f77b778f-vl84l\" (UID: \"6cf28995-1608-4130-9284-e3d638c4cf25\") " pod="openshift-apiserver/apiserver-76f77b778f-vl84l" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.793194 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5e3df4d8-af39-4eb4-b2c7-5127144a44a6-v4-0-config-system-service-ca\") pod 
\"oauth-openshift-558db77b4-t4tc7\" (UID: \"5e3df4d8-af39-4eb4-b2c7-5127144a44a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-t4tc7" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.793200 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.793224 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5bd4e7de-4244-4c33-90eb-799159106b7b-console-config\") pod \"console-f9d7485db-q7dv9\" (UID: \"5bd4e7de-4244-4c33-90eb-799159106b7b\") " pod="openshift-console/console-f9d7485db-q7dv9" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.793242 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5e3df4d8-af39-4eb4-b2c7-5127144a44a6-audit-dir\") pod \"oauth-openshift-558db77b4-t4tc7\" (UID: \"5e3df4d8-af39-4eb4-b2c7-5127144a44a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-t4tc7" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.793258 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/019bf0d4-de52-4a7b-b950-4da2766cea13-config\") pod \"console-operator-58897d9998-5q47j\" (UID: \"019bf0d4-de52-4a7b-b950-4da2766cea13\") " pod="openshift-console-operator/console-operator-58897d9998-5q47j" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.793273 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5bd4e7de-4244-4c33-90eb-799159106b7b-oauth-serving-cert\") pod \"console-f9d7485db-q7dv9\" (UID: \"5bd4e7de-4244-4c33-90eb-799159106b7b\") " 
pod="openshift-console/console-f9d7485db-q7dv9" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.793293 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cczgm\" (UniqueName: \"kubernetes.io/projected/7ad59ee9-aae9-4af9-bcdd-abfcc3f0f15a-kube-api-access-cczgm\") pod \"openshift-apiserver-operator-796bbdcf4f-cjgmx\" (UID: \"7ad59ee9-aae9-4af9-bcdd-abfcc3f0f15a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cjgmx" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.793310 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/79f74589-abcf-4b67-815f-cfc142a9413f-auth-proxy-config\") pod \"machine-approver-56656f9798-lrffm\" (UID: \"79f74589-abcf-4b67-815f-cfc142a9413f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lrffm" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.793325 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72fd06a7-765f-4f95-89f1-3bd8a0fa466b-config\") pod \"controller-manager-879f6c89f-x4cs4\" (UID: \"72fd06a7-765f-4f95-89f1-3bd8a0fa466b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-x4cs4" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.793341 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfgm4\" (UniqueName: \"kubernetes.io/projected/a9f39981-0c5b-4358-a7f7-41165d56405b-kube-api-access-wfgm4\") pod \"machine-api-operator-5694c8668f-bskcz\" (UID: \"a9f39981-0c5b-4358-a7f7-41165d56405b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bskcz" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.793358 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7eca83d-b3cb-484f-9e20-f04ceedd8c99-serving-cert\") pod \"route-controller-manager-6576b87f9c-jjksr\" (UID: \"c7eca83d-b3cb-484f-9e20-f04ceedd8c99\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jjksr" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.793372 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6cf28995-1608-4130-9284-e3d638c4cf25-trusted-ca-bundle\") pod \"apiserver-76f77b778f-vl84l\" (UID: \"6cf28995-1608-4130-9284-e3d638c4cf25\") " pod="openshift-apiserver/apiserver-76f77b778f-vl84l" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.793387 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/72fd06a7-765f-4f95-89f1-3bd8a0fa466b-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-x4cs4\" (UID: \"72fd06a7-765f-4f95-89f1-3bd8a0fa466b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-x4cs4" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.793404 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6cf28995-1608-4130-9284-e3d638c4cf25-node-pullsecrets\") pod \"apiserver-76f77b778f-vl84l\" (UID: \"6cf28995-1608-4130-9284-e3d638c4cf25\") " pod="openshift-apiserver/apiserver-76f77b778f-vl84l" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.793420 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6cf28995-1608-4130-9284-e3d638c4cf25-audit-dir\") pod \"apiserver-76f77b778f-vl84l\" (UID: \"6cf28995-1608-4130-9284-e3d638c4cf25\") " pod="openshift-apiserver/apiserver-76f77b778f-vl84l" Jan 27 
08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.793438 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shb5f\" (UniqueName: \"kubernetes.io/projected/c7eca83d-b3cb-484f-9e20-f04ceedd8c99-kube-api-access-shb5f\") pod \"route-controller-manager-6576b87f9c-jjksr\" (UID: \"c7eca83d-b3cb-484f-9e20-f04ceedd8c99\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jjksr" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.793445 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.793454 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtnrw\" (UniqueName: \"kubernetes.io/projected/5e3df4d8-af39-4eb4-b2c7-5127144a44a6-kube-api-access-jtnrw\") pod \"oauth-openshift-558db77b4-t4tc7\" (UID: \"5e3df4d8-af39-4eb4-b2c7-5127144a44a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-t4tc7" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.793586 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/6cf28995-1608-4130-9284-e3d638c4cf25-image-import-ca\") pod \"apiserver-76f77b778f-vl84l\" (UID: \"6cf28995-1608-4130-9284-e3d638c4cf25\") " pod="openshift-apiserver/apiserver-76f77b778f-vl84l" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.793617 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6cf28995-1608-4130-9284-e3d638c4cf25-serving-cert\") pod \"apiserver-76f77b778f-vl84l\" (UID: \"6cf28995-1608-4130-9284-e3d638c4cf25\") " pod="openshift-apiserver/apiserver-76f77b778f-vl84l" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.793635 4985 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-qx7rg"] Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.794058 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-m2wsf"] Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.793641 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7ad59ee9-aae9-4af9-bcdd-abfcc3f0f15a-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-cjgmx\" (UID: \"7ad59ee9-aae9-4af9-bcdd-abfcc3f0f15a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cjgmx" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.794194 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bd4e7de-4244-4c33-90eb-799159106b7b-trusted-ca-bundle\") pod \"console-f9d7485db-q7dv9\" (UID: \"5bd4e7de-4244-4c33-90eb-799159106b7b\") " pod="openshift-console/console-f9d7485db-q7dv9" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.794215 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54sjj\" (UniqueName: \"kubernetes.io/projected/019bf0d4-de52-4a7b-b950-4da2766cea13-kube-api-access-54sjj\") pod \"console-operator-58897d9998-5q47j\" (UID: \"019bf0d4-de52-4a7b-b950-4da2766cea13\") " pod="openshift-console-operator/console-operator-58897d9998-5q47j" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.794240 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5bd4e7de-4244-4c33-90eb-799159106b7b-service-ca\") pod \"console-f9d7485db-q7dv9\" (UID: \"5bd4e7de-4244-4c33-90eb-799159106b7b\") " 
pod="openshift-console/console-f9d7485db-q7dv9" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.794256 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmv58\" (UniqueName: \"kubernetes.io/projected/5bd4e7de-4244-4c33-90eb-799159106b7b-kube-api-access-kmv58\") pod \"console-f9d7485db-q7dv9\" (UID: \"5bd4e7de-4244-4c33-90eb-799159106b7b\") " pod="openshift-console/console-f9d7485db-q7dv9" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.794274 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpvxl\" (UniqueName: \"kubernetes.io/projected/72fd06a7-765f-4f95-89f1-3bd8a0fa466b-kube-api-access-gpvxl\") pod \"controller-manager-879f6c89f-x4cs4\" (UID: \"72fd06a7-765f-4f95-89f1-3bd8a0fa466b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-x4cs4" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.794300 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5e3df4d8-af39-4eb4-b2c7-5127144a44a6-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-t4tc7\" (UID: \"5e3df4d8-af39-4eb4-b2c7-5127144a44a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-t4tc7" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.794319 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5e3df4d8-af39-4eb4-b2c7-5127144a44a6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-t4tc7\" (UID: \"5e3df4d8-af39-4eb4-b2c7-5127144a44a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-t4tc7" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.794337 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5e3df4d8-af39-4eb4-b2c7-5127144a44a6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-t4tc7\" (UID: \"5e3df4d8-af39-4eb4-b2c7-5127144a44a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-t4tc7" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.794355 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/019bf0d4-de52-4a7b-b950-4da2766cea13-serving-cert\") pod \"console-operator-58897d9998-5q47j\" (UID: \"019bf0d4-de52-4a7b-b950-4da2766cea13\") " pod="openshift-console-operator/console-operator-58897d9998-5q47j" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.794373 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9f39981-0c5b-4358-a7f7-41165d56405b-config\") pod \"machine-api-operator-5694c8668f-bskcz\" (UID: \"a9f39981-0c5b-4358-a7f7-41165d56405b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bskcz" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.794393 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/a9f39981-0c5b-4358-a7f7-41165d56405b-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-bskcz\" (UID: \"a9f39981-0c5b-4358-a7f7-41165d56405b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bskcz" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.794415 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e77a9fac-a804-4afa-a69a-1abcd4e81281-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-t2jvg\" (UID: 
\"e77a9fac-a804-4afa-a69a-1abcd4e81281\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-t2jvg" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.794431 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5e3df4d8-af39-4eb4-b2c7-5127144a44a6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-t4tc7\" (UID: \"5e3df4d8-af39-4eb4-b2c7-5127144a44a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-t4tc7" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.794454 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/79f74589-abcf-4b67-815f-cfc142a9413f-machine-approver-tls\") pod \"machine-approver-56656f9798-lrffm\" (UID: \"79f74589-abcf-4b67-815f-cfc142a9413f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lrffm" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.794474 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a9f39981-0c5b-4358-a7f7-41165d56405b-images\") pod \"machine-api-operator-5694c8668f-bskcz\" (UID: \"a9f39981-0c5b-4358-a7f7-41165d56405b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bskcz" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.794495 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79f74589-abcf-4b67-815f-cfc142a9413f-config\") pod \"machine-approver-56656f9798-lrffm\" (UID: \"79f74589-abcf-4b67-815f-cfc142a9413f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lrffm" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.797106 4985 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m2wsf" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.797371 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-qx7rg" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.801653 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.801839 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-6jb4z"] Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.802341 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kfclz"] Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.801854 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.801981 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.803283 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5e3df4d8-af39-4eb4-b2c7-5127144a44a6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-t4tc7\" (UID: \"5e3df4d8-af39-4eb4-b2c7-5127144a44a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-t4tc7" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.802409 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.802582 4985 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.802634 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.802660 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.802828 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.803560 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bqdg\" (UniqueName: \"kubernetes.io/projected/6cf28995-1608-4130-9284-e3d638c4cf25-kube-api-access-9bqdg\") pod \"apiserver-76f77b778f-vl84l\" (UID: \"6cf28995-1608-4130-9284-e3d638c4cf25\") " pod="openshift-apiserver/apiserver-76f77b778f-vl84l" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.803632 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5e3df4d8-af39-4eb4-b2c7-5127144a44a6-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-t4tc7\" (UID: \"5e3df4d8-af39-4eb4-b2c7-5127144a44a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-t4tc7" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.803700 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72fd06a7-765f-4f95-89f1-3bd8a0fa466b-serving-cert\") pod \"controller-manager-879f6c89f-x4cs4\" (UID: \"72fd06a7-765f-4f95-89f1-3bd8a0fa466b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-x4cs4" Jan 27 
08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.803734 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/019bf0d4-de52-4a7b-b950-4da2766cea13-trusted-ca\") pod \"console-operator-58897d9998-5q47j\" (UID: \"019bf0d4-de52-4a7b-b950-4da2766cea13\") " pod="openshift-console-operator/console-operator-58897d9998-5q47j" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.803787 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6cf28995-1608-4130-9284-e3d638c4cf25-etcd-client\") pod \"apiserver-76f77b778f-vl84l\" (UID: \"6cf28995-1608-4130-9284-e3d638c4cf25\") " pod="openshift-apiserver/apiserver-76f77b778f-vl84l" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.803810 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5e3df4d8-af39-4eb4-b2c7-5127144a44a6-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-t4tc7\" (UID: \"5e3df4d8-af39-4eb4-b2c7-5127144a44a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-t4tc7" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.803858 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5e3df4d8-af39-4eb4-b2c7-5127144a44a6-audit-policies\") pod \"oauth-openshift-558db77b4-t4tc7\" (UID: \"5e3df4d8-af39-4eb4-b2c7-5127144a44a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-t4tc7" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.803889 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7eca83d-b3cb-484f-9e20-f04ceedd8c99-config\") pod 
\"route-controller-manager-6576b87f9c-jjksr\" (UID: \"c7eca83d-b3cb-484f-9e20-f04ceedd8c99\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jjksr" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.803936 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cf28995-1608-4130-9284-e3d638c4cf25-config\") pod \"apiserver-76f77b778f-vl84l\" (UID: \"6cf28995-1608-4130-9284-e3d638c4cf25\") " pod="openshift-apiserver/apiserver-76f77b778f-vl84l" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.803957 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxztc\" (UniqueName: \"kubernetes.io/projected/e77a9fac-a804-4afa-a69a-1abcd4e81281-kube-api-access-hxztc\") pod \"openshift-controller-manager-operator-756b6f6bc6-t2jvg\" (UID: \"e77a9fac-a804-4afa-a69a-1abcd4e81281\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-t2jvg" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.803975 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c7eca83d-b3cb-484f-9e20-f04ceedd8c99-client-ca\") pod \"route-controller-manager-6576b87f9c-jjksr\" (UID: \"c7eca83d-b3cb-484f-9e20-f04ceedd8c99\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jjksr" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.804343 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6jb4z" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.805641 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.805964 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.807059 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.808995 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-49dbl"] Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.809157 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.809335 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-bqt6d"] Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.809608 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-42sq5"] Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.809960 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-42sq5" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.810268 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kfclz" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.810448 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-49dbl" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.810866 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-bqt6d" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.811424 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.811469 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.811602 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.811632 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.811718 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.836819 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.839338 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-jn7wk"] Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.845241 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-jn7wk" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.858273 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-sw5fd"] Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.858960 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-sw5fd" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.859172 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-b62st"] Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.859503 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.859633 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.859760 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.859936 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-b62st" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.860124 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.860383 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.860435 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-mcgvv"] Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.861055 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mcgvv" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.861428 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.861567 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.861775 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.861912 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.861934 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.862008 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 27 08:56:05 crc 
kubenswrapper[4985]: I0127 08:56:05.862099 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.862152 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.862203 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.862223 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.862276 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rnllt"] Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.862295 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.862353 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.863391 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rnllt" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.862410 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.862478 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.862709 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.862869 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.863007 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.864107 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-ggt4m"] Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.864690 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ggt4m" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.864996 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.865346 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.866266 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.867725 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.867746 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bfdjc"] Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.867982 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.868118 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-wg68v"] Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.868574 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-wg68v" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.868764 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bfdjc" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.868806 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.870171 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c86qm"] Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.870990 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.871207 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.871418 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.876890 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-p29gl"] Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.877447 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-j6mc9"] Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.877456 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c86qm" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.877670 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-p29gl" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.879311 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.880528 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.882209 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-n5s9g"] Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.882668 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-bp57q"] Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.882853 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j6mc9" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.882946 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sb8w5"] Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.883176 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-n5s9g" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.883249 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qcnmg"] Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.883285 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-bp57q" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.883438 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sb8w5" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.883636 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qcnmg" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.885598 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-89tzz"] Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.887687 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-lcx4s"] Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.887843 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-89tzz" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.888098 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.888493 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-lcx4s" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.890663 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-l8j28"] Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.891277 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-l8j28" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.891873 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lp7z4"] Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.892456 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lp7z4" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.894180 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cfmwq"] Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.894642 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-cfmwq" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.898362 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.899963 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491725-mxclz"] Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.900956 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491725-mxclz" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.906368 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6cf28995-1608-4130-9284-e3d638c4cf25-encryption-config\") pod \"apiserver-76f77b778f-vl84l\" (UID: \"6cf28995-1608-4130-9284-e3d638c4cf25\") " pod="openshift-apiserver/apiserver-76f77b778f-vl84l" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.906433 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sz6hd\" (UniqueName: \"kubernetes.io/projected/ba2694c4-c10f-42d7-96f5-4b47a4206710-kube-api-access-sz6hd\") pod \"service-ca-9c57cc56f-bp57q\" (UID: \"ba2694c4-c10f-42d7-96f5-4b47a4206710\") " pod="openshift-service-ca/service-ca-9c57cc56f-bp57q" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.906472 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5bd4e7de-4244-4c33-90eb-799159106b7b-console-oauth-config\") pod \"console-f9d7485db-q7dv9\" (UID: \"5bd4e7de-4244-4c33-90eb-799159106b7b\") " pod="openshift-console/console-f9d7485db-q7dv9" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.906502 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/72fd06a7-765f-4f95-89f1-3bd8a0fa466b-client-ca\") pod \"controller-manager-879f6c89f-x4cs4\" (UID: \"72fd06a7-765f-4f95-89f1-3bd8a0fa466b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-x4cs4" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.906546 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e77a9fac-a804-4afa-a69a-1abcd4e81281-serving-cert\") pod 
\"openshift-controller-manager-operator-756b6f6bc6-t2jvg\" (UID: \"e77a9fac-a804-4afa-a69a-1abcd4e81281\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-t2jvg" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.906580 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdprf\" (UniqueName: \"kubernetes.io/projected/79f74589-abcf-4b67-815f-cfc142a9413f-kube-api-access-qdprf\") pod \"machine-approver-56656f9798-lrffm\" (UID: \"79f74589-abcf-4b67-815f-cfc142a9413f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lrffm" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.906610 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mchlk\" (UniqueName: \"kubernetes.io/projected/2e510d6e-846d-4099-bc1d-d55a75969151-kube-api-access-mchlk\") pod \"dns-operator-744455d44c-49dbl\" (UID: \"2e510d6e-846d-4099-bc1d-d55a75969151\") " pod="openshift-dns-operator/dns-operator-744455d44c-49dbl" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.906639 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/aebec426-a442-4b90-ad31-46b5e14c0aa1-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-89tzz\" (UID: \"aebec426-a442-4b90-ad31-46b5e14c0aa1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-89tzz" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.906665 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rx8q9\" (UniqueName: \"kubernetes.io/projected/5951e740-6404-42a5-867e-892f4234e62e-kube-api-access-rx8q9\") pod \"cluster-image-registry-operator-dc59b4c8b-42sq5\" (UID: \"5951e740-6404-42a5-867e-892f4234e62e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-42sq5" 
Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.906696 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5e3df4d8-af39-4eb4-b2c7-5127144a44a6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-t4tc7\" (UID: \"5e3df4d8-af39-4eb4-b2c7-5127144a44a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-t4tc7" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.906723 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5bd4e7de-4244-4c33-90eb-799159106b7b-console-serving-cert\") pod \"console-f9d7485db-q7dv9\" (UID: \"5bd4e7de-4244-4c33-90eb-799159106b7b\") " pod="openshift-console/console-f9d7485db-q7dv9" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.906752 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5e3df4d8-af39-4eb4-b2c7-5127144a44a6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-t4tc7\" (UID: \"5e3df4d8-af39-4eb4-b2c7-5127144a44a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-t4tc7" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.906778 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5e3df4d8-af39-4eb4-b2c7-5127144a44a6-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-t4tc7\" (UID: \"5e3df4d8-af39-4eb4-b2c7-5127144a44a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-t4tc7" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.906806 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2e510d6e-846d-4099-bc1d-d55a75969151-metrics-tls\") pod 
\"dns-operator-744455d44c-49dbl\" (UID: \"2e510d6e-846d-4099-bc1d-d55a75969151\") " pod="openshift-dns-operator/dns-operator-744455d44c-49dbl" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.906834 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5951e740-6404-42a5-867e-892f4234e62e-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-42sq5\" (UID: \"5951e740-6404-42a5-867e-892f4234e62e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-42sq5" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.906867 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ad59ee9-aae9-4af9-bcdd-abfcc3f0f15a-config\") pod \"openshift-apiserver-operator-796bbdcf4f-cjgmx\" (UID: \"7ad59ee9-aae9-4af9-bcdd-abfcc3f0f15a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cjgmx" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.906895 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5df5265a-d186-4cf0-8e03-e96b84f62a30-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-6jb4z\" (UID: \"5df5265a-d186-4cf0-8e03-e96b84f62a30\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6jb4z" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.906931 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/6cf28995-1608-4130-9284-e3d638c4cf25-audit\") pod \"apiserver-76f77b778f-vl84l\" (UID: \"6cf28995-1608-4130-9284-e3d638c4cf25\") " pod="openshift-apiserver/apiserver-76f77b778f-vl84l" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.906961 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" 
(UniqueName: \"kubernetes.io/configmap/6cf28995-1608-4130-9284-e3d638c4cf25-etcd-serving-ca\") pod \"apiserver-76f77b778f-vl84l\" (UID: \"6cf28995-1608-4130-9284-e3d638c4cf25\") " pod="openshift-apiserver/apiserver-76f77b778f-vl84l" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.906993 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5e3df4d8-af39-4eb4-b2c7-5127144a44a6-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-t4tc7\" (UID: \"5e3df4d8-af39-4eb4-b2c7-5127144a44a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-t4tc7" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.907028 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5e3df4d8-af39-4eb4-b2c7-5127144a44a6-audit-dir\") pod \"oauth-openshift-558db77b4-t4tc7\" (UID: \"5e3df4d8-af39-4eb4-b2c7-5127144a44a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-t4tc7" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.907053 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/019bf0d4-de52-4a7b-b950-4da2766cea13-config\") pod \"console-operator-58897d9998-5q47j\" (UID: \"019bf0d4-de52-4a7b-b950-4da2766cea13\") " pod="openshift-console-operator/console-operator-58897d9998-5q47j" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.907082 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/d6088e48-728e-4a96-b305-c7f86d9fe9f4-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-lcx4s\" (UID: \"d6088e48-728e-4a96-b305-c7f86d9fe9f4\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-lcx4s" Jan 27 08:56:05 
crc kubenswrapper[4985]: I0127 08:56:05.907113 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/5951e740-6404-42a5-867e-892f4234e62e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-42sq5\" (UID: \"5951e740-6404-42a5-867e-892f4234e62e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-42sq5" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.907155 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5bd4e7de-4244-4c33-90eb-799159106b7b-console-config\") pod \"console-f9d7485db-q7dv9\" (UID: \"5bd4e7de-4244-4c33-90eb-799159106b7b\") " pod="openshift-console/console-f9d7485db-q7dv9" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.907182 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8bab8050-633d-4949-9cc6-c85351f2641d-profile-collector-cert\") pod \"catalog-operator-68c6474976-lp7z4\" (UID: \"8bab8050-633d-4949-9cc6-c85351f2641d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lp7z4" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.907211 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5bd4e7de-4244-4c33-90eb-799159106b7b-oauth-serving-cert\") pod \"console-f9d7485db-q7dv9\" (UID: \"5bd4e7de-4244-4c33-90eb-799159106b7b\") " pod="openshift-console/console-f9d7485db-q7dv9" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.907241 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-968pt\" (UniqueName: \"kubernetes.io/projected/aebec426-a442-4b90-ad31-46b5e14c0aa1-kube-api-access-968pt\") 
pod \"multus-admission-controller-857f4d67dd-89tzz\" (UID: \"aebec426-a442-4b90-ad31-46b5e14c0aa1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-89tzz" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.907266 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rstzm\" (UniqueName: \"kubernetes.io/projected/6064471e-2f00-4499-b351-c1d205c81ba7-kube-api-access-rstzm\") pod \"package-server-manager-789f6589d5-n5s9g\" (UID: \"6064471e-2f00-4499-b351-c1d205c81ba7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-n5s9g" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.907303 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cczgm\" (UniqueName: \"kubernetes.io/projected/7ad59ee9-aae9-4af9-bcdd-abfcc3f0f15a-kube-api-access-cczgm\") pod \"openshift-apiserver-operator-796bbdcf4f-cjgmx\" (UID: \"7ad59ee9-aae9-4af9-bcdd-abfcc3f0f15a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cjgmx" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.907336 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/79f74589-abcf-4b67-815f-cfc142a9413f-auth-proxy-config\") pod \"machine-approver-56656f9798-lrffm\" (UID: \"79f74589-abcf-4b67-815f-cfc142a9413f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lrffm" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.907375 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/0ce21e1c-0ee3-4e71-8b52-be876c32121d-available-featuregates\") pod \"openshift-config-operator-7777fb866f-m2wsf\" (UID: \"0ce21e1c-0ee3-4e71-8b52-be876c32121d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-m2wsf" Jan 
27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.907406 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzdcl\" (UniqueName: \"kubernetes.io/projected/5ba029a9-6adf-4e07-91f7-f0d33ab0cb97-kube-api-access-pzdcl\") pod \"router-default-5444994796-wg68v\" (UID: \"5ba029a9-6adf-4e07-91f7-f0d33ab0cb97\") " pod="openshift-ingress/router-default-5444994796-wg68v" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.907438 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/4a63b2ad-84b3-4476-b253-73410ba0fed1-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-kfclz\" (UID: \"4a63b2ad-84b3-4476-b253-73410ba0fed1\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kfclz" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.907468 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cz7s4\" (UniqueName: \"kubernetes.io/projected/d6088e48-728e-4a96-b305-c7f86d9fe9f4-kube-api-access-cz7s4\") pod \"control-plane-machine-set-operator-78cbb6b69f-lcx4s\" (UID: \"d6088e48-728e-4a96-b305-c7f86d9fe9f4\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-lcx4s" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.907496 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqssz\" (UniqueName: \"kubernetes.io/projected/8bab8050-633d-4949-9cc6-c85351f2641d-kube-api-access-wqssz\") pod \"catalog-operator-68c6474976-lp7z4\" (UID: \"8bab8050-633d-4949-9cc6-c85351f2641d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lp7z4" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.907548 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/72fd06a7-765f-4f95-89f1-3bd8a0fa466b-config\") pod \"controller-manager-879f6c89f-x4cs4\" (UID: \"72fd06a7-765f-4f95-89f1-3bd8a0fa466b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-x4cs4" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.907577 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5df5265a-d186-4cf0-8e03-e96b84f62a30-serving-cert\") pod \"apiserver-7bbb656c7d-6jb4z\" (UID: \"5df5265a-d186-4cf0-8e03-e96b84f62a30\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6jb4z" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.907610 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/22c1a5b1-eda2-4098-9345-2742a61e8b20-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-sb8w5\" (UID: \"22c1a5b1-eda2-4098-9345-2742a61e8b20\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sb8w5" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.907641 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfgm4\" (UniqueName: \"kubernetes.io/projected/a9f39981-0c5b-4358-a7f7-41165d56405b-kube-api-access-wfgm4\") pod \"machine-api-operator-5694c8668f-bskcz\" (UID: \"a9f39981-0c5b-4358-a7f7-41165d56405b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bskcz" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.907670 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7eca83d-b3cb-484f-9e20-f04ceedd8c99-serving-cert\") pod \"route-controller-manager-6576b87f9c-jjksr\" (UID: \"c7eca83d-b3cb-484f-9e20-f04ceedd8c99\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jjksr" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.907954 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d51bccbd-3298-4050-b817-450afdcd31ea-serving-cert\") pod \"service-ca-operator-777779d784-l8j28\" (UID: \"d51bccbd-3298-4050-b817-450afdcd31ea\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-l8j28" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.907994 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhrf5\" (UniqueName: \"kubernetes.io/projected/4a63b2ad-84b3-4476-b253-73410ba0fed1-kube-api-access-bhrf5\") pod \"cluster-samples-operator-665b6dd947-kfclz\" (UID: \"4a63b2ad-84b3-4476-b253-73410ba0fed1\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kfclz" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.908026 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/967e6ec0-0c85-4a3d-abf5-db10daf91f5c-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-p29gl\" (UID: \"967e6ec0-0c85-4a3d-abf5-db10daf91f5c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-p29gl" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.908059 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ba2694c4-c10f-42d7-96f5-4b47a4206710-signing-cabundle\") pod \"service-ca-9c57cc56f-bp57q\" (UID: \"ba2694c4-c10f-42d7-96f5-4b47a4206710\") " pod="openshift-service-ca/service-ca-9c57cc56f-bp57q" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.908085 4985 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5df5265a-d186-4cf0-8e03-e96b84f62a30-audit-dir\") pod \"apiserver-7bbb656c7d-6jb4z\" (UID: \"5df5265a-d186-4cf0-8e03-e96b84f62a30\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6jb4z" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.908114 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6cf28995-1608-4130-9284-e3d638c4cf25-trusted-ca-bundle\") pod \"apiserver-76f77b778f-vl84l\" (UID: \"6cf28995-1608-4130-9284-e3d638c4cf25\") " pod="openshift-apiserver/apiserver-76f77b778f-vl84l" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.908144 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/72fd06a7-765f-4f95-89f1-3bd8a0fa466b-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-x4cs4\" (UID: \"72fd06a7-765f-4f95-89f1-3bd8a0fa466b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-x4cs4" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.908173 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shb5f\" (UniqueName: \"kubernetes.io/projected/c7eca83d-b3cb-484f-9e20-f04ceedd8c99-kube-api-access-shb5f\") pod \"route-controller-manager-6576b87f9c-jjksr\" (UID: \"c7eca83d-b3cb-484f-9e20-f04ceedd8c99\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jjksr" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.908203 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtnrw\" (UniqueName: \"kubernetes.io/projected/5e3df4d8-af39-4eb4-b2c7-5127144a44a6-kube-api-access-jtnrw\") pod \"oauth-openshift-558db77b4-t4tc7\" (UID: \"5e3df4d8-af39-4eb4-b2c7-5127144a44a6\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-t4tc7" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.908230 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d51bccbd-3298-4050-b817-450afdcd31ea-config\") pod \"service-ca-operator-777779d784-l8j28\" (UID: \"d51bccbd-3298-4050-b817-450afdcd31ea\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-l8j28" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.908264 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gc9bd\" (UniqueName: \"kubernetes.io/projected/f7e5fb60-49e2-4aec-be4d-71f7f0dd4ea1-kube-api-access-gc9bd\") pod \"downloads-7954f5f757-qx7rg\" (UID: \"f7e5fb60-49e2-4aec-be4d-71f7f0dd4ea1\") " pod="openshift-console/downloads-7954f5f757-qx7rg" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.908295 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6cf28995-1608-4130-9284-e3d638c4cf25-node-pullsecrets\") pod \"apiserver-76f77b778f-vl84l\" (UID: \"6cf28995-1608-4130-9284-e3d638c4cf25\") " pod="openshift-apiserver/apiserver-76f77b778f-vl84l" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.908324 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6cf28995-1608-4130-9284-e3d638c4cf25-audit-dir\") pod \"apiserver-76f77b778f-vl84l\" (UID: \"6cf28995-1608-4130-9284-e3d638c4cf25\") " pod="openshift-apiserver/apiserver-76f77b778f-vl84l" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.908354 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/6cf28995-1608-4130-9284-e3d638c4cf25-image-import-ca\") pod \"apiserver-76f77b778f-vl84l\" 
(UID: \"6cf28995-1608-4130-9284-e3d638c4cf25\") " pod="openshift-apiserver/apiserver-76f77b778f-vl84l" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.908377 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6cf28995-1608-4130-9284-e3d638c4cf25-serving-cert\") pod \"apiserver-76f77b778f-vl84l\" (UID: \"6cf28995-1608-4130-9284-e3d638c4cf25\") " pod="openshift-apiserver/apiserver-76f77b778f-vl84l" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.908408 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/5ba029a9-6adf-4e07-91f7-f0d33ab0cb97-default-certificate\") pod \"router-default-5444994796-wg68v\" (UID: \"5ba029a9-6adf-4e07-91f7-f0d33ab0cb97\") " pod="openshift-ingress/router-default-5444994796-wg68v" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.908438 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bd4e7de-4244-4c33-90eb-799159106b7b-trusted-ca-bundle\") pod \"console-f9d7485db-q7dv9\" (UID: \"5bd4e7de-4244-4c33-90eb-799159106b7b\") " pod="openshift-console/console-f9d7485db-q7dv9" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.908466 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7ad59ee9-aae9-4af9-bcdd-abfcc3f0f15a-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-cjgmx\" (UID: \"7ad59ee9-aae9-4af9-bcdd-abfcc3f0f15a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cjgmx" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.908494 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/31009973-dd0c-43a1-8a33-4a7aba2a74da-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-rnllt\" (UID: \"31009973-dd0c-43a1-8a33-4a7aba2a74da\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rnllt" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.908580 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54sjj\" (UniqueName: \"kubernetes.io/projected/019bf0d4-de52-4a7b-b950-4da2766cea13-kube-api-access-54sjj\") pod \"console-operator-58897d9998-5q47j\" (UID: \"019bf0d4-de52-4a7b-b950-4da2766cea13\") " pod="openshift-console-operator/console-operator-58897d9998-5q47j" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.908614 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/967e6ec0-0c85-4a3d-abf5-db10daf91f5c-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-p29gl\" (UID: \"967e6ec0-0c85-4a3d-abf5-db10daf91f5c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-p29gl" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.908679 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7d918baa-44fd-4067-8e83-5da61aedf201-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-j6mc9\" (UID: \"7d918baa-44fd-4067-8e83-5da61aedf201\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j6mc9" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.908707 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/5ba029a9-6adf-4e07-91f7-f0d33ab0cb97-stats-auth\") pod \"router-default-5444994796-wg68v\" (UID: 
\"5ba029a9-6adf-4e07-91f7-f0d33ab0cb97\") " pod="openshift-ingress/router-default-5444994796-wg68v" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.908733 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmv58\" (UniqueName: \"kubernetes.io/projected/5bd4e7de-4244-4c33-90eb-799159106b7b-kube-api-access-kmv58\") pod \"console-f9d7485db-q7dv9\" (UID: \"5bd4e7de-4244-4c33-90eb-799159106b7b\") " pod="openshift-console/console-f9d7485db-q7dv9" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.908764 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpvxl\" (UniqueName: \"kubernetes.io/projected/72fd06a7-765f-4f95-89f1-3bd8a0fa466b-kube-api-access-gpvxl\") pod \"controller-manager-879f6c89f-x4cs4\" (UID: \"72fd06a7-765f-4f95-89f1-3bd8a0fa466b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-x4cs4" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.908792 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ba029a9-6adf-4e07-91f7-f0d33ab0cb97-service-ca-bundle\") pod \"router-default-5444994796-wg68v\" (UID: \"5ba029a9-6adf-4e07-91f7-f0d33ab0cb97\") " pod="openshift-ingress/router-default-5444994796-wg68v" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.908860 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c1a5b1-eda2-4098-9345-2742a61e8b20-config\") pod \"kube-controller-manager-operator-78b949d7b-sb8w5\" (UID: \"22c1a5b1-eda2-4098-9345-2742a61e8b20\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sb8w5" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.908884 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/5bd4e7de-4244-4c33-90eb-799159106b7b-service-ca\") pod \"console-f9d7485db-q7dv9\" (UID: \"5bd4e7de-4244-4c33-90eb-799159106b7b\") " pod="openshift-console/console-f9d7485db-q7dv9" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.908966 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/6064471e-2f00-4499-b351-c1d205c81ba7-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-n5s9g\" (UID: \"6064471e-2f00-4499-b351-c1d205c81ba7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-n5s9g" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.909002 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5e3df4d8-af39-4eb4-b2c7-5127144a44a6-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-t4tc7\" (UID: \"5e3df4d8-af39-4eb4-b2c7-5127144a44a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-t4tc7" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.909032 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5e3df4d8-af39-4eb4-b2c7-5127144a44a6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-t4tc7\" (UID: \"5e3df4d8-af39-4eb4-b2c7-5127144a44a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-t4tc7" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.909061 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5e3df4d8-af39-4eb4-b2c7-5127144a44a6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-t4tc7\" (UID: \"5e3df4d8-af39-4eb4-b2c7-5127144a44a6\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-t4tc7" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.909087 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/019bf0d4-de52-4a7b-b950-4da2766cea13-serving-cert\") pod \"console-operator-58897d9998-5q47j\" (UID: \"019bf0d4-de52-4a7b-b950-4da2766cea13\") " pod="openshift-console-operator/console-operator-58897d9998-5q47j" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.909114 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9f39981-0c5b-4358-a7f7-41165d56405b-config\") pod \"machine-api-operator-5694c8668f-bskcz\" (UID: \"a9f39981-0c5b-4358-a7f7-41165d56405b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bskcz" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.909142 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ba2694c4-c10f-42d7-96f5-4b47a4206710-signing-key\") pod \"service-ca-9c57cc56f-bp57q\" (UID: \"ba2694c4-c10f-42d7-96f5-4b47a4206710\") " pod="openshift-service-ca/service-ca-9c57cc56f-bp57q" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.909302 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/31009973-dd0c-43a1-8a33-4a7aba2a74da-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-rnllt\" (UID: \"31009973-dd0c-43a1-8a33-4a7aba2a74da\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rnllt" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.909327 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/5df5265a-d186-4cf0-8e03-e96b84f62a30-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-6jb4z\" (UID: \"5df5265a-d186-4cf0-8e03-e96b84f62a30\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6jb4z" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.909360 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/a9f39981-0c5b-4358-a7f7-41165d56405b-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-bskcz\" (UID: \"a9f39981-0c5b-4358-a7f7-41165d56405b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bskcz" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.909491 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ce21e1c-0ee3-4e71-8b52-be876c32121d-serving-cert\") pod \"openshift-config-operator-7777fb866f-m2wsf\" (UID: \"0ce21e1c-0ee3-4e71-8b52-be876c32121d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-m2wsf" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.909558 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5e3df4d8-af39-4eb4-b2c7-5127144a44a6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-t4tc7\" (UID: \"5e3df4d8-af39-4eb4-b2c7-5127144a44a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-t4tc7" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.909584 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5ba029a9-6adf-4e07-91f7-f0d33ab0cb97-metrics-certs\") pod \"router-default-5444994796-wg68v\" (UID: \"5ba029a9-6adf-4e07-91f7-f0d33ab0cb97\") " pod="openshift-ingress/router-default-5444994796-wg68v" Jan 27 08:56:05 crc 
kubenswrapper[4985]: I0127 08:56:05.909613 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22c1a5b1-eda2-4098-9345-2742a61e8b20-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-sb8w5\" (UID: \"22c1a5b1-eda2-4098-9345-2742a61e8b20\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sb8w5" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.909755 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5df5265a-d186-4cf0-8e03-e96b84f62a30-audit-policies\") pod \"apiserver-7bbb656c7d-6jb4z\" (UID: \"5df5265a-d186-4cf0-8e03-e96b84f62a30\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6jb4z" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.909790 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e77a9fac-a804-4afa-a69a-1abcd4e81281-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-t2jvg\" (UID: \"e77a9fac-a804-4afa-a69a-1abcd4e81281\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-t2jvg" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.909898 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2bdbcc5d-2414-45ec-b1b4-c64f186361bb-config\") pod \"kube-apiserver-operator-766d6c64bb-bfdjc\" (UID: \"2bdbcc5d-2414-45ec-b1b4-c64f186361bb\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bfdjc" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.909935 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: 
\"kubernetes.io/secret/79f74589-abcf-4b67-815f-cfc142a9413f-machine-approver-tls\") pod \"machine-approver-56656f9798-lrffm\" (UID: \"79f74589-abcf-4b67-815f-cfc142a9413f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lrffm" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.909964 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79f74589-abcf-4b67-815f-cfc142a9413f-config\") pod \"machine-approver-56656f9798-lrffm\" (UID: \"79f74589-abcf-4b67-815f-cfc142a9413f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lrffm" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.910086 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5e3df4d8-af39-4eb4-b2c7-5127144a44a6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-t4tc7\" (UID: \"5e3df4d8-af39-4eb4-b2c7-5127144a44a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-t4tc7" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.910116 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nscfk\" (UniqueName: \"kubernetes.io/projected/d51bccbd-3298-4050-b817-450afdcd31ea-kube-api-access-nscfk\") pod \"service-ca-operator-777779d784-l8j28\" (UID: \"d51bccbd-3298-4050-b817-450afdcd31ea\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-l8j28" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.910140 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2bdbcc5d-2414-45ec-b1b4-c64f186361bb-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-bfdjc\" (UID: \"2bdbcc5d-2414-45ec-b1b4-c64f186361bb\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bfdjc" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.910197 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a9f39981-0c5b-4358-a7f7-41165d56405b-images\") pod \"machine-api-operator-5694c8668f-bskcz\" (UID: \"a9f39981-0c5b-4358-a7f7-41165d56405b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bskcz" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.910226 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7d918baa-44fd-4067-8e83-5da61aedf201-proxy-tls\") pod \"machine-config-controller-84d6567774-j6mc9\" (UID: \"7d918baa-44fd-4067-8e83-5da61aedf201\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j6mc9" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.910258 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tt754\" (UniqueName: \"kubernetes.io/projected/0ce21e1c-0ee3-4e71-8b52-be876c32121d-kube-api-access-tt754\") pod \"openshift-config-operator-7777fb866f-m2wsf\" (UID: \"0ce21e1c-0ee3-4e71-8b52-be876c32121d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-m2wsf" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.910376 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5df5265a-d186-4cf0-8e03-e96b84f62a30-encryption-config\") pod \"apiserver-7bbb656c7d-6jb4z\" (UID: \"5df5265a-d186-4cf0-8e03-e96b84f62a30\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6jb4z" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.911721 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-sn5kc\" (UniqueName: \"kubernetes.io/projected/5df5265a-d186-4cf0-8e03-e96b84f62a30-kube-api-access-sn5kc\") pod \"apiserver-7bbb656c7d-6jb4z\" (UID: \"5df5265a-d186-4cf0-8e03-e96b84f62a30\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6jb4z" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.911775 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bqdg\" (UniqueName: \"kubernetes.io/projected/6cf28995-1608-4130-9284-e3d638c4cf25-kube-api-access-9bqdg\") pod \"apiserver-76f77b778f-vl84l\" (UID: \"6cf28995-1608-4130-9284-e3d638c4cf25\") " pod="openshift-apiserver/apiserver-76f77b778f-vl84l" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.911808 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/967e6ec0-0c85-4a3d-abf5-db10daf91f5c-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-p29gl\" (UID: \"967e6ec0-0c85-4a3d-abf5-db10daf91f5c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-p29gl" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.911844 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5d7vx\" (UniqueName: \"kubernetes.io/projected/7d918baa-44fd-4067-8e83-5da61aedf201-kube-api-access-5d7vx\") pod \"machine-config-controller-84d6567774-j6mc9\" (UID: \"7d918baa-44fd-4067-8e83-5da61aedf201\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j6mc9" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.911870 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8bab8050-633d-4949-9cc6-c85351f2641d-srv-cert\") pod \"catalog-operator-68c6474976-lp7z4\" (UID: \"8bab8050-633d-4949-9cc6-c85351f2641d\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lp7z4" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.911905 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5e3df4d8-af39-4eb4-b2c7-5127144a44a6-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-t4tc7\" (UID: \"5e3df4d8-af39-4eb4-b2c7-5127144a44a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-t4tc7" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.911935 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2bdbcc5d-2414-45ec-b1b4-c64f186361bb-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-bfdjc\" (UID: \"2bdbcc5d-2414-45ec-b1b4-c64f186361bb\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bfdjc" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.911987 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72fd06a7-765f-4f95-89f1-3bd8a0fa466b-serving-cert\") pod \"controller-manager-879f6c89f-x4cs4\" (UID: \"72fd06a7-765f-4f95-89f1-3bd8a0fa466b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-x4cs4" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.912023 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5951e740-6404-42a5-867e-892f4234e62e-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-42sq5\" (UID: \"5951e740-6404-42a5-867e-892f4234e62e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-42sq5" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.912049 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etcd-client\" (UniqueName: \"kubernetes.io/secret/6cf28995-1608-4130-9284-e3d638c4cf25-etcd-client\") pod \"apiserver-76f77b778f-vl84l\" (UID: \"6cf28995-1608-4130-9284-e3d638c4cf25\") " pod="openshift-apiserver/apiserver-76f77b778f-vl84l" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.912082 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5e3df4d8-af39-4eb4-b2c7-5127144a44a6-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-t4tc7\" (UID: \"5e3df4d8-af39-4eb4-b2c7-5127144a44a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-t4tc7" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.912111 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/019bf0d4-de52-4a7b-b950-4da2766cea13-trusted-ca\") pod \"console-operator-58897d9998-5q47j\" (UID: \"019bf0d4-de52-4a7b-b950-4da2766cea13\") " pod="openshift-console-operator/console-operator-58897d9998-5q47j" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.912143 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55cp8\" (UniqueName: \"kubernetes.io/projected/7d5439e4-b788-4576-9108-32f6889511dc-kube-api-access-55cp8\") pod \"migrator-59844c95c7-b62st\" (UID: \"7d5439e4-b788-4576-9108-32f6889511dc\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-b62st" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.912167 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5e3df4d8-af39-4eb4-b2c7-5127144a44a6-audit-policies\") pod \"oauth-openshift-558db77b4-t4tc7\" (UID: \"5e3df4d8-af39-4eb4-b2c7-5127144a44a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-t4tc7" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.912197 
4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sb2nb\" (UniqueName: \"kubernetes.io/projected/31009973-dd0c-43a1-8a33-4a7aba2a74da-kube-api-access-sb2nb\") pod \"kube-storage-version-migrator-operator-b67b599dd-rnllt\" (UID: \"31009973-dd0c-43a1-8a33-4a7aba2a74da\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rnllt" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.912230 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5df5265a-d186-4cf0-8e03-e96b84f62a30-etcd-client\") pod \"apiserver-7bbb656c7d-6jb4z\" (UID: \"5df5265a-d186-4cf0-8e03-e96b84f62a30\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6jb4z" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.912267 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cf28995-1608-4130-9284-e3d638c4cf25-config\") pod \"apiserver-76f77b778f-vl84l\" (UID: \"6cf28995-1608-4130-9284-e3d638c4cf25\") " pod="openshift-apiserver/apiserver-76f77b778f-vl84l" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.912293 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxztc\" (UniqueName: \"kubernetes.io/projected/e77a9fac-a804-4afa-a69a-1abcd4e81281-kube-api-access-hxztc\") pod \"openshift-controller-manager-operator-756b6f6bc6-t2jvg\" (UID: \"e77a9fac-a804-4afa-a69a-1abcd4e81281\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-t2jvg" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.912325 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7eca83d-b3cb-484f-9e20-f04ceedd8c99-config\") pod 
\"route-controller-manager-6576b87f9c-jjksr\" (UID: \"c7eca83d-b3cb-484f-9e20-f04ceedd8c99\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jjksr" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.912356 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c7eca83d-b3cb-484f-9e20-f04ceedd8c99-client-ca\") pod \"route-controller-manager-6576b87f9c-jjksr\" (UID: \"c7eca83d-b3cb-484f-9e20-f04ceedd8c99\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jjksr" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.915019 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-x4cs4"] Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.915071 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-5q47j"] Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.916213 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-qx7rg"] Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.918914 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5e3df4d8-af39-4eb4-b2c7-5127144a44a6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-t4tc7\" (UID: \"5e3df4d8-af39-4eb4-b2c7-5127144a44a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-t4tc7" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.919174 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.923331 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/6cf28995-1608-4130-9284-e3d638c4cf25-audit-dir\") pod \"apiserver-76f77b778f-vl84l\" (UID: \"6cf28995-1608-4130-9284-e3d638c4cf25\") " pod="openshift-apiserver/apiserver-76f77b778f-vl84l" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.928458 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5e3df4d8-af39-4eb4-b2c7-5127144a44a6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-t4tc7\" (UID: \"5e3df4d8-af39-4eb4-b2c7-5127144a44a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-t4tc7" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.929138 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6cf28995-1608-4130-9284-e3d638c4cf25-trusted-ca-bundle\") pod \"apiserver-76f77b778f-vl84l\" (UID: \"6cf28995-1608-4130-9284-e3d638c4cf25\") " pod="openshift-apiserver/apiserver-76f77b778f-vl84l" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.929471 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5e3df4d8-af39-4eb4-b2c7-5127144a44a6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-t4tc7\" (UID: \"5e3df4d8-af39-4eb4-b2c7-5127144a44a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-t4tc7" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.929958 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bd4e7de-4244-4c33-90eb-799159106b7b-trusted-ca-bundle\") pod \"console-f9d7485db-q7dv9\" (UID: \"5bd4e7de-4244-4c33-90eb-799159106b7b\") " pod="openshift-console/console-f9d7485db-q7dv9" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.931311 4985 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72fd06a7-765f-4f95-89f1-3bd8a0fa466b-config\") pod \"controller-manager-879f6c89f-x4cs4\" (UID: \"72fd06a7-765f-4f95-89f1-3bd8a0fa466b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-x4cs4" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.932922 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5bd4e7de-4244-4c33-90eb-799159106b7b-oauth-serving-cert\") pod \"console-f9d7485db-q7dv9\" (UID: \"5bd4e7de-4244-4c33-90eb-799159106b7b\") " pod="openshift-console/console-f9d7485db-q7dv9" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.935107 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e77a9fac-a804-4afa-a69a-1abcd4e81281-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-t2jvg\" (UID: \"e77a9fac-a804-4afa-a69a-1abcd4e81281\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-t2jvg" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.935734 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/019bf0d4-de52-4a7b-b950-4da2766cea13-config\") pod \"console-operator-58897d9998-5q47j\" (UID: \"019bf0d4-de52-4a7b-b950-4da2766cea13\") " pod="openshift-console-operator/console-operator-58897d9998-5q47j" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.936039 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cjgmx"] Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.936703 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/72fd06a7-765f-4f95-89f1-3bd8a0fa466b-client-ca\") pod \"controller-manager-879f6c89f-x4cs4\" (UID: 
\"72fd06a7-765f-4f95-89f1-3bd8a0fa466b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-x4cs4" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.937164 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e77a9fac-a804-4afa-a69a-1abcd4e81281-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-t2jvg\" (UID: \"e77a9fac-a804-4afa-a69a-1abcd4e81281\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-t2jvg" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.937270 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9f39981-0c5b-4358-a7f7-41165d56405b-config\") pod \"machine-api-operator-5694c8668f-bskcz\" (UID: \"a9f39981-0c5b-4358-a7f7-41165d56405b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bskcz" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.937435 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72fd06a7-765f-4f95-89f1-3bd8a0fa466b-serving-cert\") pod \"controller-manager-879f6c89f-x4cs4\" (UID: \"72fd06a7-765f-4f95-89f1-3bd8a0fa466b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-x4cs4" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.938216 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5e3df4d8-af39-4eb4-b2c7-5127144a44a6-audit-dir\") pod \"oauth-openshift-558db77b4-t4tc7\" (UID: \"5e3df4d8-af39-4eb4-b2c7-5127144a44a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-t4tc7" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.938784 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/79f74589-abcf-4b67-815f-cfc142a9413f-auth-proxy-config\") pod \"machine-approver-56656f9798-lrffm\" (UID: \"79f74589-abcf-4b67-815f-cfc142a9413f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lrffm" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.940361 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6cf28995-1608-4130-9284-e3d638c4cf25-node-pullsecrets\") pod \"apiserver-76f77b778f-vl84l\" (UID: \"6cf28995-1608-4130-9284-e3d638c4cf25\") " pod="openshift-apiserver/apiserver-76f77b778f-vl84l" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.940960 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.941880 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79f74589-abcf-4b67-815f-cfc142a9413f-config\") pod \"machine-approver-56656f9798-lrffm\" (UID: \"79f74589-abcf-4b67-815f-cfc142a9413f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lrffm" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.942377 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5e3df4d8-af39-4eb4-b2c7-5127144a44a6-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-t4tc7\" (UID: \"5e3df4d8-af39-4eb4-b2c7-5127144a44a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-t4tc7" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.942986 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/019bf0d4-de52-4a7b-b950-4da2766cea13-trusted-ca\") pod \"console-operator-58897d9998-5q47j\" (UID: \"019bf0d4-de52-4a7b-b950-4da2766cea13\") " 
pod="openshift-console-operator/console-operator-58897d9998-5q47j" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.943121 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a9f39981-0c5b-4358-a7f7-41165d56405b-images\") pod \"machine-api-operator-5694c8668f-bskcz\" (UID: \"a9f39981-0c5b-4358-a7f7-41165d56405b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bskcz" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.943262 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-t2jvg"] Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.943775 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/019bf0d4-de52-4a7b-b950-4da2766cea13-serving-cert\") pod \"console-operator-58897d9998-5q47j\" (UID: \"019bf0d4-de52-4a7b-b950-4da2766cea13\") " pod="openshift-console-operator/console-operator-58897d9998-5q47j" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.943916 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/72fd06a7-765f-4f95-89f1-3bd8a0fa466b-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-x4cs4\" (UID: \"72fd06a7-765f-4f95-89f1-3bd8a0fa466b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-x4cs4" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.944718 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-jjksr"] Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.946169 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6cf28995-1608-4130-9284-e3d638c4cf25-serving-cert\") pod \"apiserver-76f77b778f-vl84l\" (UID: 
\"6cf28995-1608-4130-9284-e3d638c4cf25\") " pod="openshift-apiserver/apiserver-76f77b778f-vl84l" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.946545 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6cf28995-1608-4130-9284-e3d638c4cf25-etcd-client\") pod \"apiserver-76f77b778f-vl84l\" (UID: \"6cf28995-1608-4130-9284-e3d638c4cf25\") " pod="openshift-apiserver/apiserver-76f77b778f-vl84l" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.946746 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-bskcz"] Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.946832 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5e3df4d8-af39-4eb4-b2c7-5127144a44a6-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-t4tc7\" (UID: \"5e3df4d8-af39-4eb4-b2c7-5127144a44a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-t4tc7" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.947227 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/79f74589-abcf-4b67-815f-cfc142a9413f-machine-approver-tls\") pod \"machine-approver-56656f9798-lrffm\" (UID: \"79f74589-abcf-4b67-815f-cfc142a9413f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lrffm" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.947845 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ad59ee9-aae9-4af9-bcdd-abfcc3f0f15a-config\") pod \"openshift-apiserver-operator-796bbdcf4f-cjgmx\" (UID: \"7ad59ee9-aae9-4af9-bcdd-abfcc3f0f15a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cjgmx" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.947881 
4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5bd4e7de-4244-4c33-90eb-799159106b7b-console-serving-cert\") pod \"console-f9d7485db-q7dv9\" (UID: \"5bd4e7de-4244-4c33-90eb-799159106b7b\") " pod="openshift-console/console-f9d7485db-q7dv9" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.948265 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5bd4e7de-4244-4c33-90eb-799159106b7b-console-oauth-config\") pod \"console-f9d7485db-q7dv9\" (UID: \"5bd4e7de-4244-4c33-90eb-799159106b7b\") " pod="openshift-console/console-f9d7485db-q7dv9" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.948472 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5e3df4d8-af39-4eb4-b2c7-5127144a44a6-audit-policies\") pod \"oauth-openshift-558db77b4-t4tc7\" (UID: \"5e3df4d8-af39-4eb4-b2c7-5127144a44a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-t4tc7" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.948608 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kfclz"] Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.948269 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5e3df4d8-af39-4eb4-b2c7-5127144a44a6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-t4tc7\" (UID: \"5e3df4d8-af39-4eb4-b2c7-5127144a44a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-t4tc7" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.948687 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/c7eca83d-b3cb-484f-9e20-f04ceedd8c99-client-ca\") pod \"route-controller-manager-6576b87f9c-jjksr\" (UID: \"c7eca83d-b3cb-484f-9e20-f04ceedd8c99\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jjksr" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.948820 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5e3df4d8-af39-4eb4-b2c7-5127144a44a6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-t4tc7\" (UID: \"5e3df4d8-af39-4eb4-b2c7-5127144a44a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-t4tc7" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.949118 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cf28995-1608-4130-9284-e3d638c4cf25-config\") pod \"apiserver-76f77b778f-vl84l\" (UID: \"6cf28995-1608-4130-9284-e3d638c4cf25\") " pod="openshift-apiserver/apiserver-76f77b778f-vl84l" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.949580 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5e3df4d8-af39-4eb4-b2c7-5127144a44a6-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-t4tc7\" (UID: \"5e3df4d8-af39-4eb4-b2c7-5127144a44a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-t4tc7" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.949674 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/6cf28995-1608-4130-9284-e3d638c4cf25-audit\") pod \"apiserver-76f77b778f-vl84l\" (UID: \"6cf28995-1608-4130-9284-e3d638c4cf25\") " pod="openshift-apiserver/apiserver-76f77b778f-vl84l" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.949772 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6cf28995-1608-4130-9284-e3d638c4cf25-encryption-config\") pod \"apiserver-76f77b778f-vl84l\" (UID: \"6cf28995-1608-4130-9284-e3d638c4cf25\") " pod="openshift-apiserver/apiserver-76f77b778f-vl84l" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.949819 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-t4tc7"] Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.950031 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6cf28995-1608-4130-9284-e3d638c4cf25-etcd-serving-ca\") pod \"apiserver-76f77b778f-vl84l\" (UID: \"6cf28995-1608-4130-9284-e3d638c4cf25\") " pod="openshift-apiserver/apiserver-76f77b778f-vl84l" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.950056 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5bd4e7de-4244-4c33-90eb-799159106b7b-service-ca\") pod \"console-f9d7485db-q7dv9\" (UID: \"5bd4e7de-4244-4c33-90eb-799159106b7b\") " pod="openshift-console/console-f9d7485db-q7dv9" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.950125 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7eca83d-b3cb-484f-9e20-f04ceedd8c99-serving-cert\") pod \"route-controller-manager-6576b87f9c-jjksr\" (UID: \"c7eca83d-b3cb-484f-9e20-f04ceedd8c99\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jjksr" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.950219 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5bd4e7de-4244-4c33-90eb-799159106b7b-console-config\") pod \"console-f9d7485db-q7dv9\" (UID: \"5bd4e7de-4244-4c33-90eb-799159106b7b\") " 
pod="openshift-console/console-f9d7485db-q7dv9" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.950320 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5e3df4d8-af39-4eb4-b2c7-5127144a44a6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-t4tc7\" (UID: \"5e3df4d8-af39-4eb4-b2c7-5127144a44a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-t4tc7" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.950375 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/6cf28995-1608-4130-9284-e3d638c4cf25-image-import-ca\") pod \"apiserver-76f77b778f-vl84l\" (UID: \"6cf28995-1608-4130-9284-e3d638c4cf25\") " pod="openshift-apiserver/apiserver-76f77b778f-vl84l" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.951394 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-vl84l"] Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.951611 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5e3df4d8-af39-4eb4-b2c7-5127144a44a6-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-t4tc7\" (UID: \"5e3df4d8-af39-4eb4-b2c7-5127144a44a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-t4tc7" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.952324 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/a9f39981-0c5b-4358-a7f7-41165d56405b-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-bskcz\" (UID: \"a9f39981-0c5b-4358-a7f7-41165d56405b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bskcz" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.952457 4985 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-49dbl"] Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.952791 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7eca83d-b3cb-484f-9e20-f04ceedd8c99-config\") pod \"route-controller-manager-6576b87f9c-jjksr\" (UID: \"c7eca83d-b3cb-484f-9e20-f04ceedd8c99\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jjksr" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.953728 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-42sq5"] Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.954864 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-sw5fd"] Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.956731 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.958213 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-jn7wk"] Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.959256 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-m2wsf"] Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.960056 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5e3df4d8-af39-4eb4-b2c7-5127144a44a6-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-t4tc7\" (UID: \"5e3df4d8-af39-4eb4-b2c7-5127144a44a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-t4tc7" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.960347 4985 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-ingress-canary/ingress-canary-jxkz4"] Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.961200 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-jxkz4" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.961208 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7ad59ee9-aae9-4af9-bcdd-abfcc3f0f15a-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-cjgmx\" (UID: \"7ad59ee9-aae9-4af9-bcdd-abfcc3f0f15a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cjgmx" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.961441 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-vmpf5"] Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.962609 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-vmpf5" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.962913 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bfdjc"] Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.964223 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-p29gl"] Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.965894 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-lcx4s"] Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.967486 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-b62st"] Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.968753 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-j6mc9"] Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.971030 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-mcgvv"] Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.972300 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rnllt"] Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.973455 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-ggt4m"] Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.976003 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-6jb4z"] Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.976533 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.978271 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-n5s9g"] Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.982533 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-q7dv9"] Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.984888 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-bqt6d"] Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.988590 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-vmpf5"] Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.989691 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lp7z4"] Jan 27 08:56:05 
crc kubenswrapper[4985]: I0127 08:56:05.990422 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qcnmg"] Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.991952 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-bp57q"] Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.992456 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-89tzz"] Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.993661 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sb8w5"] Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.994817 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c86qm"] Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.995686 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-l8j28"] Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.997088 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-jxkz4"] Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.997583 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.998417 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491725-mxclz"] Jan 27 08:56:05 crc kubenswrapper[4985]: I0127 08:56:05.999795 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cfmwq"] Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.001175 4985 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["hostpath-provisioner/csi-hostpathplugin-slj2d"] Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.002271 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-slj2d" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.002297 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-tm467"] Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.003014 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-tm467" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.003595 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-slj2d"] Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.015698 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ba029a9-6adf-4e07-91f7-f0d33ab0cb97-service-ca-bundle\") pod \"router-default-5444994796-wg68v\" (UID: \"5ba029a9-6adf-4e07-91f7-f0d33ab0cb97\") " pod="openshift-ingress/router-default-5444994796-wg68v" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.015765 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c1a5b1-eda2-4098-9345-2742a61e8b20-config\") pod \"kube-controller-manager-operator-78b949d7b-sb8w5\" (UID: \"22c1a5b1-eda2-4098-9345-2742a61e8b20\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sb8w5" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.015792 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/6064471e-2f00-4499-b351-c1d205c81ba7-package-server-manager-serving-cert\") pod 
\"package-server-manager-789f6589d5-n5s9g\" (UID: \"6064471e-2f00-4499-b351-c1d205c81ba7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-n5s9g" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.015820 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ba2694c4-c10f-42d7-96f5-4b47a4206710-signing-key\") pod \"service-ca-9c57cc56f-bp57q\" (UID: \"ba2694c4-c10f-42d7-96f5-4b47a4206710\") " pod="openshift-service-ca/service-ca-9c57cc56f-bp57q" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.015848 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ce21e1c-0ee3-4e71-8b52-be876c32121d-serving-cert\") pod \"openshift-config-operator-7777fb866f-m2wsf\" (UID: \"0ce21e1c-0ee3-4e71-8b52-be876c32121d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-m2wsf" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.015867 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/31009973-dd0c-43a1-8a33-4a7aba2a74da-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-rnllt\" (UID: \"31009973-dd0c-43a1-8a33-4a7aba2a74da\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rnllt" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.015892 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5df5265a-d186-4cf0-8e03-e96b84f62a30-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-6jb4z\" (UID: \"5df5265a-d186-4cf0-8e03-e96b84f62a30\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6jb4z" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.015916 4985 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5ba029a9-6adf-4e07-91f7-f0d33ab0cb97-metrics-certs\") pod \"router-default-5444994796-wg68v\" (UID: \"5ba029a9-6adf-4e07-91f7-f0d33ab0cb97\") " pod="openshift-ingress/router-default-5444994796-wg68v" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.015936 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22c1a5b1-eda2-4098-9345-2742a61e8b20-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-sb8w5\" (UID: \"22c1a5b1-eda2-4098-9345-2742a61e8b20\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sb8w5" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.015957 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5df5265a-d186-4cf0-8e03-e96b84f62a30-audit-policies\") pod \"apiserver-7bbb656c7d-6jb4z\" (UID: \"5df5265a-d186-4cf0-8e03-e96b84f62a30\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6jb4z" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.015981 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2bdbcc5d-2414-45ec-b1b4-c64f186361bb-config\") pod \"kube-apiserver-operator-766d6c64bb-bfdjc\" (UID: \"2bdbcc5d-2414-45ec-b1b4-c64f186361bb\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bfdjc" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.016011 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nscfk\" (UniqueName: \"kubernetes.io/projected/d51bccbd-3298-4050-b817-450afdcd31ea-kube-api-access-nscfk\") pod \"service-ca-operator-777779d784-l8j28\" (UID: \"d51bccbd-3298-4050-b817-450afdcd31ea\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-l8j28" Jan 27 08:56:06 
crc kubenswrapper[4985]: I0127 08:56:06.016038 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2bdbcc5d-2414-45ec-b1b4-c64f186361bb-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-bfdjc\" (UID: \"2bdbcc5d-2414-45ec-b1b4-c64f186361bb\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bfdjc" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.016065 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tt754\" (UniqueName: \"kubernetes.io/projected/0ce21e1c-0ee3-4e71-8b52-be876c32121d-kube-api-access-tt754\") pod \"openshift-config-operator-7777fb866f-m2wsf\" (UID: \"0ce21e1c-0ee3-4e71-8b52-be876c32121d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-m2wsf" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.016090 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5df5265a-d186-4cf0-8e03-e96b84f62a30-encryption-config\") pod \"apiserver-7bbb656c7d-6jb4z\" (UID: \"5df5265a-d186-4cf0-8e03-e96b84f62a30\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6jb4z" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.016114 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sn5kc\" (UniqueName: \"kubernetes.io/projected/5df5265a-d186-4cf0-8e03-e96b84f62a30-kube-api-access-sn5kc\") pod \"apiserver-7bbb656c7d-6jb4z\" (UID: \"5df5265a-d186-4cf0-8e03-e96b84f62a30\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6jb4z" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.016140 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7d918baa-44fd-4067-8e83-5da61aedf201-proxy-tls\") pod \"machine-config-controller-84d6567774-j6mc9\" (UID: 
\"7d918baa-44fd-4067-8e83-5da61aedf201\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j6mc9" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.016175 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/967e6ec0-0c85-4a3d-abf5-db10daf91f5c-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-p29gl\" (UID: \"967e6ec0-0c85-4a3d-abf5-db10daf91f5c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-p29gl" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.016199 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2bdbcc5d-2414-45ec-b1b4-c64f186361bb-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-bfdjc\" (UID: \"2bdbcc5d-2414-45ec-b1b4-c64f186361bb\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bfdjc" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.017088 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5df5265a-d186-4cf0-8e03-e96b84f62a30-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-6jb4z\" (UID: \"5df5265a-d186-4cf0-8e03-e96b84f62a30\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6jb4z" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.017138 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.017312 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5d7vx\" (UniqueName: \"kubernetes.io/projected/7d918baa-44fd-4067-8e83-5da61aedf201-kube-api-access-5d7vx\") pod \"machine-config-controller-84d6567774-j6mc9\" (UID: \"7d918baa-44fd-4067-8e83-5da61aedf201\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j6mc9" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.017332 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5df5265a-d186-4cf0-8e03-e96b84f62a30-audit-policies\") pod \"apiserver-7bbb656c7d-6jb4z\" (UID: \"5df5265a-d186-4cf0-8e03-e96b84f62a30\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6jb4z" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.017494 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8bab8050-633d-4949-9cc6-c85351f2641d-srv-cert\") pod \"catalog-operator-68c6474976-lp7z4\" (UID: \"8bab8050-633d-4949-9cc6-c85351f2641d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lp7z4" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.017563 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5951e740-6404-42a5-867e-892f4234e62e-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-42sq5\" (UID: \"5951e740-6404-42a5-867e-892f4234e62e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-42sq5" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.017601 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55cp8\" (UniqueName: \"kubernetes.io/projected/7d5439e4-b788-4576-9108-32f6889511dc-kube-api-access-55cp8\") pod \"migrator-59844c95c7-b62st\" (UID: \"7d5439e4-b788-4576-9108-32f6889511dc\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-b62st" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.017633 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sb2nb\" (UniqueName: 
\"kubernetes.io/projected/31009973-dd0c-43a1-8a33-4a7aba2a74da-kube-api-access-sb2nb\") pod \"kube-storage-version-migrator-operator-b67b599dd-rnllt\" (UID: \"31009973-dd0c-43a1-8a33-4a7aba2a74da\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rnllt" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.017666 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5df5265a-d186-4cf0-8e03-e96b84f62a30-etcd-client\") pod \"apiserver-7bbb656c7d-6jb4z\" (UID: \"5df5265a-d186-4cf0-8e03-e96b84f62a30\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6jb4z" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.017712 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sz6hd\" (UniqueName: \"kubernetes.io/projected/ba2694c4-c10f-42d7-96f5-4b47a4206710-kube-api-access-sz6hd\") pod \"service-ca-9c57cc56f-bp57q\" (UID: \"ba2694c4-c10f-42d7-96f5-4b47a4206710\") " pod="openshift-service-ca/service-ca-9c57cc56f-bp57q" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.017742 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mchlk\" (UniqueName: \"kubernetes.io/projected/2e510d6e-846d-4099-bc1d-d55a75969151-kube-api-access-mchlk\") pod \"dns-operator-744455d44c-49dbl\" (UID: \"2e510d6e-846d-4099-bc1d-d55a75969151\") " pod="openshift-dns-operator/dns-operator-744455d44c-49dbl" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.017767 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/aebec426-a442-4b90-ad31-46b5e14c0aa1-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-89tzz\" (UID: \"aebec426-a442-4b90-ad31-46b5e14c0aa1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-89tzz" Jan 27 08:56:06 crc kubenswrapper[4985]: 
I0127 08:56:06.017797 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rx8q9\" (UniqueName: \"kubernetes.io/projected/5951e740-6404-42a5-867e-892f4234e62e-kube-api-access-rx8q9\") pod \"cluster-image-registry-operator-dc59b4c8b-42sq5\" (UID: \"5951e740-6404-42a5-867e-892f4234e62e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-42sq5" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.017839 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2e510d6e-846d-4099-bc1d-d55a75969151-metrics-tls\") pod \"dns-operator-744455d44c-49dbl\" (UID: \"2e510d6e-846d-4099-bc1d-d55a75969151\") " pod="openshift-dns-operator/dns-operator-744455d44c-49dbl" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.017866 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5951e740-6404-42a5-867e-892f4234e62e-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-42sq5\" (UID: \"5951e740-6404-42a5-867e-892f4234e62e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-42sq5" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.017898 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5df5265a-d186-4cf0-8e03-e96b84f62a30-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-6jb4z\" (UID: \"5df5265a-d186-4cf0-8e03-e96b84f62a30\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6jb4z" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.017948 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/d6088e48-728e-4a96-b305-c7f86d9fe9f4-control-plane-machine-set-operator-tls\") pod 
\"control-plane-machine-set-operator-78cbb6b69f-lcx4s\" (UID: \"d6088e48-728e-4a96-b305-c7f86d9fe9f4\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-lcx4s" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.017982 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/5951e740-6404-42a5-867e-892f4234e62e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-42sq5\" (UID: \"5951e740-6404-42a5-867e-892f4234e62e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-42sq5" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.018013 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8bab8050-633d-4949-9cc6-c85351f2641d-profile-collector-cert\") pod \"catalog-operator-68c6474976-lp7z4\" (UID: \"8bab8050-633d-4949-9cc6-c85351f2641d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lp7z4" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.018074 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-968pt\" (UniqueName: \"kubernetes.io/projected/aebec426-a442-4b90-ad31-46b5e14c0aa1-kube-api-access-968pt\") pod \"multus-admission-controller-857f4d67dd-89tzz\" (UID: \"aebec426-a442-4b90-ad31-46b5e14c0aa1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-89tzz" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.018127 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rstzm\" (UniqueName: \"kubernetes.io/projected/6064471e-2f00-4499-b351-c1d205c81ba7-kube-api-access-rstzm\") pod \"package-server-manager-789f6589d5-n5s9g\" (UID: \"6064471e-2f00-4499-b351-c1d205c81ba7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-n5s9g" Jan 27 08:56:06 
crc kubenswrapper[4985]: I0127 08:56:06.018291 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/0ce21e1c-0ee3-4e71-8b52-be876c32121d-available-featuregates\") pod \"openshift-config-operator-7777fb866f-m2wsf\" (UID: \"0ce21e1c-0ee3-4e71-8b52-be876c32121d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-m2wsf" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.018584 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5df5265a-d186-4cf0-8e03-e96b84f62a30-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-6jb4z\" (UID: \"5df5265a-d186-4cf0-8e03-e96b84f62a30\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6jb4z" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.019062 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/0ce21e1c-0ee3-4e71-8b52-be876c32121d-available-featuregates\") pod \"openshift-config-operator-7777fb866f-m2wsf\" (UID: \"0ce21e1c-0ee3-4e71-8b52-be876c32121d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-m2wsf" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.019176 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzdcl\" (UniqueName: \"kubernetes.io/projected/5ba029a9-6adf-4e07-91f7-f0d33ab0cb97-kube-api-access-pzdcl\") pod \"router-default-5444994796-wg68v\" (UID: \"5ba029a9-6adf-4e07-91f7-f0d33ab0cb97\") " pod="openshift-ingress/router-default-5444994796-wg68v" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.019610 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5951e740-6404-42a5-867e-892f4234e62e-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-42sq5\" (UID: 
\"5951e740-6404-42a5-867e-892f4234e62e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-42sq5" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.020012 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/4a63b2ad-84b3-4476-b253-73410ba0fed1-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-kfclz\" (UID: \"4a63b2ad-84b3-4476-b253-73410ba0fed1\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kfclz" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.020099 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cz7s4\" (UniqueName: \"kubernetes.io/projected/d6088e48-728e-4a96-b305-c7f86d9fe9f4-kube-api-access-cz7s4\") pod \"control-plane-machine-set-operator-78cbb6b69f-lcx4s\" (UID: \"d6088e48-728e-4a96-b305-c7f86d9fe9f4\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-lcx4s" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.020318 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5df5265a-d186-4cf0-8e03-e96b84f62a30-etcd-client\") pod \"apiserver-7bbb656c7d-6jb4z\" (UID: \"5df5265a-d186-4cf0-8e03-e96b84f62a30\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6jb4z" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.020330 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5df5265a-d186-4cf0-8e03-e96b84f62a30-encryption-config\") pod \"apiserver-7bbb656c7d-6jb4z\" (UID: \"5df5265a-d186-4cf0-8e03-e96b84f62a30\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6jb4z" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.021637 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqssz\" (UniqueName: 
\"kubernetes.io/projected/8bab8050-633d-4949-9cc6-c85351f2641d-kube-api-access-wqssz\") pod \"catalog-operator-68c6474976-lp7z4\" (UID: \"8bab8050-633d-4949-9cc6-c85351f2641d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lp7z4" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.021679 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/22c1a5b1-eda2-4098-9345-2742a61e8b20-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-sb8w5\" (UID: \"22c1a5b1-eda2-4098-9345-2742a61e8b20\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sb8w5" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.021707 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5df5265a-d186-4cf0-8e03-e96b84f62a30-serving-cert\") pod \"apiserver-7bbb656c7d-6jb4z\" (UID: \"5df5265a-d186-4cf0-8e03-e96b84f62a30\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6jb4z" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.021740 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d51bccbd-3298-4050-b817-450afdcd31ea-serving-cert\") pod \"service-ca-operator-777779d784-l8j28\" (UID: \"d51bccbd-3298-4050-b817-450afdcd31ea\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-l8j28" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.021764 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhrf5\" (UniqueName: \"kubernetes.io/projected/4a63b2ad-84b3-4476-b253-73410ba0fed1-kube-api-access-bhrf5\") pod \"cluster-samples-operator-665b6dd947-kfclz\" (UID: \"4a63b2ad-84b3-4476-b253-73410ba0fed1\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kfclz" Jan 27 
08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.021794 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/967e6ec0-0c85-4a3d-abf5-db10daf91f5c-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-p29gl\" (UID: \"967e6ec0-0c85-4a3d-abf5-db10daf91f5c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-p29gl" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.021822 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ba2694c4-c10f-42d7-96f5-4b47a4206710-signing-cabundle\") pod \"service-ca-9c57cc56f-bp57q\" (UID: \"ba2694c4-c10f-42d7-96f5-4b47a4206710\") " pod="openshift-service-ca/service-ca-9c57cc56f-bp57q" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.021848 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5df5265a-d186-4cf0-8e03-e96b84f62a30-audit-dir\") pod \"apiserver-7bbb656c7d-6jb4z\" (UID: \"5df5265a-d186-4cf0-8e03-e96b84f62a30\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6jb4z" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.021900 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d51bccbd-3298-4050-b817-450afdcd31ea-config\") pod \"service-ca-operator-777779d784-l8j28\" (UID: \"d51bccbd-3298-4050-b817-450afdcd31ea\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-l8j28" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.021935 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gc9bd\" (UniqueName: \"kubernetes.io/projected/f7e5fb60-49e2-4aec-be4d-71f7f0dd4ea1-kube-api-access-gc9bd\") pod \"downloads-7954f5f757-qx7rg\" (UID: \"f7e5fb60-49e2-4aec-be4d-71f7f0dd4ea1\") " 
pod="openshift-console/downloads-7954f5f757-qx7rg" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.021941 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5df5265a-d186-4cf0-8e03-e96b84f62a30-audit-dir\") pod \"apiserver-7bbb656c7d-6jb4z\" (UID: \"5df5265a-d186-4cf0-8e03-e96b84f62a30\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6jb4z" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.021969 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/5ba029a9-6adf-4e07-91f7-f0d33ab0cb97-default-certificate\") pod \"router-default-5444994796-wg68v\" (UID: \"5ba029a9-6adf-4e07-91f7-f0d33ab0cb97\") " pod="openshift-ingress/router-default-5444994796-wg68v" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.022005 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/967e6ec0-0c85-4a3d-abf5-db10daf91f5c-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-p29gl\" (UID: \"967e6ec0-0c85-4a3d-abf5-db10daf91f5c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-p29gl" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.022126 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7d918baa-44fd-4067-8e83-5da61aedf201-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-j6mc9\" (UID: \"7d918baa-44fd-4067-8e83-5da61aedf201\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j6mc9" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.022176 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: 
\"kubernetes.io/secret/5ba029a9-6adf-4e07-91f7-f0d33ab0cb97-stats-auth\") pod \"router-default-5444994796-wg68v\" (UID: \"5ba029a9-6adf-4e07-91f7-f0d33ab0cb97\") " pod="openshift-ingress/router-default-5444994796-wg68v" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.022194 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2e510d6e-846d-4099-bc1d-d55a75969151-metrics-tls\") pod \"dns-operator-744455d44c-49dbl\" (UID: \"2e510d6e-846d-4099-bc1d-d55a75969151\") " pod="openshift-dns-operator/dns-operator-744455d44c-49dbl" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.022209 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31009973-dd0c-43a1-8a33-4a7aba2a74da-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-rnllt\" (UID: \"31009973-dd0c-43a1-8a33-4a7aba2a74da\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rnllt" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.023451 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7d918baa-44fd-4067-8e83-5da61aedf201-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-j6mc9\" (UID: \"7d918baa-44fd-4067-8e83-5da61aedf201\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j6mc9" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.023753 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/5951e740-6404-42a5-867e-892f4234e62e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-42sq5\" (UID: \"5951e740-6404-42a5-867e-892f4234e62e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-42sq5" Jan 27 08:56:06 crc 
kubenswrapper[4985]: I0127 08:56:06.024106 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5df5265a-d186-4cf0-8e03-e96b84f62a30-serving-cert\") pod \"apiserver-7bbb656c7d-6jb4z\" (UID: \"5df5265a-d186-4cf0-8e03-e96b84f62a30\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6jb4z" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.024359 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/4a63b2ad-84b3-4476-b253-73410ba0fed1-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-kfclz\" (UID: \"4a63b2ad-84b3-4476-b253-73410ba0fed1\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kfclz" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.024797 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ce21e1c-0ee3-4e71-8b52-be876c32121d-serving-cert\") pod \"openshift-config-operator-7777fb866f-m2wsf\" (UID: \"0ce21e1c-0ee3-4e71-8b52-be876c32121d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-m2wsf" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.037149 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.057802 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.085898 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.098275 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 
08:56:06.117879 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.137683 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.156698 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.177257 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.196363 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.220222 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.237587 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.262957 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.276821 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.317487 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.331867 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/31009973-dd0c-43a1-8a33-4a7aba2a74da-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-rnllt\" (UID: \"31009973-dd0c-43a1-8a33-4a7aba2a74da\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rnllt" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.337736 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.356571 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.363202 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31009973-dd0c-43a1-8a33-4a7aba2a74da-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-rnllt\" (UID: \"31009973-dd0c-43a1-8a33-4a7aba2a74da\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rnllt" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.377330 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.396985 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.416388 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.437629 4985 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.457434 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.476850 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.486813 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/5ba029a9-6adf-4e07-91f7-f0d33ab0cb97-stats-auth\") pod \"router-default-5444994796-wg68v\" (UID: \"5ba029a9-6adf-4e07-91f7-f0d33ab0cb97\") " pod="openshift-ingress/router-default-5444994796-wg68v" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.497216 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.517927 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.531118 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5ba029a9-6adf-4e07-91f7-f0d33ab0cb97-metrics-certs\") pod \"router-default-5444994796-wg68v\" (UID: \"5ba029a9-6adf-4e07-91f7-f0d33ab0cb97\") " pod="openshift-ingress/router-default-5444994796-wg68v" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.537916 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.548412 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/5ba029a9-6adf-4e07-91f7-f0d33ab0cb97-service-ca-bundle\") pod \"router-default-5444994796-wg68v\" (UID: \"5ba029a9-6adf-4e07-91f7-f0d33ab0cb97\") " pod="openshift-ingress/router-default-5444994796-wg68v" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.557334 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.576736 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.597176 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2bdbcc5d-2414-45ec-b1b4-c64f186361bb-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-bfdjc\" (UID: \"2bdbcc5d-2414-45ec-b1b4-c64f186361bb\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bfdjc" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.597674 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.616679 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.637476 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.658040 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.667556 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/2bdbcc5d-2414-45ec-b1b4-c64f186361bb-config\") pod \"kube-apiserver-operator-766d6c64bb-bfdjc\" (UID: \"2bdbcc5d-2414-45ec-b1b4-c64f186361bb\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bfdjc" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.678038 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.687364 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/5ba029a9-6adf-4e07-91f7-f0d33ab0cb97-default-certificate\") pod \"router-default-5444994796-wg68v\" (UID: \"5ba029a9-6adf-4e07-91f7-f0d33ab0cb97\") " pod="openshift-ingress/router-default-5444994796-wg68v" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.698050 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.701662 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8bab8050-633d-4949-9cc6-c85351f2641d-profile-collector-cert\") pod \"catalog-operator-68c6474976-lp7z4\" (UID: \"8bab8050-633d-4949-9cc6-c85351f2641d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lp7z4" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.717381 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.737590 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.757416 4985 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.777218 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.797642 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.816746 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.820259 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/967e6ec0-0c85-4a3d-abf5-db10daf91f5c-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-p29gl\" (UID: \"967e6ec0-0c85-4a3d-abf5-db10daf91f5c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-p29gl" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.836932 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.857548 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.862934 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/967e6ec0-0c85-4a3d-abf5-db10daf91f5c-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-p29gl\" (UID: \"967e6ec0-0c85-4a3d-abf5-db10daf91f5c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-p29gl" Jan 27 08:56:06 crc 
kubenswrapper[4985]: I0127 08:56:06.878147 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.891161 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7d918baa-44fd-4067-8e83-5da61aedf201-proxy-tls\") pod \"machine-config-controller-84d6567774-j6mc9\" (UID: \"7d918baa-44fd-4067-8e83-5da61aedf201\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j6mc9" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.894912 4985 request.go:700] Waited for 1.011747902s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-controller-dockercfg-c2lfx&limit=500&resourceVersion=0 Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.898155 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.917384 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.930346 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/6064471e-2f00-4499-b351-c1d205c81ba7-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-n5s9g\" (UID: \"6064471e-2f00-4499-b351-c1d205c81ba7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-n5s9g" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.937046 4985 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-service-ca"/"signing-cabundle" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.944185 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ba2694c4-c10f-42d7-96f5-4b47a4206710-signing-cabundle\") pod \"service-ca-9c57cc56f-bp57q\" (UID: \"ba2694c4-c10f-42d7-96f5-4b47a4206710\") " pod="openshift-service-ca/service-ca-9c57cc56f-bp57q" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.958634 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.973868 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ba2694c4-c10f-42d7-96f5-4b47a4206710-signing-key\") pod \"service-ca-9c57cc56f-bp57q\" (UID: \"ba2694c4-c10f-42d7-96f5-4b47a4206710\") " pod="openshift-service-ca/service-ca-9c57cc56f-bp57q" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.977967 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 27 08:56:06 crc kubenswrapper[4985]: I0127 08:56:06.997285 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 27 08:56:07 crc kubenswrapper[4985]: E0127 08:56:07.016082 4985 configmap.go:193] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: failed to sync configmap cache: timed out waiting for the condition Jan 27 08:56:07 crc kubenswrapper[4985]: E0127 08:56:07.016239 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/22c1a5b1-eda2-4098-9345-2742a61e8b20-config podName:22c1a5b1-eda2-4098-9345-2742a61e8b20 nodeName:}" failed. No retries permitted until 2026-01-27 08:56:07.516206619 +0000 UTC m=+151.807301460 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/22c1a5b1-eda2-4098-9345-2742a61e8b20-config") pod "kube-controller-manager-operator-78b949d7b-sb8w5" (UID: "22c1a5b1-eda2-4098-9345-2742a61e8b20") : failed to sync configmap cache: timed out waiting for the condition Jan 27 08:56:07 crc kubenswrapper[4985]: E0127 08:56:07.016730 4985 secret.go:188] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 27 08:56:07 crc kubenswrapper[4985]: E0127 08:56:07.016774 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/22c1a5b1-eda2-4098-9345-2742a61e8b20-serving-cert podName:22c1a5b1-eda2-4098-9345-2742a61e8b20 nodeName:}" failed. No retries permitted until 2026-01-27 08:56:07.516763996 +0000 UTC m=+151.807858837 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/22c1a5b1-eda2-4098-9345-2742a61e8b20-serving-cert") pod "kube-controller-manager-operator-78b949d7b-sb8w5" (UID: "22c1a5b1-eda2-4098-9345-2742a61e8b20") : failed to sync secret cache: timed out waiting for the condition Jan 27 08:56:07 crc kubenswrapper[4985]: I0127 08:56:07.017640 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 27 08:56:07 crc kubenswrapper[4985]: E0127 08:56:07.017866 4985 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 27 08:56:07 crc kubenswrapper[4985]: E0127 08:56:07.017920 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8bab8050-633d-4949-9cc6-c85351f2641d-srv-cert podName:8bab8050-633d-4949-9cc6-c85351f2641d nodeName:}" failed. 
No retries permitted until 2026-01-27 08:56:07.517908957 +0000 UTC m=+151.809003798 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/8bab8050-633d-4949-9cc6-c85351f2641d-srv-cert") pod "catalog-operator-68c6474976-lp7z4" (UID: "8bab8050-633d-4949-9cc6-c85351f2641d") : failed to sync secret cache: timed out waiting for the condition Jan 27 08:56:07 crc kubenswrapper[4985]: E0127 08:56:07.017963 4985 secret.go:188] Couldn't get secret openshift-multus/multus-admission-controller-secret: failed to sync secret cache: timed out waiting for the condition Jan 27 08:56:07 crc kubenswrapper[4985]: E0127 08:56:07.017998 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aebec426-a442-4b90-ad31-46b5e14c0aa1-webhook-certs podName:aebec426-a442-4b90-ad31-46b5e14c0aa1 nodeName:}" failed. No retries permitted until 2026-01-27 08:56:07.517986949 +0000 UTC m=+151.809081790 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/aebec426-a442-4b90-ad31-46b5e14c0aa1-webhook-certs") pod "multus-admission-controller-857f4d67dd-89tzz" (UID: "aebec426-a442-4b90-ad31-46b5e14c0aa1") : failed to sync secret cache: timed out waiting for the condition Jan 27 08:56:07 crc kubenswrapper[4985]: E0127 08:56:07.019616 4985 secret.go:188] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: failed to sync secret cache: timed out waiting for the condition Jan 27 08:56:07 crc kubenswrapper[4985]: E0127 08:56:07.019767 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d6088e48-728e-4a96-b305-c7f86d9fe9f4-control-plane-machine-set-operator-tls podName:d6088e48-728e-4a96-b305-c7f86d9fe9f4 nodeName:}" failed. No retries permitted until 2026-01-27 08:56:07.519734738 +0000 UTC m=+151.810829769 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/d6088e48-728e-4a96-b305-c7f86d9fe9f4-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-78cbb6b69f-lcx4s" (UID: "d6088e48-728e-4a96-b305-c7f86d9fe9f4") : failed to sync secret cache: timed out waiting for the condition Jan 27 08:56:07 crc kubenswrapper[4985]: E0127 08:56:07.022090 4985 secret.go:188] Couldn't get secret openshift-service-ca-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 27 08:56:07 crc kubenswrapper[4985]: E0127 08:56:07.022195 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d51bccbd-3298-4050-b817-450afdcd31ea-serving-cert podName:d51bccbd-3298-4050-b817-450afdcd31ea nodeName:}" failed. No retries permitted until 2026-01-27 08:56:07.522170765 +0000 UTC m=+151.813265816 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d51bccbd-3298-4050-b817-450afdcd31ea-serving-cert") pod "service-ca-operator-777779d784-l8j28" (UID: "d51bccbd-3298-4050-b817-450afdcd31ea") : failed to sync secret cache: timed out waiting for the condition Jan 27 08:56:07 crc kubenswrapper[4985]: E0127 08:56:07.022720 4985 configmap.go:193] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: failed to sync configmap cache: timed out waiting for the condition Jan 27 08:56:07 crc kubenswrapper[4985]: E0127 08:56:07.022773 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d51bccbd-3298-4050-b817-450afdcd31ea-config podName:d51bccbd-3298-4050-b817-450afdcd31ea nodeName:}" failed. No retries permitted until 2026-01-27 08:56:07.522760911 +0000 UTC m=+151.813856022 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/d51bccbd-3298-4050-b817-450afdcd31ea-config") pod "service-ca-operator-777779d784-l8j28" (UID: "d51bccbd-3298-4050-b817-450afdcd31ea") : failed to sync configmap cache: timed out waiting for the condition Jan 27 08:56:07 crc kubenswrapper[4985]: I0127 08:56:07.036724 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 27 08:56:07 crc kubenswrapper[4985]: I0127 08:56:07.057702 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 27 08:56:07 crc kubenswrapper[4985]: I0127 08:56:07.080174 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 27 08:56:07 crc kubenswrapper[4985]: I0127 08:56:07.098412 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 27 08:56:07 crc kubenswrapper[4985]: I0127 08:56:07.117776 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 27 08:56:07 crc kubenswrapper[4985]: I0127 08:56:07.137501 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 27 08:56:07 crc kubenswrapper[4985]: I0127 08:56:07.157088 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 27 08:56:07 crc kubenswrapper[4985]: I0127 08:56:07.177841 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 27 08:56:07 crc kubenswrapper[4985]: I0127 08:56:07.197646 4985 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 27 08:56:07 crc kubenswrapper[4985]: I0127 08:56:07.217074 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 27 08:56:07 crc kubenswrapper[4985]: I0127 08:56:07.237170 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 27 08:56:07 crc kubenswrapper[4985]: I0127 08:56:07.257140 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 27 08:56:07 crc kubenswrapper[4985]: I0127 08:56:07.277362 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 27 08:56:07 crc kubenswrapper[4985]: I0127 08:56:07.297679 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 27 08:56:07 crc kubenswrapper[4985]: I0127 08:56:07.317930 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 27 08:56:07 crc kubenswrapper[4985]: I0127 08:56:07.358452 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 27 08:56:07 crc kubenswrapper[4985]: I0127 08:56:07.376301 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 27 08:56:07 crc kubenswrapper[4985]: I0127 08:56:07.397696 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 27 08:56:07 crc kubenswrapper[4985]: I0127 08:56:07.427453 4985 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 27 08:56:07 crc kubenswrapper[4985]: I0127 08:56:07.437290 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 27 08:56:07 crc kubenswrapper[4985]: I0127 08:56:07.457970 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 08:56:07 crc kubenswrapper[4985]: I0127 08:56:07.476378 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 08:56:07 crc kubenswrapper[4985]: I0127 08:56:07.513160 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdprf\" (UniqueName: \"kubernetes.io/projected/79f74589-abcf-4b67-815f-cfc142a9413f-kube-api-access-qdprf\") pod \"machine-approver-56656f9798-lrffm\" (UID: \"79f74589-abcf-4b67-815f-cfc142a9413f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lrffm" Jan 27 08:56:07 crc kubenswrapper[4985]: I0127 08:56:07.544886 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8bab8050-633d-4949-9cc6-c85351f2641d-srv-cert\") pod \"catalog-operator-68c6474976-lp7z4\" (UID: \"8bab8050-633d-4949-9cc6-c85351f2641d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lp7z4" Jan 27 08:56:07 crc kubenswrapper[4985]: I0127 08:56:07.545017 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/aebec426-a442-4b90-ad31-46b5e14c0aa1-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-89tzz\" (UID: \"aebec426-a442-4b90-ad31-46b5e14c0aa1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-89tzz" Jan 27 08:56:07 crc kubenswrapper[4985]: I0127 08:56:07.545084 4985 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/d6088e48-728e-4a96-b305-c7f86d9fe9f4-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-lcx4s\" (UID: \"d6088e48-728e-4a96-b305-c7f86d9fe9f4\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-lcx4s" Jan 27 08:56:07 crc kubenswrapper[4985]: I0127 08:56:07.545175 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d51bccbd-3298-4050-b817-450afdcd31ea-serving-cert\") pod \"service-ca-operator-777779d784-l8j28\" (UID: \"d51bccbd-3298-4050-b817-450afdcd31ea\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-l8j28" Jan 27 08:56:07 crc kubenswrapper[4985]: I0127 08:56:07.545252 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d51bccbd-3298-4050-b817-450afdcd31ea-config\") pod \"service-ca-operator-777779d784-l8j28\" (UID: \"d51bccbd-3298-4050-b817-450afdcd31ea\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-l8j28" Jan 27 08:56:07 crc kubenswrapper[4985]: I0127 08:56:07.545300 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c1a5b1-eda2-4098-9345-2742a61e8b20-config\") pod \"kube-controller-manager-operator-78b949d7b-sb8w5\" (UID: \"22c1a5b1-eda2-4098-9345-2742a61e8b20\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sb8w5" Jan 27 08:56:07 crc kubenswrapper[4985]: I0127 08:56:07.545349 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22c1a5b1-eda2-4098-9345-2742a61e8b20-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-sb8w5\" (UID: 
\"22c1a5b1-eda2-4098-9345-2742a61e8b20\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sb8w5" Jan 27 08:56:07 crc kubenswrapper[4985]: I0127 08:56:07.546562 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c1a5b1-eda2-4098-9345-2742a61e8b20-config\") pod \"kube-controller-manager-operator-78b949d7b-sb8w5\" (UID: \"22c1a5b1-eda2-4098-9345-2742a61e8b20\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sb8w5" Jan 27 08:56:07 crc kubenswrapper[4985]: I0127 08:56:07.546707 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d51bccbd-3298-4050-b817-450afdcd31ea-config\") pod \"service-ca-operator-777779d784-l8j28\" (UID: \"d51bccbd-3298-4050-b817-450afdcd31ea\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-l8j28" Jan 27 08:56:07 crc kubenswrapper[4985]: I0127 08:56:07.548652 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmv58\" (UniqueName: \"kubernetes.io/projected/5bd4e7de-4244-4c33-90eb-799159106b7b-kube-api-access-kmv58\") pod \"console-f9d7485db-q7dv9\" (UID: \"5bd4e7de-4244-4c33-90eb-799159106b7b\") " pod="openshift-console/console-f9d7485db-q7dv9" Jan 27 08:56:07 crc kubenswrapper[4985]: I0127 08:56:07.549762 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22c1a5b1-eda2-4098-9345-2742a61e8b20-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-sb8w5\" (UID: \"22c1a5b1-eda2-4098-9345-2742a61e8b20\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sb8w5" Jan 27 08:56:07 crc kubenswrapper[4985]: I0127 08:56:07.549781 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/8bab8050-633d-4949-9cc6-c85351f2641d-srv-cert\") pod \"catalog-operator-68c6474976-lp7z4\" (UID: \"8bab8050-633d-4949-9cc6-c85351f2641d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lp7z4" Jan 27 08:56:07 crc kubenswrapper[4985]: I0127 08:56:07.550931 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/aebec426-a442-4b90-ad31-46b5e14c0aa1-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-89tzz\" (UID: \"aebec426-a442-4b90-ad31-46b5e14c0aa1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-89tzz" Jan 27 08:56:07 crc kubenswrapper[4985]: I0127 08:56:07.551024 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d51bccbd-3298-4050-b817-450afdcd31ea-serving-cert\") pod \"service-ca-operator-777779d784-l8j28\" (UID: \"d51bccbd-3298-4050-b817-450afdcd31ea\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-l8j28" Jan 27 08:56:07 crc kubenswrapper[4985]: I0127 08:56:07.552060 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/d6088e48-728e-4a96-b305-c7f86d9fe9f4-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-lcx4s\" (UID: \"d6088e48-728e-4a96-b305-c7f86d9fe9f4\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-lcx4s" Jan 27 08:56:07 crc kubenswrapper[4985]: I0127 08:56:07.553798 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54sjj\" (UniqueName: \"kubernetes.io/projected/019bf0d4-de52-4a7b-b950-4da2766cea13-kube-api-access-54sjj\") pod \"console-operator-58897d9998-5q47j\" (UID: \"019bf0d4-de52-4a7b-b950-4da2766cea13\") " pod="openshift-console-operator/console-operator-58897d9998-5q47j" Jan 27 08:56:07 crc 
kubenswrapper[4985]: I0127 08:56:07.575394 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bqdg\" (UniqueName: \"kubernetes.io/projected/6cf28995-1608-4130-9284-e3d638c4cf25-kube-api-access-9bqdg\") pod \"apiserver-76f77b778f-vl84l\" (UID: \"6cf28995-1608-4130-9284-e3d638c4cf25\") " pod="openshift-apiserver/apiserver-76f77b778f-vl84l" Jan 27 08:56:07 crc kubenswrapper[4985]: I0127 08:56:07.593659 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cczgm\" (UniqueName: \"kubernetes.io/projected/7ad59ee9-aae9-4af9-bcdd-abfcc3f0f15a-kube-api-access-cczgm\") pod \"openshift-apiserver-operator-796bbdcf4f-cjgmx\" (UID: \"7ad59ee9-aae9-4af9-bcdd-abfcc3f0f15a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cjgmx" Jan 27 08:56:07 crc kubenswrapper[4985]: I0127 08:56:07.613901 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtnrw\" (UniqueName: \"kubernetes.io/projected/5e3df4d8-af39-4eb4-b2c7-5127144a44a6-kube-api-access-jtnrw\") pod \"oauth-openshift-558db77b4-t4tc7\" (UID: \"5e3df4d8-af39-4eb4-b2c7-5127144a44a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-t4tc7" Jan 27 08:56:07 crc kubenswrapper[4985]: I0127 08:56:07.633646 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpvxl\" (UniqueName: \"kubernetes.io/projected/72fd06a7-765f-4f95-89f1-3bd8a0fa466b-kube-api-access-gpvxl\") pod \"controller-manager-879f6c89f-x4cs4\" (UID: \"72fd06a7-765f-4f95-89f1-3bd8a0fa466b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-x4cs4" Jan 27 08:56:07 crc kubenswrapper[4985]: I0127 08:56:07.661142 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lrffm" Jan 27 08:56:07 crc kubenswrapper[4985]: I0127 08:56:07.668639 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxztc\" (UniqueName: \"kubernetes.io/projected/e77a9fac-a804-4afa-a69a-1abcd4e81281-kube-api-access-hxztc\") pod \"openshift-controller-manager-operator-756b6f6bc6-t2jvg\" (UID: \"e77a9fac-a804-4afa-a69a-1abcd4e81281\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-t2jvg" Jan 27 08:56:07 crc kubenswrapper[4985]: W0127 08:56:07.678883 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod79f74589_abcf_4b67_815f_cfc142a9413f.slice/crio-8ffdba8e84beaad51f21c6bc7db81862d8364b2737fe3386ec53b441189cb4aa WatchSource:0}: Error finding container 8ffdba8e84beaad51f21c6bc7db81862d8364b2737fe3386ec53b441189cb4aa: Status 404 returned error can't find the container with id 8ffdba8e84beaad51f21c6bc7db81862d8364b2737fe3386ec53b441189cb4aa Jan 27 08:56:07 crc kubenswrapper[4985]: I0127 08:56:07.682990 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-vl84l" Jan 27 08:56:07 crc kubenswrapper[4985]: I0127 08:56:07.692489 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shb5f\" (UniqueName: \"kubernetes.io/projected/c7eca83d-b3cb-484f-9e20-f04ceedd8c99-kube-api-access-shb5f\") pod \"route-controller-manager-6576b87f9c-jjksr\" (UID: \"c7eca83d-b3cb-484f-9e20-f04ceedd8c99\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jjksr" Jan 27 08:56:07 crc kubenswrapper[4985]: I0127 08:56:07.695057 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cjgmx" Jan 27 08:56:07 crc kubenswrapper[4985]: I0127 08:56:07.697352 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 27 08:56:07 crc kubenswrapper[4985]: I0127 08:56:07.719805 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-t2jvg" Jan 27 08:56:07 crc kubenswrapper[4985]: I0127 08:56:07.719935 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 27 08:56:07 crc kubenswrapper[4985]: I0127 08:56:07.725618 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfgm4\" (UniqueName: \"kubernetes.io/projected/a9f39981-0c5b-4358-a7f7-41165d56405b-kube-api-access-wfgm4\") pod \"machine-api-operator-5694c8668f-bskcz\" (UID: \"a9f39981-0c5b-4358-a7f7-41165d56405b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bskcz" Jan 27 08:56:07 crc kubenswrapper[4985]: I0127 08:56:07.738187 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 27 08:56:07 crc kubenswrapper[4985]: I0127 08:56:07.757394 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 27 08:56:07 crc kubenswrapper[4985]: I0127 08:56:07.776744 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-q7dv9" Jan 27 08:56:07 crc kubenswrapper[4985]: I0127 08:56:07.777709 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 27 08:56:07 crc kubenswrapper[4985]: I0127 08:56:07.786395 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-5q47j"
Jan 27 08:56:07 crc kubenswrapper[4985]: I0127 08:56:07.800273 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Jan 27 08:56:07 crc kubenswrapper[4985]: I0127 08:56:07.819861 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Jan 27 08:56:07 crc kubenswrapper[4985]: I0127 08:56:07.820975 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-bskcz"
Jan 27 08:56:07 crc kubenswrapper[4985]: I0127 08:56:07.836801 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Jan 27 08:56:07 crc kubenswrapper[4985]: I0127 08:56:07.840904 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-x4cs4"
Jan 27 08:56:07 crc kubenswrapper[4985]: I0127 08:56:07.857730 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Jan 27 08:56:07 crc kubenswrapper[4985]: I0127 08:56:07.878051 4985 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Jan 27 08:56:07 crc kubenswrapper[4985]: I0127 08:56:07.890640 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-t4tc7"
Jan 27 08:56:07 crc kubenswrapper[4985]: I0127 08:56:07.895272 4985 request.go:700] Waited for 1.892005502s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-server-dockercfg-qx5rd&limit=500&resourceVersion=0
Jan 27 08:56:07 crc kubenswrapper[4985]: I0127 08:56:07.903432 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jjksr"
Jan 27 08:56:07 crc kubenswrapper[4985]: I0127 08:56:07.906549 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Jan 27 08:56:07 crc kubenswrapper[4985]: I0127 08:56:07.917479 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Jan 27 08:56:07 crc kubenswrapper[4985]: I0127 08:56:07.939551 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Jan 27 08:56:07 crc kubenswrapper[4985]: I0127 08:56:07.945134 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cjgmx"]
Jan 27 08:56:07 crc kubenswrapper[4985]: I0127 08:56:07.990587 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tt754\" (UniqueName: \"kubernetes.io/projected/0ce21e1c-0ee3-4e71-8b52-be876c32121d-kube-api-access-tt754\") pod \"openshift-config-operator-7777fb866f-m2wsf\" (UID: \"0ce21e1c-0ee3-4e71-8b52-be876c32121d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-m2wsf"
Jan 27 08:56:07 crc kubenswrapper[4985]: I0127 08:56:07.993059 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nscfk\" (UniqueName: \"kubernetes.io/projected/d51bccbd-3298-4050-b817-450afdcd31ea-kube-api-access-nscfk\") pod \"service-ca-operator-777779d784-l8j28\" (UID: \"d51bccbd-3298-4050-b817-450afdcd31ea\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-l8j28"
Jan 27 08:56:07 crc kubenswrapper[4985]: I0127 08:56:07.998849 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-vl84l"]
Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.011082 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-t2jvg"]
Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.013565 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2bdbcc5d-2414-45ec-b1b4-c64f186361bb-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-bfdjc\" (UID: \"2bdbcc5d-2414-45ec-b1b4-c64f186361bb\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bfdjc"
Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.033780 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-l8j28"
Jan 27 08:56:08 crc kubenswrapper[4985]: W0127 08:56:08.038774 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ad59ee9_aae9_4af9_bcdd_abfcc3f0f15a.slice/crio-53630fa8250deac023a6aa8ab0d5bd55353d4172dabca637039b2b11934159b2 WatchSource:0}: Error finding container 53630fa8250deac023a6aa8ab0d5bd55353d4172dabca637039b2b11934159b2: Status 404 returned error can't find the container with id 53630fa8250deac023a6aa8ab0d5bd55353d4172dabca637039b2b11934159b2
Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.053562 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sn5kc\" (UniqueName: \"kubernetes.io/projected/5df5265a-d186-4cf0-8e03-e96b84f62a30-kube-api-access-sn5kc\") pod \"apiserver-7bbb656c7d-6jb4z\" (UID: \"5df5265a-d186-4cf0-8e03-e96b84f62a30\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6jb4z"
Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.067094 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5d7vx\" (UniqueName: \"kubernetes.io/projected/7d918baa-44fd-4067-8e83-5da61aedf201-kube-api-access-5d7vx\") pod \"machine-config-controller-84d6567774-j6mc9\" (UID: \"7d918baa-44fd-4067-8e83-5da61aedf201\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j6mc9"
Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.086025 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5951e740-6404-42a5-867e-892f4234e62e-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-42sq5\" (UID: \"5951e740-6404-42a5-867e-892f4234e62e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-42sq5"
Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.096803 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m2wsf"
Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.100839 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55cp8\" (UniqueName: \"kubernetes.io/projected/7d5439e4-b788-4576-9108-32f6889511dc-kube-api-access-55cp8\") pod \"migrator-59844c95c7-b62st\" (UID: \"7d5439e4-b788-4576-9108-32f6889511dc\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-b62st"
Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.113248 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6jb4z"
Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.121495 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sb2nb\" (UniqueName: \"kubernetes.io/projected/31009973-dd0c-43a1-8a33-4a7aba2a74da-kube-api-access-sb2nb\") pod \"kube-storage-version-migrator-operator-b67b599dd-rnllt\" (UID: \"31009973-dd0c-43a1-8a33-4a7aba2a74da\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rnllt"
Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.148720 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-q7dv9"]
Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.158455 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sz6hd\" (UniqueName: \"kubernetes.io/projected/ba2694c4-c10f-42d7-96f5-4b47a4206710-kube-api-access-sz6hd\") pod \"service-ca-9c57cc56f-bp57q\" (UID: \"ba2694c4-c10f-42d7-96f5-4b47a4206710\") " pod="openshift-service-ca/service-ca-9c57cc56f-bp57q"
Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.165276 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mchlk\" (UniqueName: \"kubernetes.io/projected/2e510d6e-846d-4099-bc1d-d55a75969151-kube-api-access-mchlk\") pod \"dns-operator-744455d44c-49dbl\" (UID: \"2e510d6e-846d-4099-bc1d-d55a75969151\") " pod="openshift-dns-operator/dns-operator-744455d44c-49dbl"
Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.178147 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rx8q9\" (UniqueName: \"kubernetes.io/projected/5951e740-6404-42a5-867e-892f4234e62e-kube-api-access-rx8q9\") pod \"cluster-image-registry-operator-dc59b4c8b-42sq5\" (UID: \"5951e740-6404-42a5-867e-892f4234e62e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-42sq5"
Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.186224 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-b62st"
Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.190408 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-5q47j"]
Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.194261 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-968pt\" (UniqueName: \"kubernetes.io/projected/aebec426-a442-4b90-ad31-46b5e14c0aa1-kube-api-access-968pt\") pod \"multus-admission-controller-857f4d67dd-89tzz\" (UID: \"aebec426-a442-4b90-ad31-46b5e14c0aa1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-89tzz"
Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.207263 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rnllt"
Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.213367 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rstzm\" (UniqueName: \"kubernetes.io/projected/6064471e-2f00-4499-b351-c1d205c81ba7-kube-api-access-rstzm\") pod \"package-server-manager-789f6589d5-n5s9g\" (UID: \"6064471e-2f00-4499-b351-c1d205c81ba7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-n5s9g"
Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.245447 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bfdjc"
Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.250316 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzdcl\" (UniqueName: \"kubernetes.io/projected/5ba029a9-6adf-4e07-91f7-f0d33ab0cb97-kube-api-access-pzdcl\") pod \"router-default-5444994796-wg68v\" (UID: \"5ba029a9-6adf-4e07-91f7-f0d33ab0cb97\") " pod="openshift-ingress/router-default-5444994796-wg68v"
Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.254139 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-bskcz"]
Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.274301 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j6mc9"
Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.275746 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cz7s4\" (UniqueName: \"kubernetes.io/projected/d6088e48-728e-4a96-b305-c7f86d9fe9f4-kube-api-access-cz7s4\") pod \"control-plane-machine-set-operator-78cbb6b69f-lcx4s\" (UID: \"d6088e48-728e-4a96-b305-c7f86d9fe9f4\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-lcx4s"
Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.278355 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-x4cs4"]
Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.283269 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-n5s9g"
Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.287880 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-bp57q"
Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.290159 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqssz\" (UniqueName: \"kubernetes.io/projected/8bab8050-633d-4949-9cc6-c85351f2641d-kube-api-access-wqssz\") pod \"catalog-operator-68c6474976-lp7z4\" (UID: \"8bab8050-633d-4949-9cc6-c85351f2641d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lp7z4"
Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.298367 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-t4tc7"]
Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.298862 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/22c1a5b1-eda2-4098-9345-2742a61e8b20-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-sb8w5\" (UID: \"22c1a5b1-eda2-4098-9345-2742a61e8b20\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sb8w5"
Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.300462 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-vl84l" event={"ID":"6cf28995-1608-4130-9284-e3d638c4cf25","Type":"ContainerStarted","Data":"bd4c767be28ced74ee0505ac8f24b4d19aab62af1ffd1091f698160237c8414d"}
Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.307317 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-5q47j" event={"ID":"019bf0d4-de52-4a7b-b950-4da2766cea13","Type":"ContainerStarted","Data":"547855f7e45e2edc8121ac05a6be308317ada4bce90622a5e0aa10f9f99e349d"}
Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.308340 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-t2jvg" event={"ID":"e77a9fac-a804-4afa-a69a-1abcd4e81281","Type":"ContainerStarted","Data":"63b4e4e2c427884f4bd598e4a196096d5cf4759c7af87673cb2a098affaa8e7b"}
Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.313846 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-89tzz"
Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.320082 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-q7dv9" event={"ID":"5bd4e7de-4244-4c33-90eb-799159106b7b","Type":"ContainerStarted","Data":"7547242e47a9941b7cb9d4cd3bd26d8a4544f5de690d239d819064d87ef5eada"}
Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.323488 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cjgmx" event={"ID":"7ad59ee9-aae9-4af9-bcdd-abfcc3f0f15a","Type":"ContainerStarted","Data":"53630fa8250deac023a6aa8ab0d5bd55353d4172dabca637039b2b11934159b2"}
Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.324125 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gc9bd\" (UniqueName: \"kubernetes.io/projected/f7e5fb60-49e2-4aec-be4d-71f7f0dd4ea1-kube-api-access-gc9bd\") pod \"downloads-7954f5f757-qx7rg\" (UID: \"f7e5fb60-49e2-4aec-be4d-71f7f0dd4ea1\") " pod="openshift-console/downloads-7954f5f757-qx7rg"
Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.324805 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-lcx4s"
Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.333070 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lrffm" event={"ID":"79f74589-abcf-4b67-815f-cfc142a9413f","Type":"ContainerStarted","Data":"8ffdba8e84beaad51f21c6bc7db81862d8364b2737fe3386ec53b441189cb4aa"}
Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.342840 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lp7z4"
Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.343222 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhrf5\" (UniqueName: \"kubernetes.io/projected/4a63b2ad-84b3-4476-b253-73410ba0fed1-kube-api-access-bhrf5\") pod \"cluster-samples-operator-665b6dd947-kfclz\" (UID: \"4a63b2ad-84b3-4476-b253-73410ba0fed1\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kfclz"
Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.360102 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/967e6ec0-0c85-4a3d-abf5-db10daf91f5c-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-p29gl\" (UID: \"967e6ec0-0c85-4a3d-abf5-db10daf91f5c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-p29gl"
Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.404569 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-qx7rg"
Jan 27 08:56:08 crc kubenswrapper[4985]: W0127 08:56:08.418923 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9f39981_0c5b_4358_a7f7_41165d56405b.slice/crio-7aa1edd9801e350fbb5f5fceee39bd52e161a021a7587c74bc87f388b520aa6d WatchSource:0}: Error finding container 7aa1edd9801e350fbb5f5fceee39bd52e161a021a7587c74bc87f388b520aa6d: Status 404 returned error can't find the container with id 7aa1edd9801e350fbb5f5fceee39bd52e161a021a7587c74bc87f388b520aa6d
Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.421974 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-42sq5"
Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.433418 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kfclz"
Jan 27 08:56:08 crc kubenswrapper[4985]: W0127 08:56:08.439608 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5e3df4d8_af39_4eb4_b2c7_5127144a44a6.slice/crio-1331787492cf06756a2f2fcfeb697f0501794c1b710d6a7ac14550417b8db490 WatchSource:0}: Error finding container 1331787492cf06756a2f2fcfeb697f0501794c1b710d6a7ac14550417b8db490: Status 404 returned error can't find the container with id 1331787492cf06756a2f2fcfeb697f0501794c1b710d6a7ac14550417b8db490
Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.442187 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-49dbl"
Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.446171 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-l8j28"]
Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.463050 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6f7b20ce-38e9-4fae-a1b6-3746d1a94ea3-auth-proxy-config\") pod \"machine-config-operator-74547568cd-ggt4m\" (UID: \"6f7b20ce-38e9-4fae-a1b6-3746d1a94ea3\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ggt4m"
Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.463093 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a92f3da6-d141-4c31-8022-e4a36bcd145a-metrics-tls\") pod \"ingress-operator-5b745b69d9-mcgvv\" (UID: \"a92f3da6-d141-4c31-8022-e4a36bcd145a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mcgvv"
Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.463336 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4a53b8e7-8871-4fb5-93bf-1841b4bcf915-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-bqt6d\" (UID: \"4a53b8e7-8871-4fb5-93bf-1841b4bcf915\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bqt6d"
Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.463399 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/30c225f3-39d6-41a3-b650-d5595fdd9ed1-webhook-cert\") pod \"packageserver-d55dfcdfc-qcnmg\" (UID: \"30c225f3-39d6-41a3-b650-d5595fdd9ed1\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qcnmg"
Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.463422 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/656a2bff-5cf1-426b-b879-a6c0cc1f4cb2-bound-sa-token\") pod \"image-registry-697d97f7c8-jn7wk\" (UID: \"656a2bff-5cf1-426b-b879-a6c0cc1f4cb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-jn7wk"
Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.463442 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a2b8440a-62ad-4172-a76e-c213c1f13873-srv-cert\") pod \"olm-operator-6b444d44fb-c86qm\" (UID: \"a2b8440a-62ad-4172-a76e-c213c1f13873\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c86qm"
Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.463481 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/656a2bff-5cf1-426b-b879-a6c0cc1f4cb2-installation-pull-secrets\") pod \"image-registry-697d97f7c8-jn7wk\" (UID: \"656a2bff-5cf1-426b-b879-a6c0cc1f4cb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-jn7wk"
Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.463553 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce63b125-3068-480c-abac-6ec26072c54a-serving-cert\") pod \"etcd-operator-b45778765-sw5fd\" (UID: \"ce63b125-3068-480c-abac-6ec26072c54a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sw5fd"
Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.463595 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a2b8440a-62ad-4172-a76e-c213c1f13873-profile-collector-cert\") pod \"olm-operator-6b444d44fb-c86qm\" (UID: \"a2b8440a-62ad-4172-a76e-c213c1f13873\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c86qm"
Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.463633 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/ce63b125-3068-480c-abac-6ec26072c54a-etcd-ca\") pod \"etcd-operator-b45778765-sw5fd\" (UID: \"ce63b125-3068-480c-abac-6ec26072c54a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sw5fd"
Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.463657 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/656a2bff-5cf1-426b-b879-a6c0cc1f4cb2-trusted-ca\") pod \"image-registry-697d97f7c8-jn7wk\" (UID: \"656a2bff-5cf1-426b-b879-a6c0cc1f4cb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-jn7wk"
Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.463698 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/656a2bff-5cf1-426b-b879-a6c0cc1f4cb2-registry-tls\") pod \"image-registry-697d97f7c8-jn7wk\" (UID: \"656a2bff-5cf1-426b-b879-a6c0cc1f4cb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-jn7wk"
Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.463745 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/30c225f3-39d6-41a3-b650-d5595fdd9ed1-tmpfs\") pod \"packageserver-d55dfcdfc-qcnmg\" (UID: \"30c225f3-39d6-41a3-b650-d5595fdd9ed1\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qcnmg"
Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.463791 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bd6k\" (UniqueName: \"kubernetes.io/projected/4a53b8e7-8871-4fb5-93bf-1841b4bcf915-kube-api-access-2bd6k\") pod \"authentication-operator-69f744f599-bqt6d\" (UID: \"4a53b8e7-8871-4fb5-93bf-1841b4bcf915\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bqt6d"
Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.463818 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7km2p\" (UniqueName: \"kubernetes.io/projected/ce63b125-3068-480c-abac-6ec26072c54a-kube-api-access-7km2p\") pod \"etcd-operator-b45778765-sw5fd\" (UID: \"ce63b125-3068-480c-abac-6ec26072c54a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sw5fd"
Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.463843 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/656a2bff-5cf1-426b-b879-a6c0cc1f4cb2-registry-certificates\") pod \"image-registry-697d97f7c8-jn7wk\" (UID: \"656a2bff-5cf1-426b-b879-a6c0cc1f4cb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-jn7wk"
Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.463880 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jn7wk\" (UID: \"656a2bff-5cf1-426b-b879-a6c0cc1f4cb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-jn7wk"
Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.463908 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6f7b20ce-38e9-4fae-a1b6-3746d1a94ea3-proxy-tls\") pod \"machine-config-operator-74547568cd-ggt4m\" (UID: \"6f7b20ce-38e9-4fae-a1b6-3746d1a94ea3\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ggt4m"
Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.463952 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glv5b\" (UniqueName: \"kubernetes.io/projected/a2b8440a-62ad-4172-a76e-c213c1f13873-kube-api-access-glv5b\") pod \"olm-operator-6b444d44fb-c86qm\" (UID: \"a2b8440a-62ad-4172-a76e-c213c1f13873\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c86qm"
Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.463979 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a92f3da6-d141-4c31-8022-e4a36bcd145a-bound-sa-token\") pod \"ingress-operator-5b745b69d9-mcgvv\" (UID: \"a92f3da6-d141-4c31-8022-e4a36bcd145a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mcgvv"
Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.464003 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/656a2bff-5cf1-426b-b879-a6c0cc1f4cb2-ca-trust-extracted\") pod \"image-registry-697d97f7c8-jn7wk\" (UID: \"656a2bff-5cf1-426b-b879-a6c0cc1f4cb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-jn7wk"
Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.464026 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce63b125-3068-480c-abac-6ec26072c54a-config\") pod \"etcd-operator-b45778765-sw5fd\" (UID: \"ce63b125-3068-480c-abac-6ec26072c54a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sw5fd"
Jan 27 08:56:08 crc kubenswrapper[4985]: E0127 08:56:08.465878 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 08:56:08.965860445 +0000 UTC m=+153.256955476 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jn7wk" (UID: "656a2bff-5cf1-426b-b879-a6c0cc1f4cb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.466970 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6f7b20ce-38e9-4fae-a1b6-3746d1a94ea3-images\") pod \"machine-config-operator-74547568cd-ggt4m\" (UID: \"6f7b20ce-38e9-4fae-a1b6-3746d1a94ea3\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ggt4m"
Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.467886 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cm5d8\" (UniqueName: \"kubernetes.io/projected/a92f3da6-d141-4c31-8022-e4a36bcd145a-kube-api-access-cm5d8\") pod \"ingress-operator-5b745b69d9-mcgvv\" (UID: \"a92f3da6-d141-4c31-8022-e4a36bcd145a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mcgvv"
Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.467923 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ce63b125-3068-480c-abac-6ec26072c54a-etcd-client\") pod \"etcd-operator-b45778765-sw5fd\" (UID: \"ce63b125-3068-480c-abac-6ec26072c54a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sw5fd"
Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.467996 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/30c225f3-39d6-41a3-b650-d5595fdd9ed1-apiservice-cert\") pod \"packageserver-d55dfcdfc-qcnmg\" (UID: \"30c225f3-39d6-41a3-b650-d5595fdd9ed1\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qcnmg"
Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.468742 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/ce63b125-3068-480c-abac-6ec26072c54a-etcd-service-ca\") pod \"etcd-operator-b45778765-sw5fd\" (UID: \"ce63b125-3068-480c-abac-6ec26072c54a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sw5fd"
Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.468781 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vnqm\" (UniqueName: \"kubernetes.io/projected/6f7b20ce-38e9-4fae-a1b6-3746d1a94ea3-kube-api-access-6vnqm\") pod \"machine-config-operator-74547568cd-ggt4m\" (UID: \"6f7b20ce-38e9-4fae-a1b6-3746d1a94ea3\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ggt4m"
Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.468840 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a53b8e7-8871-4fb5-93bf-1841b4bcf915-serving-cert\") pod \"authentication-operator-69f744f599-bqt6d\" (UID: \"4a53b8e7-8871-4fb5-93bf-1841b4bcf915\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bqt6d"
Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.468870 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a92f3da6-d141-4c31-8022-e4a36bcd145a-trusted-ca\") pod \"ingress-operator-5b745b69d9-mcgvv\" (UID: \"a92f3da6-d141-4c31-8022-e4a36bcd145a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mcgvv"
Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.468916 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7x9r\" (UniqueName: \"kubernetes.io/projected/30c225f3-39d6-41a3-b650-d5595fdd9ed1-kube-api-access-z7x9r\") pod \"packageserver-d55dfcdfc-qcnmg\" (UID: \"30c225f3-39d6-41a3-b650-d5595fdd9ed1\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qcnmg"
Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.469421 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rgmd\" (UniqueName: \"kubernetes.io/projected/656a2bff-5cf1-426b-b879-a6c0cc1f4cb2-kube-api-access-9rgmd\") pod \"image-registry-697d97f7c8-jn7wk\" (UID: \"656a2bff-5cf1-426b-b879-a6c0cc1f4cb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-jn7wk"
Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.469482 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a53b8e7-8871-4fb5-93bf-1841b4bcf915-config\") pod \"authentication-operator-69f744f599-bqt6d\" (UID: \"4a53b8e7-8871-4fb5-93bf-1841b4bcf915\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bqt6d"
Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.469583 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4a53b8e7-8871-4fb5-93bf-1841b4bcf915-service-ca-bundle\") pod \"authentication-operator-69f744f599-bqt6d\" (UID: \"4a53b8e7-8871-4fb5-93bf-1841b4bcf915\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bqt6d"
Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.499435 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-m2wsf"]
Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.531236 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-wg68v"
Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.550074 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-jjksr"]
Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.562044 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-p29gl"
Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.570947 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.571129 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/30c225f3-39d6-41a3-b650-d5595fdd9ed1-tmpfs\") pod \"packageserver-d55dfcdfc-qcnmg\" (UID: \"30c225f3-39d6-41a3-b650-d5595fdd9ed1\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qcnmg"
Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.571158 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/aac8abbf-f011-4386-89ed-afc8d4879670-marketplace-operator-metrics\")
pod \"marketplace-operator-79b997595-cfmwq\" (UID: \"aac8abbf-f011-4386-89ed-afc8d4879670\") " pod="openshift-marketplace/marketplace-operator-79b997595-cfmwq" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.571228 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bd6k\" (UniqueName: \"kubernetes.io/projected/4a53b8e7-8871-4fb5-93bf-1841b4bcf915-kube-api-access-2bd6k\") pod \"authentication-operator-69f744f599-bqt6d\" (UID: \"4a53b8e7-8871-4fb5-93bf-1841b4bcf915\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bqt6d" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.571247 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/656a2bff-5cf1-426b-b879-a6c0cc1f4cb2-registry-certificates\") pod \"image-registry-697d97f7c8-jn7wk\" (UID: \"656a2bff-5cf1-426b-b879-a6c0cc1f4cb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-jn7wk" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.571263 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7km2p\" (UniqueName: \"kubernetes.io/projected/ce63b125-3068-480c-abac-6ec26072c54a-kube-api-access-7km2p\") pod \"etcd-operator-b45778765-sw5fd\" (UID: \"ce63b125-3068-480c-abac-6ec26072c54a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sw5fd" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.571280 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7stpj\" (UniqueName: \"kubernetes.io/projected/66ad0e5a-d916-4781-9a97-84264b86ae79-kube-api-access-7stpj\") pod \"csi-hostpathplugin-slj2d\" (UID: \"66ad0e5a-d916-4781-9a97-84264b86ae79\") " pod="hostpath-provisioner/csi-hostpathplugin-slj2d" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.571328 4985 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/4f08c89f-beef-4b3f-8895-84efb56aeed0-node-bootstrap-token\") pod \"machine-config-server-tm467\" (UID: \"4f08c89f-beef-4b3f-8895-84efb56aeed0\") " pod="openshift-machine-config-operator/machine-config-server-tm467" Jan 27 08:56:08 crc kubenswrapper[4985]: E0127 08:56:08.571659 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 08:56:09.071632864 +0000 UTC m=+153.362727695 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.572020 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/30c225f3-39d6-41a3-b650-d5595fdd9ed1-tmpfs\") pod \"packageserver-d55dfcdfc-qcnmg\" (UID: \"30c225f3-39d6-41a3-b650-d5595fdd9ed1\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qcnmg" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.573122 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jn7wk\" (UID: \"656a2bff-5cf1-426b-b879-a6c0cc1f4cb2\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-jn7wk" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.573185 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6f7b20ce-38e9-4fae-a1b6-3746d1a94ea3-proxy-tls\") pod \"machine-config-operator-74547568cd-ggt4m\" (UID: \"6f7b20ce-38e9-4fae-a1b6-3746d1a94ea3\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ggt4m" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.573249 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a0036b0c-985c-4832-a8a8-0a18b5cc3a52-secret-volume\") pod \"collect-profiles-29491725-mxclz\" (UID: \"a0036b0c-985c-4832-a8a8-0a18b5cc3a52\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491725-mxclz" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.573362 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/66ad0e5a-d916-4781-9a97-84264b86ae79-plugins-dir\") pod \"csi-hostpathplugin-slj2d\" (UID: \"66ad0e5a-d916-4781-9a97-84264b86ae79\") " pod="hostpath-provisioner/csi-hostpathplugin-slj2d" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.573481 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/656a2bff-5cf1-426b-b879-a6c0cc1f4cb2-registry-certificates\") pod \"image-registry-697d97f7c8-jn7wk\" (UID: \"656a2bff-5cf1-426b-b879-a6c0cc1f4cb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-jn7wk" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.573491 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glv5b\" (UniqueName: 
\"kubernetes.io/projected/a2b8440a-62ad-4172-a76e-c213c1f13873-kube-api-access-glv5b\") pod \"olm-operator-6b444d44fb-c86qm\" (UID: \"a2b8440a-62ad-4172-a76e-c213c1f13873\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c86qm" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.573574 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5dvw\" (UniqueName: \"kubernetes.io/projected/aac8abbf-f011-4386-89ed-afc8d4879670-kube-api-access-q5dvw\") pod \"marketplace-operator-79b997595-cfmwq\" (UID: \"aac8abbf-f011-4386-89ed-afc8d4879670\") " pod="openshift-marketplace/marketplace-operator-79b997595-cfmwq" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.573651 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/66ad0e5a-d916-4781-9a97-84264b86ae79-registration-dir\") pod \"csi-hostpathplugin-slj2d\" (UID: \"66ad0e5a-d916-4781-9a97-84264b86ae79\") " pod="hostpath-provisioner/csi-hostpathplugin-slj2d" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.573683 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a92f3da6-d141-4c31-8022-e4a36bcd145a-bound-sa-token\") pod \"ingress-operator-5b745b69d9-mcgvv\" (UID: \"a92f3da6-d141-4c31-8022-e4a36bcd145a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mcgvv" Jan 27 08:56:08 crc kubenswrapper[4985]: E0127 08:56:08.574276 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 08:56:09.074258477 +0000 UTC m=+153.365353498 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jn7wk" (UID: "656a2bff-5cf1-426b-b879-a6c0cc1f4cb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.574958 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/656a2bff-5cf1-426b-b879-a6c0cc1f4cb2-ca-trust-extracted\") pod \"image-registry-697d97f7c8-jn7wk\" (UID: \"656a2bff-5cf1-426b-b879-a6c0cc1f4cb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-jn7wk" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.575265 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jq9xp\" (UniqueName: \"kubernetes.io/projected/a0036b0c-985c-4832-a8a8-0a18b5cc3a52-kube-api-access-jq9xp\") pod \"collect-profiles-29491725-mxclz\" (UID: \"a0036b0c-985c-4832-a8a8-0a18b5cc3a52\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491725-mxclz" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.575363 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce63b125-3068-480c-abac-6ec26072c54a-config\") pod \"etcd-operator-b45778765-sw5fd\" (UID: \"ce63b125-3068-480c-abac-6ec26072c54a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sw5fd" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.576132 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/656a2bff-5cf1-426b-b879-a6c0cc1f4cb2-ca-trust-extracted\") pod 
\"image-registry-697d97f7c8-jn7wk\" (UID: \"656a2bff-5cf1-426b-b879-a6c0cc1f4cb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-jn7wk" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.576347 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6f7b20ce-38e9-4fae-a1b6-3746d1a94ea3-images\") pod \"machine-config-operator-74547568cd-ggt4m\" (UID: \"6f7b20ce-38e9-4fae-a1b6-3746d1a94ea3\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ggt4m" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.576431 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cd8k\" (UniqueName: \"kubernetes.io/projected/2f6334eb-afaa-4d43-a65e-cdbb598bd7cd-kube-api-access-6cd8k\") pod \"dns-default-vmpf5\" (UID: \"2f6334eb-afaa-4d43-a65e-cdbb598bd7cd\") " pod="openshift-dns/dns-default-vmpf5" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.576473 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce63b125-3068-480c-abac-6ec26072c54a-config\") pod \"etcd-operator-b45778765-sw5fd\" (UID: \"ce63b125-3068-480c-abac-6ec26072c54a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sw5fd" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.577023 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ce63b125-3068-480c-abac-6ec26072c54a-etcd-client\") pod \"etcd-operator-b45778765-sw5fd\" (UID: \"ce63b125-3068-480c-abac-6ec26072c54a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sw5fd" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.577058 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cm5d8\" (UniqueName: 
\"kubernetes.io/projected/a92f3da6-d141-4c31-8022-e4a36bcd145a-kube-api-access-cm5d8\") pod \"ingress-operator-5b745b69d9-mcgvv\" (UID: \"a92f3da6-d141-4c31-8022-e4a36bcd145a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mcgvv" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.577235 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2f6334eb-afaa-4d43-a65e-cdbb598bd7cd-metrics-tls\") pod \"dns-default-vmpf5\" (UID: \"2f6334eb-afaa-4d43-a65e-cdbb598bd7cd\") " pod="openshift-dns/dns-default-vmpf5" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.577346 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9q89t\" (UniqueName: \"kubernetes.io/projected/c74e44d9-cd57-4f32-94a4-60361125ac4d-kube-api-access-9q89t\") pod \"ingress-canary-jxkz4\" (UID: \"c74e44d9-cd57-4f32-94a4-60361125ac4d\") " pod="openshift-ingress-canary/ingress-canary-jxkz4" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.577467 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aac8abbf-f011-4386-89ed-afc8d4879670-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-cfmwq\" (UID: \"aac8abbf-f011-4386-89ed-afc8d4879670\") " pod="openshift-marketplace/marketplace-operator-79b997595-cfmwq" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.577480 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6f7b20ce-38e9-4fae-a1b6-3746d1a94ea3-images\") pod \"machine-config-operator-74547568cd-ggt4m\" (UID: \"6f7b20ce-38e9-4fae-a1b6-3746d1a94ea3\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ggt4m" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.577732 4985 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/66ad0e5a-d916-4781-9a97-84264b86ae79-socket-dir\") pod \"csi-hostpathplugin-slj2d\" (UID: \"66ad0e5a-d916-4781-9a97-84264b86ae79\") " pod="hostpath-provisioner/csi-hostpathplugin-slj2d" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.578359 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/30c225f3-39d6-41a3-b650-d5595fdd9ed1-apiservice-cert\") pod \"packageserver-d55dfcdfc-qcnmg\" (UID: \"30c225f3-39d6-41a3-b650-d5595fdd9ed1\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qcnmg" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.578430 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c74e44d9-cd57-4f32-94a4-60361125ac4d-cert\") pod \"ingress-canary-jxkz4\" (UID: \"c74e44d9-cd57-4f32-94a4-60361125ac4d\") " pod="openshift-ingress-canary/ingress-canary-jxkz4" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.578583 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/ce63b125-3068-480c-abac-6ec26072c54a-etcd-service-ca\") pod \"etcd-operator-b45778765-sw5fd\" (UID: \"ce63b125-3068-480c-abac-6ec26072c54a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sw5fd" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.578617 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2f6334eb-afaa-4d43-a65e-cdbb598bd7cd-config-volume\") pod \"dns-default-vmpf5\" (UID: \"2f6334eb-afaa-4d43-a65e-cdbb598bd7cd\") " pod="openshift-dns/dns-default-vmpf5" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.578877 4985 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vnqm\" (UniqueName: \"kubernetes.io/projected/6f7b20ce-38e9-4fae-a1b6-3746d1a94ea3-kube-api-access-6vnqm\") pod \"machine-config-operator-74547568cd-ggt4m\" (UID: \"6f7b20ce-38e9-4fae-a1b6-3746d1a94ea3\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ggt4m" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.578925 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a53b8e7-8871-4fb5-93bf-1841b4bcf915-serving-cert\") pod \"authentication-operator-69f744f599-bqt6d\" (UID: \"4a53b8e7-8871-4fb5-93bf-1841b4bcf915\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bqt6d" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.578956 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/66ad0e5a-d916-4781-9a97-84264b86ae79-csi-data-dir\") pod \"csi-hostpathplugin-slj2d\" (UID: \"66ad0e5a-d916-4781-9a97-84264b86ae79\") " pod="hostpath-provisioner/csi-hostpathplugin-slj2d" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.579005 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a92f3da6-d141-4c31-8022-e4a36bcd145a-trusted-ca\") pod \"ingress-operator-5b745b69d9-mcgvv\" (UID: \"a92f3da6-d141-4c31-8022-e4a36bcd145a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mcgvv" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.579063 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7x9r\" (UniqueName: \"kubernetes.io/projected/30c225f3-39d6-41a3-b650-d5595fdd9ed1-kube-api-access-z7x9r\") pod \"packageserver-d55dfcdfc-qcnmg\" (UID: \"30c225f3-39d6-41a3-b650-d5595fdd9ed1\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qcnmg" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.579202 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rgmd\" (UniqueName: \"kubernetes.io/projected/656a2bff-5cf1-426b-b879-a6c0cc1f4cb2-kube-api-access-9rgmd\") pod \"image-registry-697d97f7c8-jn7wk\" (UID: \"656a2bff-5cf1-426b-b879-a6c0cc1f4cb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-jn7wk" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.580228 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a53b8e7-8871-4fb5-93bf-1841b4bcf915-config\") pod \"authentication-operator-69f744f599-bqt6d\" (UID: \"4a53b8e7-8871-4fb5-93bf-1841b4bcf915\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bqt6d" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.580851 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4a53b8e7-8871-4fb5-93bf-1841b4bcf915-service-ca-bundle\") pod \"authentication-operator-69f744f599-bqt6d\" (UID: \"4a53b8e7-8871-4fb5-93bf-1841b4bcf915\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bqt6d" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.580937 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6f7b20ce-38e9-4fae-a1b6-3746d1a94ea3-auth-proxy-config\") pod \"machine-config-operator-74547568cd-ggt4m\" (UID: \"6f7b20ce-38e9-4fae-a1b6-3746d1a94ea3\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ggt4m" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.580983 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/a92f3da6-d141-4c31-8022-e4a36bcd145a-metrics-tls\") pod \"ingress-operator-5b745b69d9-mcgvv\" (UID: \"a92f3da6-d141-4c31-8022-e4a36bcd145a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mcgvv" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.581092 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a53b8e7-8871-4fb5-93bf-1841b4bcf915-config\") pod \"authentication-operator-69f744f599-bqt6d\" (UID: \"4a53b8e7-8871-4fb5-93bf-1841b4bcf915\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bqt6d" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.581122 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4a53b8e7-8871-4fb5-93bf-1841b4bcf915-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-bqt6d\" (UID: \"4a53b8e7-8871-4fb5-93bf-1841b4bcf915\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bqt6d" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.581309 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbd5v\" (UniqueName: \"kubernetes.io/projected/4f08c89f-beef-4b3f-8895-84efb56aeed0-kube-api-access-lbd5v\") pod \"machine-config-server-tm467\" (UID: \"4f08c89f-beef-4b3f-8895-84efb56aeed0\") " pod="openshift-machine-config-operator/machine-config-server-tm467" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.581499 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a92f3da6-d141-4c31-8022-e4a36bcd145a-trusted-ca\") pod \"ingress-operator-5b745b69d9-mcgvv\" (UID: \"a92f3da6-d141-4c31-8022-e4a36bcd145a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mcgvv" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.581739 
4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6f7b20ce-38e9-4fae-a1b6-3746d1a94ea3-auth-proxy-config\") pod \"machine-config-operator-74547568cd-ggt4m\" (UID: \"6f7b20ce-38e9-4fae-a1b6-3746d1a94ea3\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ggt4m" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.582264 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/30c225f3-39d6-41a3-b650-d5595fdd9ed1-webhook-cert\") pod \"packageserver-d55dfcdfc-qcnmg\" (UID: \"30c225f3-39d6-41a3-b650-d5595fdd9ed1\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qcnmg" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.582328 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/656a2bff-5cf1-426b-b879-a6c0cc1f4cb2-bound-sa-token\") pod \"image-registry-697d97f7c8-jn7wk\" (UID: \"656a2bff-5cf1-426b-b879-a6c0cc1f4cb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-jn7wk" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.582357 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a2b8440a-62ad-4172-a76e-c213c1f13873-srv-cert\") pod \"olm-operator-6b444d44fb-c86qm\" (UID: \"a2b8440a-62ad-4172-a76e-c213c1f13873\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c86qm" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.582409 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/656a2bff-5cf1-426b-b879-a6c0cc1f4cb2-installation-pull-secrets\") pod \"image-registry-697d97f7c8-jn7wk\" (UID: \"656a2bff-5cf1-426b-b879-a6c0cc1f4cb2\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-jn7wk" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.582446 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a0036b0c-985c-4832-a8a8-0a18b5cc3a52-config-volume\") pod \"collect-profiles-29491725-mxclz\" (UID: \"a0036b0c-985c-4832-a8a8-0a18b5cc3a52\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491725-mxclz" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.582672 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4a53b8e7-8871-4fb5-93bf-1841b4bcf915-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-bqt6d\" (UID: \"4a53b8e7-8871-4fb5-93bf-1841b4bcf915\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bqt6d" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.583189 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4a53b8e7-8871-4fb5-93bf-1841b4bcf915-service-ca-bundle\") pod \"authentication-operator-69f744f599-bqt6d\" (UID: \"4a53b8e7-8871-4fb5-93bf-1841b4bcf915\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bqt6d" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.583263 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/4f08c89f-beef-4b3f-8895-84efb56aeed0-certs\") pod \"machine-config-server-tm467\" (UID: \"4f08c89f-beef-4b3f-8895-84efb56aeed0\") " pod="openshift-machine-config-operator/machine-config-server-tm467" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.583707 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/ce63b125-3068-480c-abac-6ec26072c54a-serving-cert\") pod \"etcd-operator-b45778765-sw5fd\" (UID: \"ce63b125-3068-480c-abac-6ec26072c54a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sw5fd" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.585543 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/656a2bff-5cf1-426b-b879-a6c0cc1f4cb2-installation-pull-secrets\") pod \"image-registry-697d97f7c8-jn7wk\" (UID: \"656a2bff-5cf1-426b-b879-a6c0cc1f4cb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-jn7wk" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.585963 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a2b8440a-62ad-4172-a76e-c213c1f13873-srv-cert\") pod \"olm-operator-6b444d44fb-c86qm\" (UID: \"a2b8440a-62ad-4172-a76e-c213c1f13873\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c86qm" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.586874 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a2b8440a-62ad-4172-a76e-c213c1f13873-profile-collector-cert\") pod \"olm-operator-6b444d44fb-c86qm\" (UID: \"a2b8440a-62ad-4172-a76e-c213c1f13873\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c86qm" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.588236 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce63b125-3068-480c-abac-6ec26072c54a-serving-cert\") pod \"etcd-operator-b45778765-sw5fd\" (UID: \"ce63b125-3068-480c-abac-6ec26072c54a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sw5fd" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.588850 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"webhook-cert\" (UniqueName: \"kubernetes.io/secret/30c225f3-39d6-41a3-b650-d5595fdd9ed1-webhook-cert\") pod \"packageserver-d55dfcdfc-qcnmg\" (UID: \"30c225f3-39d6-41a3-b650-d5595fdd9ed1\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qcnmg" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.589050 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/ce63b125-3068-480c-abac-6ec26072c54a-etcd-ca\") pod \"etcd-operator-b45778765-sw5fd\" (UID: \"ce63b125-3068-480c-abac-6ec26072c54a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sw5fd" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.589236 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/656a2bff-5cf1-426b-b879-a6c0cc1f4cb2-trusted-ca\") pod \"image-registry-697d97f7c8-jn7wk\" (UID: \"656a2bff-5cf1-426b-b879-a6c0cc1f4cb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-jn7wk" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.589320 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/66ad0e5a-d916-4781-9a97-84264b86ae79-mountpoint-dir\") pod \"csi-hostpathplugin-slj2d\" (UID: \"66ad0e5a-d916-4781-9a97-84264b86ae79\") " pod="hostpath-provisioner/csi-hostpathplugin-slj2d" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.589423 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/656a2bff-5cf1-426b-b879-a6c0cc1f4cb2-registry-tls\") pod \"image-registry-697d97f7c8-jn7wk\" (UID: \"656a2bff-5cf1-426b-b879-a6c0cc1f4cb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-jn7wk" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.590088 4985 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a2b8440a-62ad-4172-a76e-c213c1f13873-profile-collector-cert\") pod \"olm-operator-6b444d44fb-c86qm\" (UID: \"a2b8440a-62ad-4172-a76e-c213c1f13873\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c86qm" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.590542 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/656a2bff-5cf1-426b-b879-a6c0cc1f4cb2-trusted-ca\") pod \"image-registry-697d97f7c8-jn7wk\" (UID: \"656a2bff-5cf1-426b-b879-a6c0cc1f4cb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-jn7wk" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.590614 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/ce63b125-3068-480c-abac-6ec26072c54a-etcd-service-ca\") pod \"etcd-operator-b45778765-sw5fd\" (UID: \"ce63b125-3068-480c-abac-6ec26072c54a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sw5fd" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.591742 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a92f3da6-d141-4c31-8022-e4a36bcd145a-metrics-tls\") pod \"ingress-operator-5b745b69d9-mcgvv\" (UID: \"a92f3da6-d141-4c31-8022-e4a36bcd145a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mcgvv" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.593102 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/ce63b125-3068-480c-abac-6ec26072c54a-etcd-ca\") pod \"etcd-operator-b45778765-sw5fd\" (UID: \"ce63b125-3068-480c-abac-6ec26072c54a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sw5fd" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.593655 4985 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ce63b125-3068-480c-abac-6ec26072c54a-etcd-client\") pod \"etcd-operator-b45778765-sw5fd\" (UID: \"ce63b125-3068-480c-abac-6ec26072c54a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sw5fd" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.594193 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sb8w5" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.594259 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6f7b20ce-38e9-4fae-a1b6-3746d1a94ea3-proxy-tls\") pod \"machine-config-operator-74547568cd-ggt4m\" (UID: \"6f7b20ce-38e9-4fae-a1b6-3746d1a94ea3\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ggt4m" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.595008 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a53b8e7-8871-4fb5-93bf-1841b4bcf915-serving-cert\") pod \"authentication-operator-69f744f599-bqt6d\" (UID: \"4a53b8e7-8871-4fb5-93bf-1841b4bcf915\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bqt6d" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.596059 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/656a2bff-5cf1-426b-b879-a6c0cc1f4cb2-registry-tls\") pod \"image-registry-697d97f7c8-jn7wk\" (UID: \"656a2bff-5cf1-426b-b879-a6c0cc1f4cb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-jn7wk" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.597699 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/30c225f3-39d6-41a3-b650-d5595fdd9ed1-apiservice-cert\") pod 
\"packageserver-d55dfcdfc-qcnmg\" (UID: \"30c225f3-39d6-41a3-b650-d5595fdd9ed1\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qcnmg" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.613280 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bd6k\" (UniqueName: \"kubernetes.io/projected/4a53b8e7-8871-4fb5-93bf-1841b4bcf915-kube-api-access-2bd6k\") pod \"authentication-operator-69f744f599-bqt6d\" (UID: \"4a53b8e7-8871-4fb5-93bf-1841b4bcf915\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bqt6d" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.640725 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7km2p\" (UniqueName: \"kubernetes.io/projected/ce63b125-3068-480c-abac-6ec26072c54a-kube-api-access-7km2p\") pod \"etcd-operator-b45778765-sw5fd\" (UID: \"ce63b125-3068-480c-abac-6ec26072c54a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sw5fd" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.653091 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glv5b\" (UniqueName: \"kubernetes.io/projected/a2b8440a-62ad-4172-a76e-c213c1f13873-kube-api-access-glv5b\") pod \"olm-operator-6b444d44fb-c86qm\" (UID: \"a2b8440a-62ad-4172-a76e-c213c1f13873\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c86qm" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.684248 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cm5d8\" (UniqueName: \"kubernetes.io/projected/a92f3da6-d141-4c31-8022-e4a36bcd145a-kube-api-access-cm5d8\") pod \"ingress-operator-5b745b69d9-mcgvv\" (UID: \"a92f3da6-d141-4c31-8022-e4a36bcd145a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mcgvv" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.692057 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 08:56:08 crc kubenswrapper[4985]: E0127 08:56:08.692308 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 08:56:09.192267494 +0000 UTC m=+153.483362335 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.692360 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9q89t\" (UniqueName: \"kubernetes.io/projected/c74e44d9-cd57-4f32-94a4-60361125ac4d-kube-api-access-9q89t\") pod \"ingress-canary-jxkz4\" (UID: \"c74e44d9-cd57-4f32-94a4-60361125ac4d\") " pod="openshift-ingress-canary/ingress-canary-jxkz4" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.692398 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/66ad0e5a-d916-4781-9a97-84264b86ae79-socket-dir\") pod \"csi-hostpathplugin-slj2d\" (UID: \"66ad0e5a-d916-4781-9a97-84264b86ae79\") " pod="hostpath-provisioner/csi-hostpathplugin-slj2d" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.692416 4985 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aac8abbf-f011-4386-89ed-afc8d4879670-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-cfmwq\" (UID: \"aac8abbf-f011-4386-89ed-afc8d4879670\") " pod="openshift-marketplace/marketplace-operator-79b997595-cfmwq" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.692448 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c74e44d9-cd57-4f32-94a4-60361125ac4d-cert\") pod \"ingress-canary-jxkz4\" (UID: \"c74e44d9-cd57-4f32-94a4-60361125ac4d\") " pod="openshift-ingress-canary/ingress-canary-jxkz4" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.692469 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2f6334eb-afaa-4d43-a65e-cdbb598bd7cd-config-volume\") pod \"dns-default-vmpf5\" (UID: \"2f6334eb-afaa-4d43-a65e-cdbb598bd7cd\") " pod="openshift-dns/dns-default-vmpf5" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.692492 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/66ad0e5a-d916-4781-9a97-84264b86ae79-csi-data-dir\") pod \"csi-hostpathplugin-slj2d\" (UID: \"66ad0e5a-d916-4781-9a97-84264b86ae79\") " pod="hostpath-provisioner/csi-hostpathplugin-slj2d" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.692583 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbd5v\" (UniqueName: \"kubernetes.io/projected/4f08c89f-beef-4b3f-8895-84efb56aeed0-kube-api-access-lbd5v\") pod \"machine-config-server-tm467\" (UID: \"4f08c89f-beef-4b3f-8895-84efb56aeed0\") " pod="openshift-machine-config-operator/machine-config-server-tm467" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.692613 4985 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a0036b0c-985c-4832-a8a8-0a18b5cc3a52-config-volume\") pod \"collect-profiles-29491725-mxclz\" (UID: \"a0036b0c-985c-4832-a8a8-0a18b5cc3a52\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491725-mxclz" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.692628 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/4f08c89f-beef-4b3f-8895-84efb56aeed0-certs\") pod \"machine-config-server-tm467\" (UID: \"4f08c89f-beef-4b3f-8895-84efb56aeed0\") " pod="openshift-machine-config-operator/machine-config-server-tm467" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.692653 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/66ad0e5a-d916-4781-9a97-84264b86ae79-mountpoint-dir\") pod \"csi-hostpathplugin-slj2d\" (UID: \"66ad0e5a-d916-4781-9a97-84264b86ae79\") " pod="hostpath-provisioner/csi-hostpathplugin-slj2d" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.692677 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/aac8abbf-f011-4386-89ed-afc8d4879670-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-cfmwq\" (UID: \"aac8abbf-f011-4386-89ed-afc8d4879670\") " pod="openshift-marketplace/marketplace-operator-79b997595-cfmwq" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.692699 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7stpj\" (UniqueName: \"kubernetes.io/projected/66ad0e5a-d916-4781-9a97-84264b86ae79-kube-api-access-7stpj\") pod \"csi-hostpathplugin-slj2d\" (UID: \"66ad0e5a-d916-4781-9a97-84264b86ae79\") " pod="hostpath-provisioner/csi-hostpathplugin-slj2d" Jan 27 08:56:08 crc kubenswrapper[4985]: 
I0127 08:56:08.692716 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/4f08c89f-beef-4b3f-8895-84efb56aeed0-node-bootstrap-token\") pod \"machine-config-server-tm467\" (UID: \"4f08c89f-beef-4b3f-8895-84efb56aeed0\") " pod="openshift-machine-config-operator/machine-config-server-tm467" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.692738 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jn7wk\" (UID: \"656a2bff-5cf1-426b-b879-a6c0cc1f4cb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-jn7wk" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.692755 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a0036b0c-985c-4832-a8a8-0a18b5cc3a52-secret-volume\") pod \"collect-profiles-29491725-mxclz\" (UID: \"a0036b0c-985c-4832-a8a8-0a18b5cc3a52\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491725-mxclz" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.692774 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/66ad0e5a-d916-4781-9a97-84264b86ae79-plugins-dir\") pod \"csi-hostpathplugin-slj2d\" (UID: \"66ad0e5a-d916-4781-9a97-84264b86ae79\") " pod="hostpath-provisioner/csi-hostpathplugin-slj2d" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.692792 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5dvw\" (UniqueName: \"kubernetes.io/projected/aac8abbf-f011-4386-89ed-afc8d4879670-kube-api-access-q5dvw\") pod \"marketplace-operator-79b997595-cfmwq\" (UID: \"aac8abbf-f011-4386-89ed-afc8d4879670\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-cfmwq" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.692816 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/66ad0e5a-d916-4781-9a97-84264b86ae79-registration-dir\") pod \"csi-hostpathplugin-slj2d\" (UID: \"66ad0e5a-d916-4781-9a97-84264b86ae79\") " pod="hostpath-provisioner/csi-hostpathplugin-slj2d" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.692835 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jq9xp\" (UniqueName: \"kubernetes.io/projected/a0036b0c-985c-4832-a8a8-0a18b5cc3a52-kube-api-access-jq9xp\") pod \"collect-profiles-29491725-mxclz\" (UID: \"a0036b0c-985c-4832-a8a8-0a18b5cc3a52\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491725-mxclz" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.692852 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cd8k\" (UniqueName: \"kubernetes.io/projected/2f6334eb-afaa-4d43-a65e-cdbb598bd7cd-kube-api-access-6cd8k\") pod \"dns-default-vmpf5\" (UID: \"2f6334eb-afaa-4d43-a65e-cdbb598bd7cd\") " pod="openshift-dns/dns-default-vmpf5" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.692880 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2f6334eb-afaa-4d43-a65e-cdbb598bd7cd-metrics-tls\") pod \"dns-default-vmpf5\" (UID: \"2f6334eb-afaa-4d43-a65e-cdbb598bd7cd\") " pod="openshift-dns/dns-default-vmpf5" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.694693 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/66ad0e5a-d916-4781-9a97-84264b86ae79-mountpoint-dir\") pod \"csi-hostpathplugin-slj2d\" (UID: \"66ad0e5a-d916-4781-9a97-84264b86ae79\") " 
pod="hostpath-provisioner/csi-hostpathplugin-slj2d" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.695134 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/66ad0e5a-d916-4781-9a97-84264b86ae79-csi-data-dir\") pod \"csi-hostpathplugin-slj2d\" (UID: \"66ad0e5a-d916-4781-9a97-84264b86ae79\") " pod="hostpath-provisioner/csi-hostpathplugin-slj2d" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.695796 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2f6334eb-afaa-4d43-a65e-cdbb598bd7cd-config-volume\") pod \"dns-default-vmpf5\" (UID: \"2f6334eb-afaa-4d43-a65e-cdbb598bd7cd\") " pod="openshift-dns/dns-default-vmpf5" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.696473 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a0036b0c-985c-4832-a8a8-0a18b5cc3a52-config-volume\") pod \"collect-profiles-29491725-mxclz\" (UID: \"a0036b0c-985c-4832-a8a8-0a18b5cc3a52\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491725-mxclz" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.696828 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/66ad0e5a-d916-4781-9a97-84264b86ae79-registration-dir\") pod \"csi-hostpathplugin-slj2d\" (UID: \"66ad0e5a-d916-4781-9a97-84264b86ae79\") " pod="hostpath-provisioner/csi-hostpathplugin-slj2d" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.697226 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/66ad0e5a-d916-4781-9a97-84264b86ae79-plugins-dir\") pod \"csi-hostpathplugin-slj2d\" (UID: \"66ad0e5a-d916-4781-9a97-84264b86ae79\") " pod="hostpath-provisioner/csi-hostpathplugin-slj2d" Jan 27 08:56:08 crc 
kubenswrapper[4985]: I0127 08:56:08.697237 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2f6334eb-afaa-4d43-a65e-cdbb598bd7cd-metrics-tls\") pod \"dns-default-vmpf5\" (UID: \"2f6334eb-afaa-4d43-a65e-cdbb598bd7cd\") " pod="openshift-dns/dns-default-vmpf5" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.697759 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/66ad0e5a-d916-4781-9a97-84264b86ae79-socket-dir\") pod \"csi-hostpathplugin-slj2d\" (UID: \"66ad0e5a-d916-4781-9a97-84264b86ae79\") " pod="hostpath-provisioner/csi-hostpathplugin-slj2d" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.698273 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aac8abbf-f011-4386-89ed-afc8d4879670-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-cfmwq\" (UID: \"aac8abbf-f011-4386-89ed-afc8d4879670\") " pod="openshift-marketplace/marketplace-operator-79b997595-cfmwq" Jan 27 08:56:08 crc kubenswrapper[4985]: E0127 08:56:08.698793 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 08:56:09.198762944 +0000 UTC m=+153.489857785 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jn7wk" (UID: "656a2bff-5cf1-426b-b879-a6c0cc1f4cb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.700435 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c74e44d9-cd57-4f32-94a4-60361125ac4d-cert\") pod \"ingress-canary-jxkz4\" (UID: \"c74e44d9-cd57-4f32-94a4-60361125ac4d\") " pod="openshift-ingress-canary/ingress-canary-jxkz4" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.707726 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a92f3da6-d141-4c31-8022-e4a36bcd145a-bound-sa-token\") pod \"ingress-operator-5b745b69d9-mcgvv\" (UID: \"a92f3da6-d141-4c31-8022-e4a36bcd145a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mcgvv" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.708368 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/aac8abbf-f011-4386-89ed-afc8d4879670-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-cfmwq\" (UID: \"aac8abbf-f011-4386-89ed-afc8d4879670\") " pod="openshift-marketplace/marketplace-operator-79b997595-cfmwq" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.710168 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/4f08c89f-beef-4b3f-8895-84efb56aeed0-certs\") pod \"machine-config-server-tm467\" (UID: \"4f08c89f-beef-4b3f-8895-84efb56aeed0\") " 
pod="openshift-machine-config-operator/machine-config-server-tm467" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.713716 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/4f08c89f-beef-4b3f-8895-84efb56aeed0-node-bootstrap-token\") pod \"machine-config-server-tm467\" (UID: \"4f08c89f-beef-4b3f-8895-84efb56aeed0\") " pod="openshift-machine-config-operator/machine-config-server-tm467" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.714058 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a0036b0c-985c-4832-a8a8-0a18b5cc3a52-secret-volume\") pod \"collect-profiles-29491725-mxclz\" (UID: \"a0036b0c-985c-4832-a8a8-0a18b5cc3a52\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491725-mxclz" Jan 27 08:56:08 crc kubenswrapper[4985]: W0127 08:56:08.720774 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd51bccbd_3298_4050_b817_450afdcd31ea.slice/crio-f990ab5604d82ea32b46944e9dcfb42ef9e618e5d6f1dfca3bcb7d83c4edd4e9 WatchSource:0}: Error finding container f990ab5604d82ea32b46944e9dcfb42ef9e618e5d6f1dfca3bcb7d83c4edd4e9: Status 404 returned error can't find the container with id f990ab5604d82ea32b46944e9dcfb42ef9e618e5d6f1dfca3bcb7d83c4edd4e9 Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.732257 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rgmd\" (UniqueName: \"kubernetes.io/projected/656a2bff-5cf1-426b-b879-a6c0cc1f4cb2-kube-api-access-9rgmd\") pod \"image-registry-697d97f7c8-jn7wk\" (UID: \"656a2bff-5cf1-426b-b879-a6c0cc1f4cb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-jn7wk" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.737939 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-6vnqm\" (UniqueName: \"kubernetes.io/projected/6f7b20ce-38e9-4fae-a1b6-3746d1a94ea3-kube-api-access-6vnqm\") pod \"machine-config-operator-74547568cd-ggt4m\" (UID: \"6f7b20ce-38e9-4fae-a1b6-3746d1a94ea3\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ggt4m" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.752485 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-bqt6d" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.757039 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7x9r\" (UniqueName: \"kubernetes.io/projected/30c225f3-39d6-41a3-b650-d5595fdd9ed1-kube-api-access-z7x9r\") pod \"packageserver-d55dfcdfc-qcnmg\" (UID: \"30c225f3-39d6-41a3-b650-d5595fdd9ed1\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qcnmg" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.768450 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-sw5fd" Jan 27 08:56:08 crc kubenswrapper[4985]: W0127 08:56:08.768981 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5ba029a9_6adf_4e07_91f7_f0d33ab0cb97.slice/crio-4f852d2ea655604a6d85c1d40598c1647d5c2e6b556ba16e6141213fda18679a WatchSource:0}: Error finding container 4f852d2ea655604a6d85c1d40598c1647d5c2e6b556ba16e6141213fda18679a: Status 404 returned error can't find the container with id 4f852d2ea655604a6d85c1d40598c1647d5c2e6b556ba16e6141213fda18679a Jan 27 08:56:08 crc kubenswrapper[4985]: W0127 08:56:08.773721 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc7eca83d_b3cb_484f_9e20_f04ceedd8c99.slice/crio-58eb2ff1a443a9d307b9a69eab07277dcbdd53a36e623c0710ed9219a98c7e16 WatchSource:0}: Error finding container 58eb2ff1a443a9d307b9a69eab07277dcbdd53a36e623c0710ed9219a98c7e16: Status 404 returned error can't find the container with id 58eb2ff1a443a9d307b9a69eab07277dcbdd53a36e623c0710ed9219a98c7e16 Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.782175 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/656a2bff-5cf1-426b-b879-a6c0cc1f4cb2-bound-sa-token\") pod \"image-registry-697d97f7c8-jn7wk\" (UID: \"656a2bff-5cf1-426b-b879-a6c0cc1f4cb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-jn7wk" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.794426 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 08:56:08 crc 
kubenswrapper[4985]: E0127 08:56:08.794683 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 08:56:09.294663148 +0000 UTC m=+153.585757989 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.797224 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mcgvv" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.816877 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ggt4m" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.828395 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9q89t\" (UniqueName: \"kubernetes.io/projected/c74e44d9-cd57-4f32-94a4-60361125ac4d-kube-api-access-9q89t\") pod \"ingress-canary-jxkz4\" (UID: \"c74e44d9-cd57-4f32-94a4-60361125ac4d\") " pod="openshift-ingress-canary/ingress-canary-jxkz4" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.833393 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7stpj\" (UniqueName: \"kubernetes.io/projected/66ad0e5a-d916-4781-9a97-84264b86ae79-kube-api-access-7stpj\") pod \"csi-hostpathplugin-slj2d\" (UID: \"66ad0e5a-d916-4781-9a97-84264b86ae79\") " pod="hostpath-provisioner/csi-hostpathplugin-slj2d" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.839799 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-6jb4z"] Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.858006 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c86qm" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.863552 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbd5v\" (UniqueName: \"kubernetes.io/projected/4f08c89f-beef-4b3f-8895-84efb56aeed0-kube-api-access-lbd5v\") pod \"machine-config-server-tm467\" (UID: \"4f08c89f-beef-4b3f-8895-84efb56aeed0\") " pod="openshift-machine-config-operator/machine-config-server-tm467" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.875695 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6cd8k\" (UniqueName: \"kubernetes.io/projected/2f6334eb-afaa-4d43-a65e-cdbb598bd7cd-kube-api-access-6cd8k\") pod \"dns-default-vmpf5\" (UID: \"2f6334eb-afaa-4d43-a65e-cdbb598bd7cd\") " pod="openshift-dns/dns-default-vmpf5" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.895873 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jn7wk\" (UID: \"656a2bff-5cf1-426b-b879-a6c0cc1f4cb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-jn7wk" Jan 27 08:56:08 crc kubenswrapper[4985]: E0127 08:56:08.897112 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 08:56:09.397088205 +0000 UTC m=+153.688183046 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jn7wk" (UID: "656a2bff-5cf1-426b-b879-a6c0cc1f4cb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.901436 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5dvw\" (UniqueName: \"kubernetes.io/projected/aac8abbf-f011-4386-89ed-afc8d4879670-kube-api-access-q5dvw\") pod \"marketplace-operator-79b997595-cfmwq\" (UID: \"aac8abbf-f011-4386-89ed-afc8d4879670\") " pod="openshift-marketplace/marketplace-operator-79b997595-cfmwq" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.906380 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qcnmg" Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.957130 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-cfmwq"
Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.957647 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jq9xp\" (UniqueName: \"kubernetes.io/projected/a0036b0c-985c-4832-a8a8-0a18b5cc3a52-kube-api-access-jq9xp\") pod \"collect-profiles-29491725-mxclz\" (UID: \"a0036b0c-985c-4832-a8a8-0a18b5cc3a52\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491725-mxclz"
Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.965835 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-b62st"]
Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.968860 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491725-mxclz"
Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.980749 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-jxkz4"
Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.988369 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bfdjc"]
Jan 27 08:56:08 crc kubenswrapper[4985]: I0127 08:56:08.991596 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-vmpf5"
Jan 27 08:56:09 crc kubenswrapper[4985]: I0127 08:56:08.999409 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 08:56:09 crc kubenswrapper[4985]: E0127 08:56:09.000548 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 08:56:09.500524829 +0000 UTC m=+153.791619680 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 08:56:09 crc kubenswrapper[4985]: I0127 08:56:09.021048 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-slj2d"
Jan 27 08:56:09 crc kubenswrapper[4985]: I0127 08:56:09.027921 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-tm467"
Jan 27 08:56:09 crc kubenswrapper[4985]: W0127 08:56:09.057205 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2bdbcc5d_2414_45ec_b1b4_c64f186361bb.slice/crio-bfaedbef76e36aaa758e8cfaf23a3f3ca0fea0faeffcf3ca83ad1147790ad0f3 WatchSource:0}: Error finding container bfaedbef76e36aaa758e8cfaf23a3f3ca0fea0faeffcf3ca83ad1147790ad0f3: Status 404 returned error can't find the container with id bfaedbef76e36aaa758e8cfaf23a3f3ca0fea0faeffcf3ca83ad1147790ad0f3
Jan 27 08:56:09 crc kubenswrapper[4985]: I0127 08:56:09.076228 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-bp57q"]
Jan 27 08:56:09 crc kubenswrapper[4985]: I0127 08:56:09.141534 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jn7wk\" (UID: \"656a2bff-5cf1-426b-b879-a6c0cc1f4cb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-jn7wk"
Jan 27 08:56:09 crc kubenswrapper[4985]: E0127 08:56:09.142212 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 08:56:09.64219268 +0000 UTC m=+153.933287531 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jn7wk" (UID: "656a2bff-5cf1-426b-b879-a6c0cc1f4cb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 08:56:09 crc kubenswrapper[4985]: I0127 08:56:09.244488 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 08:56:09 crc kubenswrapper[4985]: E0127 08:56:09.244824 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 08:56:09.744776791 +0000 UTC m=+154.035871632 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 08:56:09 crc kubenswrapper[4985]: I0127 08:56:09.245485 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jn7wk\" (UID: \"656a2bff-5cf1-426b-b879-a6c0cc1f4cb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-jn7wk"
Jan 27 08:56:09 crc kubenswrapper[4985]: E0127 08:56:09.245942 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 08:56:09.745928502 +0000 UTC m=+154.037023343 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jn7wk" (UID: "656a2bff-5cf1-426b-b879-a6c0cc1f4cb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 08:56:09 crc kubenswrapper[4985]: I0127 08:56:09.251586 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-qx7rg"]
Jan 27 08:56:09 crc kubenswrapper[4985]: I0127 08:56:09.284614 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lp7z4"]
Jan 27 08:56:09 crc kubenswrapper[4985]: I0127 08:56:09.294657 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-n5s9g"]
Jan 27 08:56:09 crc kubenswrapper[4985]: I0127 08:56:09.315213 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rnllt"]
Jan 27 08:56:09 crc kubenswrapper[4985]: I0127 08:56:09.346457 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 08:56:09 crc kubenswrapper[4985]: E0127 08:56:09.347107 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 08:56:09.847083814 +0000 UTC m=+154.138178645 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 08:56:09 crc kubenswrapper[4985]: I0127 08:56:09.348061 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jjksr" event={"ID":"c7eca83d-b3cb-484f-9e20-f04ceedd8c99","Type":"ContainerStarted","Data":"58eb2ff1a443a9d307b9a69eab07277dcbdd53a36e623c0710ed9219a98c7e16"}
Jan 27 08:56:09 crc kubenswrapper[4985]: I0127 08:56:09.348210 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-49dbl"]
Jan 27 08:56:09 crc kubenswrapper[4985]: I0127 08:56:09.369798 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-t2jvg" event={"ID":"e77a9fac-a804-4afa-a69a-1abcd4e81281","Type":"ContainerStarted","Data":"06df6247a3e09f915ac93061fd52e615ba2b431d6a12761b351b057e079dc821"}
Jan 27 08:56:09 crc kubenswrapper[4985]: I0127 08:56:09.376969 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-5q47j" event={"ID":"019bf0d4-de52-4a7b-b950-4da2766cea13","Type":"ContainerStarted","Data":"77739dbef9d94f58cf44f0b540f2f5e6803c7fd8c968b04c20a8510e1364bc0a"}
Jan 27 08:56:09 crc kubenswrapper[4985]: I0127 08:56:09.382844 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-5q47j"
Jan 27 08:56:09 crc kubenswrapper[4985]: I0127 08:56:09.392029 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lrffm" event={"ID":"79f74589-abcf-4b67-815f-cfc142a9413f","Type":"ContainerStarted","Data":"219ccbf7dc20d639add6235336bb8090d4d6068e7433b0f524dd6707583666d2"}
Jan 27 08:56:09 crc kubenswrapper[4985]: I0127 08:56:09.408387 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bfdjc" event={"ID":"2bdbcc5d-2414-45ec-b1b4-c64f186361bb","Type":"ContainerStarted","Data":"bfaedbef76e36aaa758e8cfaf23a3f3ca0fea0faeffcf3ca83ad1147790ad0f3"}
Jan 27 08:56:09 crc kubenswrapper[4985]: I0127 08:56:09.409668 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-bp57q" event={"ID":"ba2694c4-c10f-42d7-96f5-4b47a4206710","Type":"ContainerStarted","Data":"817d31b518b675421548bea5b184c5456cac1ed169ff64b3b9fb6e7ad8e3402e"}
Jan 27 08:56:09 crc kubenswrapper[4985]: I0127 08:56:09.412705 4985 patch_prober.go:28] interesting pod/console-operator-58897d9998-5q47j container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body=
Jan 27 08:56:09 crc kubenswrapper[4985]: I0127 08:56:09.412767 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-5q47j" podUID="019bf0d4-de52-4a7b-b950-4da2766cea13" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused"
Jan 27 08:56:09 crc kubenswrapper[4985]: I0127 08:56:09.414023 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-wg68v" event={"ID":"5ba029a9-6adf-4e07-91f7-f0d33ab0cb97","Type":"ContainerStarted","Data":"4f852d2ea655604a6d85c1d40598c1647d5c2e6b556ba16e6141213fda18679a"}
Jan 27 08:56:09 crc kubenswrapper[4985]: I0127 08:56:09.420717 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6jb4z" event={"ID":"5df5265a-d186-4cf0-8e03-e96b84f62a30","Type":"ContainerStarted","Data":"eda282a2889acbc7e62b7b51ddcad264eb22f9093d07bf5c3dda1ebf5c1e53bf"}
Jan 27 08:56:09 crc kubenswrapper[4985]: I0127 08:56:09.422719 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-89tzz"]
Jan 27 08:56:09 crc kubenswrapper[4985]: I0127 08:56:09.425025 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-l8j28" event={"ID":"d51bccbd-3298-4050-b817-450afdcd31ea","Type":"ContainerStarted","Data":"f990ab5604d82ea32b46944e9dcfb42ef9e618e5d6f1dfca3bcb7d83c4edd4e9"}
Jan 27 08:56:09 crc kubenswrapper[4985]: I0127 08:56:09.430572 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-t4tc7" event={"ID":"5e3df4d8-af39-4eb4-b2c7-5127144a44a6","Type":"ContainerStarted","Data":"1331787492cf06756a2f2fcfeb697f0501794c1b710d6a7ac14550417b8db490"}
Jan 27 08:56:09 crc kubenswrapper[4985]: I0127 08:56:09.444149 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-j6mc9"]
Jan 27 08:56:09 crc kubenswrapper[4985]: I0127 08:56:09.448885 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jn7wk\" (UID: \"656a2bff-5cf1-426b-b879-a6c0cc1f4cb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-jn7wk"
Jan 27 08:56:09 crc kubenswrapper[4985]: E0127 08:56:09.450804 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 08:56:09.950782335 +0000 UTC m=+154.241877176 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jn7wk" (UID: "656a2bff-5cf1-426b-b879-a6c0cc1f4cb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 08:56:09 crc kubenswrapper[4985]: I0127 08:56:09.466335 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-b62st" event={"ID":"7d5439e4-b788-4576-9108-32f6889511dc","Type":"ContainerStarted","Data":"0e5fcd2a69b938cbb62720d14f52cd77805f93bd606ad32ddc3bcd9ebe2f7c87"}
Jan 27 08:56:09 crc kubenswrapper[4985]: I0127 08:56:09.486211 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-42sq5"]
Jan 27 08:56:09 crc kubenswrapper[4985]: I0127 08:56:09.522955 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-lcx4s"]
Jan 27 08:56:09 crc kubenswrapper[4985]: I0127 08:56:09.553594 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 08:56:09 crc kubenswrapper[4985]: E0127 08:56:09.554417 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 08:56:10.054380663 +0000 UTC m=+154.345475504 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 08:56:09 crc kubenswrapper[4985]: E0127 08:56:09.558274 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 08:56:10.05825437 +0000 UTC m=+154.349349211 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jn7wk" (UID: "656a2bff-5cf1-426b-b879-a6c0cc1f4cb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 08:56:09 crc kubenswrapper[4985]: I0127 08:56:09.562683 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-bskcz" event={"ID":"a9f39981-0c5b-4358-a7f7-41165d56405b","Type":"ContainerStarted","Data":"21c7e89ca2a60d7efa90ec4a51f4c99ae3c12546d97b938aa1a2efa2bd9ef6c0"}
Jan 27 08:56:09 crc kubenswrapper[4985]: I0127 08:56:09.562717 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-bskcz" event={"ID":"a9f39981-0c5b-4358-a7f7-41165d56405b","Type":"ContainerStarted","Data":"7aa1edd9801e350fbb5f5fceee39bd52e161a021a7587c74bc87f388b520aa6d"}
Jan 27 08:56:09 crc kubenswrapper[4985]: I0127 08:56:09.566796 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jn7wk\" (UID: \"656a2bff-5cf1-426b-b879-a6c0cc1f4cb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-jn7wk"
Jan 27 08:56:09 crc kubenswrapper[4985]: I0127 08:56:09.576725 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-q7dv9" event={"ID":"5bd4e7de-4244-4c33-90eb-799159106b7b","Type":"ContainerStarted","Data":"44db6cd45c04a3f8ef36cd7980b452fe87ba2462eb54dc804a87733a68c32c3f"}
Jan 27 08:56:09 crc kubenswrapper[4985]: I0127 08:56:09.584328 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-bqt6d"]
Jan 27 08:56:09 crc kubenswrapper[4985]: W0127 08:56:09.587027 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaebec426_a442_4b90_ad31_46b5e14c0aa1.slice/crio-45351d5890334fb092dea2957e11512b30358ad4612339f62bd7f8a3b17f86f5 WatchSource:0}: Error finding container 45351d5890334fb092dea2957e11512b30358ad4612339f62bd7f8a3b17f86f5: Status 404 returned error can't find the container with id 45351d5890334fb092dea2957e11512b30358ad4612339f62bd7f8a3b17f86f5
Jan 27 08:56:09 crc kubenswrapper[4985]: I0127 08:56:09.590129 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-x4cs4" event={"ID":"72fd06a7-765f-4f95-89f1-3bd8a0fa466b","Type":"ContainerStarted","Data":"cc6bfb4233eb7543fe122604793466cb7adc42820d39984d7edde7ce33c74e06"}
Jan 27 08:56:09 crc kubenswrapper[4985]: I0127 08:56:09.590193 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-x4cs4" event={"ID":"72fd06a7-765f-4f95-89f1-3bd8a0fa466b","Type":"ContainerStarted","Data":"686b436c818b13d055111b8b8964602223193fd08abb34f11752c6411b16066a"}
Jan 27 08:56:09 crc kubenswrapper[4985]: I0127 08:56:09.608404 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-sw5fd"]
Jan 27 08:56:09 crc kubenswrapper[4985]: I0127 08:56:09.611710 4985 generic.go:334] "Generic (PLEG): container finished" podID="6cf28995-1608-4130-9284-e3d638c4cf25" containerID="15cac5f9c27ca4044047f75ce5cef71b78b99f6d3ee32da7673c0a99200f2133" exitCode=0
Jan 27 08:56:09 crc kubenswrapper[4985]: I0127 08:56:09.614446 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kfclz"]
Jan 27 08:56:09 crc kubenswrapper[4985]: I0127 08:56:09.614503 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-vl84l" event={"ID":"6cf28995-1608-4130-9284-e3d638c4cf25","Type":"ContainerDied","Data":"15cac5f9c27ca4044047f75ce5cef71b78b99f6d3ee32da7673c0a99200f2133"}
Jan 27 08:56:09 crc kubenswrapper[4985]: I0127 08:56:09.619341 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-p29gl"]
Jan 27 08:56:09 crc kubenswrapper[4985]: I0127 08:56:09.672902 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 08:56:09 crc kubenswrapper[4985]: E0127 08:56:09.674794 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 08:56:10.174763006 +0000 UTC m=+154.465858037 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 08:56:09 crc kubenswrapper[4985]: I0127 08:56:09.694790 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m2wsf" event={"ID":"0ce21e1c-0ee3-4e71-8b52-be876c32121d","Type":"ContainerStarted","Data":"39e82dda4fe94ca27068d63f9201923dbc3baaa024eecbe8c995579c87dcd232"}
Jan 27 08:56:09 crc kubenswrapper[4985]: I0127 08:56:09.711349 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sb8w5"]
Jan 27 08:56:09 crc kubenswrapper[4985]: I0127 08:56:09.730096 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cjgmx" event={"ID":"7ad59ee9-aae9-4af9-bcdd-abfcc3f0f15a","Type":"ContainerStarted","Data":"9b73ae91f24c362ea34afd61513671bb39ed586ec482604a88aaadcdc5009668"}
Jan 27 08:56:09 crc kubenswrapper[4985]: I0127 08:56:09.745919 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-mcgvv"]
Jan 27 08:56:09 crc kubenswrapper[4985]: I0127 08:56:09.774732 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jn7wk\" (UID: \"656a2bff-5cf1-426b-b879-a6c0cc1f4cb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-jn7wk"
Jan 27 08:56:09 crc kubenswrapper[4985]: E0127 08:56:09.777144 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 08:56:10.27712518 +0000 UTC m=+154.568220111 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jn7wk" (UID: "656a2bff-5cf1-426b-b879-a6c0cc1f4cb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 08:56:09 crc kubenswrapper[4985]: I0127 08:56:09.802784 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-vmpf5"]
Jan 27 08:56:09 crc kubenswrapper[4985]: I0127 08:56:09.876590 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 08:56:09 crc kubenswrapper[4985]: E0127 08:56:09.876921 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 08:56:10.376902953 +0000 UTC m=+154.667997794 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 08:56:09 crc kubenswrapper[4985]: I0127 08:56:09.978009 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jn7wk\" (UID: \"656a2bff-5cf1-426b-b879-a6c0cc1f4cb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-jn7wk"
Jan 27 08:56:09 crc kubenswrapper[4985]: E0127 08:56:09.978860 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 08:56:10.478839774 +0000 UTC m=+154.769934615 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jn7wk" (UID: "656a2bff-5cf1-426b-b879-a6c0cc1f4cb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 08:56:09 crc kubenswrapper[4985]: W0127 08:56:09.986412 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2f6334eb_afaa_4d43_a65e_cdbb598bd7cd.slice/crio-e8a9bdfc6fcefdd26c8e922fd9773a7a97fc7ab10d275f23d1c97c367498dcd7 WatchSource:0}: Error finding container e8a9bdfc6fcefdd26c8e922fd9773a7a97fc7ab10d275f23d1c97c367498dcd7: Status 404 returned error can't find the container with id e8a9bdfc6fcefdd26c8e922fd9773a7a97fc7ab10d275f23d1c97c367498dcd7
Jan 27 08:56:10 crc kubenswrapper[4985]: I0127 08:56:10.081639 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 08:56:10 crc kubenswrapper[4985]: E0127 08:56:10.082748 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 08:56:10.582724641 +0000 UTC m=+154.873819482 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 08:56:10 crc kubenswrapper[4985]: I0127 08:56:10.158630 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qcnmg"]
Jan 27 08:56:10 crc kubenswrapper[4985]: I0127 08:56:10.183538 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jn7wk\" (UID: \"656a2bff-5cf1-426b-b879-a6c0cc1f4cb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-jn7wk"
Jan 27 08:56:10 crc kubenswrapper[4985]: E0127 08:56:10.184098 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 08:56:10.684074616 +0000 UTC m=+154.975169457 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jn7wk" (UID: "656a2bff-5cf1-426b-b879-a6c0cc1f4cb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 08:56:10 crc kubenswrapper[4985]: I0127 08:56:10.206707 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-jxkz4"]
Jan 27 08:56:10 crc kubenswrapper[4985]: I0127 08:56:10.254324 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-ggt4m"]
Jan 27 08:56:10 crc kubenswrapper[4985]: I0127 08:56:10.280676 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c86qm"]
Jan 27 08:56:10 crc kubenswrapper[4985]: I0127 08:56:10.283240 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cfmwq"]
Jan 27 08:56:10 crc kubenswrapper[4985]: I0127 08:56:10.285415 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 08:56:10 crc kubenswrapper[4985]: E0127 08:56:10.285749 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 08:56:10.785730172 +0000 UTC m=+155.076825003 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 08:56:10 crc kubenswrapper[4985]: I0127 08:56:10.329179 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-t2jvg" podStartSLOduration=128.329159563 podStartE2EDuration="2m8.329159563s" podCreationTimestamp="2026-01-27 08:54:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:56:10.32758262 +0000 UTC m=+154.618677461" watchObservedRunningTime="2026-01-27 08:56:10.329159563 +0000 UTC m=+154.620254404"
Jan 27 08:56:10 crc kubenswrapper[4985]: I0127 08:56:10.333580 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-slj2d"]
Jan 27 08:56:10 crc kubenswrapper[4985]: I0127 08:56:10.333615 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491725-mxclz"]
Jan 27 08:56:10 crc kubenswrapper[4985]: I0127 08:56:10.387169 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jn7wk\" (UID: \"656a2bff-5cf1-426b-b879-a6c0cc1f4cb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-jn7wk"
Jan 27 08:56:10 crc kubenswrapper[4985]: E0127 08:56:10.387683 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 08:56:10.887658403 +0000 UTC m=+155.178753244 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jn7wk" (UID: "656a2bff-5cf1-426b-b879-a6c0cc1f4cb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 08:56:10 crc kubenswrapper[4985]: I0127 08:56:10.410066 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-5q47j" podStartSLOduration=128.410041663 podStartE2EDuration="2m8.410041663s" podCreationTimestamp="2026-01-27 08:54:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:56:10.408632183 +0000 UTC m=+154.699727044" watchObservedRunningTime="2026-01-27 08:56:10.410041663 +0000 UTC m=+154.701136504"
Jan 27 08:56:10 crc kubenswrapper[4985]: I0127 08:56:10.489136 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 08:56:10 crc kubenswrapper[4985]: E0127 08:56:10.489503 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 08:56:10.989473782 +0000 UTC m=+155.280568623 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 08:56:10 crc kubenswrapper[4985]: W0127 08:56:10.526314 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod30c225f3_39d6_41a3_b650_d5595fdd9ed1.slice/crio-89cbf2d34179489ff6dadc5b37e060bfecd13a95b0069a8c5bdbc71dcdeb16da WatchSource:0}: Error finding container 89cbf2d34179489ff6dadc5b37e060bfecd13a95b0069a8c5bdbc71dcdeb16da: Status 404 returned error can't find the container with id 89cbf2d34179489ff6dadc5b37e060bfecd13a95b0069a8c5bdbc71dcdeb16da
Jan 27 08:56:10 crc kubenswrapper[4985]: I0127 08:56:10.590328 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jn7wk\" (UID: \"656a2bff-5cf1-426b-b879-a6c0cc1f4cb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-jn7wk"
Jan 27 08:56:10 crc kubenswrapper[4985]: E0127 08:56:10.590735 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 08:56:11.090720605 +0000 UTC m=+155.381815446 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jn7wk" (UID: "656a2bff-5cf1-426b-b879-a6c0cc1f4cb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 08:56:10 crc kubenswrapper[4985]: I0127 08:56:10.694142 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 08:56:10 crc kubenswrapper[4985]: E0127 08:56:10.694860 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 08:56:11.194839008 +0000 UTC m=+155.485933849 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 08:56:10 crc kubenswrapper[4985]: I0127 08:56:10.733403 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-q7dv9" podStartSLOduration=128.733376124 podStartE2EDuration="2m8.733376124s" podCreationTimestamp="2026-01-27 08:54:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:56:10.731422571 +0000 UTC m=+155.022517432" watchObservedRunningTime="2026-01-27 08:56:10.733376124 +0000 UTC m=+155.024470965" Jan 27 08:56:10 crc kubenswrapper[4985]: I0127 08:56:10.765897 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-t4tc7" event={"ID":"5e3df4d8-af39-4eb4-b2c7-5127144a44a6","Type":"ContainerStarted","Data":"221f468cea907b4f9491eb6549aeecf3630a6dda8f0f36ae5cc0ed343406c1cf"} Jan 27 08:56:10 crc kubenswrapper[4985]: I0127 08:56:10.766564 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-t4tc7" Jan 27 08:56:10 crc kubenswrapper[4985]: I0127 08:56:10.770443 4985 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-t4tc7 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.7:6443/healthz\": dial tcp 10.217.0.7:6443: connect: connection refused" start-of-body= Jan 27 08:56:10 crc kubenswrapper[4985]: I0127 08:56:10.770557 4985 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-t4tc7" podUID="5e3df4d8-af39-4eb4-b2c7-5127144a44a6" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.7:6443/healthz\": dial tcp 10.217.0.7:6443: connect: connection refused" Jan 27 08:56:10 crc kubenswrapper[4985]: I0127 08:56:10.772570 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-89tzz" event={"ID":"aebec426-a442-4b90-ad31-46b5e14c0aa1","Type":"ContainerStarted","Data":"45351d5890334fb092dea2957e11512b30358ad4612339f62bd7f8a3b17f86f5"} Jan 27 08:56:10 crc kubenswrapper[4985]: I0127 08:56:10.778618 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-n5s9g" event={"ID":"6064471e-2f00-4499-b351-c1d205c81ba7","Type":"ContainerStarted","Data":"f8ee2c6bfaec7ebf639f4a40f777a7ebbf31bf38d41950ab9cfd2aaf12771d39"} Jan 27 08:56:10 crc kubenswrapper[4985]: I0127 08:56:10.784548 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rnllt" event={"ID":"31009973-dd0c-43a1-8a33-4a7aba2a74da","Type":"ContainerStarted","Data":"f16b3dff96a976cbe5a3a50424bf1f32e21dbc8515afb9e0f3703d4764785418"} Jan 27 08:56:10 crc kubenswrapper[4985]: I0127 08:56:10.785638 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-lcx4s" event={"ID":"d6088e48-728e-4a96-b305-c7f86d9fe9f4","Type":"ContainerStarted","Data":"040eae0ff71b9c2725cd4a932a131b531a22d1da1a2bbbae63d856d22d62940a"} Jan 27 08:56:10 crc kubenswrapper[4985]: I0127 08:56:10.790486 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-p29gl" 
event={"ID":"967e6ec0-0c85-4a3d-abf5-db10daf91f5c","Type":"ContainerStarted","Data":"51632cdda1743854112cc757f48148681670f2db11d07ad00d3c7eb09810e9e9"} Jan 27 08:56:10 crc kubenswrapper[4985]: I0127 08:56:10.792605 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j6mc9" event={"ID":"7d918baa-44fd-4067-8e83-5da61aedf201","Type":"ContainerStarted","Data":"935f64d0fa84ff422b73b249a975e438febd5943d026f7c973fc041697db1ff1"} Jan 27 08:56:10 crc kubenswrapper[4985]: I0127 08:56:10.795740 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jn7wk\" (UID: \"656a2bff-5cf1-426b-b879-a6c0cc1f4cb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-jn7wk" Jan 27 08:56:10 crc kubenswrapper[4985]: E0127 08:56:10.796268 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 08:56:11.296255016 +0000 UTC m=+155.587349857 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jn7wk" (UID: "656a2bff-5cf1-426b-b879-a6c0cc1f4cb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 08:56:10 crc kubenswrapper[4985]: I0127 08:56:10.800377 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-wg68v" event={"ID":"5ba029a9-6adf-4e07-91f7-f0d33ab0cb97","Type":"ContainerStarted","Data":"7e8d285f41e595f03775e7627d87ac8b14d3ef16e098d9bfc84f8139870db2fe"} Jan 27 08:56:10 crc kubenswrapper[4985]: I0127 08:56:10.802331 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ggt4m" event={"ID":"6f7b20ce-38e9-4fae-a1b6-3746d1a94ea3","Type":"ContainerStarted","Data":"c4eea404241d25a1461fb339ba45ba89d55136d34586b677f3f56be6ae43f79f"} Jan 27 08:56:10 crc kubenswrapper[4985]: I0127 08:56:10.810387 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-vmpf5" event={"ID":"2f6334eb-afaa-4d43-a65e-cdbb598bd7cd","Type":"ContainerStarted","Data":"e8a9bdfc6fcefdd26c8e922fd9773a7a97fc7ab10d275f23d1c97c367498dcd7"} Jan 27 08:56:10 crc kubenswrapper[4985]: I0127 08:56:10.812009 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sb8w5" event={"ID":"22c1a5b1-eda2-4098-9345-2742a61e8b20","Type":"ContainerStarted","Data":"583dd3ab6b0b551e42468b04e4158edc2494a876f24240e8e9c14afdc118eef5"} Jan 27 08:56:10 crc kubenswrapper[4985]: I0127 08:56:10.824215 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-qx7rg" 
event={"ID":"f7e5fb60-49e2-4aec-be4d-71f7f0dd4ea1","Type":"ContainerStarted","Data":"5ee9cebffc11a34deed6b2c12825c76daefc7e69fefc27565d4a50a2072e4676"} Jan 27 08:56:10 crc kubenswrapper[4985]: I0127 08:56:10.826991 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-bqt6d" event={"ID":"4a53b8e7-8871-4fb5-93bf-1841b4bcf915","Type":"ContainerStarted","Data":"4db4b00fcb7886fe700323ff6598bdb4fb3e4a9e7bcd1a136f7490aca7e266aa"} Jan 27 08:56:10 crc kubenswrapper[4985]: I0127 08:56:10.829551 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qcnmg" event={"ID":"30c225f3-39d6-41a3-b650-d5595fdd9ed1","Type":"ContainerStarted","Data":"89cbf2d34179489ff6dadc5b37e060bfecd13a95b0069a8c5bdbc71dcdeb16da"} Jan 27 08:56:10 crc kubenswrapper[4985]: I0127 08:56:10.842070 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lp7z4" event={"ID":"8bab8050-633d-4949-9cc6-c85351f2641d","Type":"ContainerStarted","Data":"98f4f742224963bd13ab28666d94f37d4b887e45cd09ae097686ce550b5e5a38"} Jan 27 08:56:10 crc kubenswrapper[4985]: I0127 08:56:10.846830 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-sw5fd" event={"ID":"ce63b125-3068-480c-abac-6ec26072c54a","Type":"ContainerStarted","Data":"102c748d5387f2cff15e7730f013934539d67bea3e060f57fcff84ebe7ca866c"} Jan 27 08:56:10 crc kubenswrapper[4985]: I0127 08:56:10.849468 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-42sq5" event={"ID":"5951e740-6404-42a5-867e-892f4234e62e","Type":"ContainerStarted","Data":"7be2e5d7d72a1eea0be7112242356b6252e850ae77195b4e92324097cb7b2470"} Jan 27 08:56:10 crc kubenswrapper[4985]: I0127 08:56:10.851654 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-dns-operator/dns-operator-744455d44c-49dbl" event={"ID":"2e510d6e-846d-4099-bc1d-d55a75969151","Type":"ContainerStarted","Data":"e1afe6324516a92edfd7f9f669f4fce882e5d9f55523ac6aeefdf3829805b1e5"} Jan 27 08:56:10 crc kubenswrapper[4985]: I0127 08:56:10.855635 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lrffm" event={"ID":"79f74589-abcf-4b67-815f-cfc142a9413f","Type":"ContainerStarted","Data":"f82ea48ed72233d1c5f9476cec08dfd0f52ae00392dde170005390f6a3970ef5"} Jan 27 08:56:10 crc kubenswrapper[4985]: I0127 08:56:10.858720 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mcgvv" event={"ID":"a92f3da6-d141-4c31-8022-e4a36bcd145a","Type":"ContainerStarted","Data":"8dc26d9ccca50e5ce1f75e0f2e6f430aa91c4dbd0adaf14b949b43ea129b9a74"} Jan 27 08:56:10 crc kubenswrapper[4985]: I0127 08:56:10.868340 4985 generic.go:334] "Generic (PLEG): container finished" podID="0ce21e1c-0ee3-4e71-8b52-be876c32121d" containerID="4b58d263cc07e8191280628a839ab79003450eacd86cd22a15c574d6a852dee2" exitCode=0 Jan 27 08:56:10 crc kubenswrapper[4985]: I0127 08:56:10.869243 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m2wsf" event={"ID":"0ce21e1c-0ee3-4e71-8b52-be876c32121d","Type":"ContainerDied","Data":"4b58d263cc07e8191280628a839ab79003450eacd86cd22a15c574d6a852dee2"} Jan 27 08:56:10 crc kubenswrapper[4985]: I0127 08:56:10.886017 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-slj2d" event={"ID":"66ad0e5a-d916-4781-9a97-84264b86ae79","Type":"ContainerStarted","Data":"3dc9b9e9748f42d2a5ce6d82b2bd67ba78e34cecb65c73e710db5b5c304cb1fc"} Jan 27 08:56:10 crc kubenswrapper[4985]: I0127 08:56:10.896858 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 08:56:10 crc kubenswrapper[4985]: E0127 08:56:10.897149 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 08:56:11.397130109 +0000 UTC m=+155.688224950 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 08:56:10 crc kubenswrapper[4985]: I0127 08:56:10.903349 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jjksr" event={"ID":"c7eca83d-b3cb-484f-9e20-f04ceedd8c99","Type":"ContainerStarted","Data":"9921fadc44fd47ef950dca6f4f1496066074e9fb66b2e127760b82c662fb52f9"} Jan 27 08:56:10 crc kubenswrapper[4985]: I0127 08:56:10.904528 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jjksr" Jan 27 08:56:10 crc kubenswrapper[4985]: I0127 08:56:10.907473 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-tm467" 
event={"ID":"4f08c89f-beef-4b3f-8895-84efb56aeed0","Type":"ContainerStarted","Data":"c71b19954943b6994e4b725a7b3788394284d3951d5df272cd8056aa514e0e14"} Jan 27 08:56:10 crc kubenswrapper[4985]: I0127 08:56:10.909403 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c86qm" event={"ID":"a2b8440a-62ad-4172-a76e-c213c1f13873","Type":"ContainerStarted","Data":"f9f05a67437c154edfa37fbe18ab778c17ab0a770de1355567b9b1ab63b93c7d"} Jan 27 08:56:10 crc kubenswrapper[4985]: I0127 08:56:10.911153 4985 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-jjksr container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Jan 27 08:56:10 crc kubenswrapper[4985]: I0127 08:56:10.911194 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jjksr" podUID="c7eca83d-b3cb-484f-9e20-f04ceedd8c99" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Jan 27 08:56:10 crc kubenswrapper[4985]: I0127 08:56:10.914346 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-cfmwq" event={"ID":"aac8abbf-f011-4386-89ed-afc8d4879670","Type":"ContainerStarted","Data":"d48c8cc75ef20d45f4103994bbf0f80d7fe50fc93302fee314a3116198138471"} Jan 27 08:56:10 crc kubenswrapper[4985]: I0127 08:56:10.999278 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jn7wk\" (UID: 
\"656a2bff-5cf1-426b-b879-a6c0cc1f4cb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-jn7wk" Jan 27 08:56:11 crc kubenswrapper[4985]: E0127 08:56:11.000181 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 08:56:11.500162512 +0000 UTC m=+155.791257363 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jn7wk" (UID: "656a2bff-5cf1-426b-b879-a6c0cc1f4cb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 08:56:11 crc kubenswrapper[4985]: I0127 08:56:11.013295 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-x4cs4" podStartSLOduration=129.013272224 podStartE2EDuration="2m9.013272224s" podCreationTimestamp="2026-01-27 08:54:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:56:11.01062341 +0000 UTC m=+155.301718261" watchObservedRunningTime="2026-01-27 08:56:11.013272224 +0000 UTC m=+155.304367075" Jan 27 08:56:11 crc kubenswrapper[4985]: I0127 08:56:11.024034 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-jxkz4" event={"ID":"c74e44d9-cd57-4f32-94a4-60361125ac4d","Type":"ContainerStarted","Data":"8d9324731dce406d617ebd10bb76baeafc3b994d09b13e182cb2dbd481bc43ec"} Jan 27 08:56:11 crc kubenswrapper[4985]: I0127 08:56:11.029939 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29491725-mxclz" event={"ID":"a0036b0c-985c-4832-a8a8-0a18b5cc3a52","Type":"ContainerStarted","Data":"f7098e3739d9d7cf1d1b6eb0de9221cbb719ec4b777756ddb9275952d058be8a"} Jan 27 08:56:11 crc kubenswrapper[4985]: I0127 08:56:11.034392 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-l8j28" event={"ID":"d51bccbd-3298-4050-b817-450afdcd31ea","Type":"ContainerStarted","Data":"b42dfe2bce34be126a9efd3824624523bb2f7bb5cd18621f87ade1ef23bf95ba"} Jan 27 08:56:11 crc kubenswrapper[4985]: I0127 08:56:11.035502 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-x4cs4" Jan 27 08:56:11 crc kubenswrapper[4985]: I0127 08:56:11.035588 4985 patch_prober.go:28] interesting pod/console-operator-58897d9998-5q47j container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Jan 27 08:56:11 crc kubenswrapper[4985]: I0127 08:56:11.035617 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-5q47j" podUID="019bf0d4-de52-4a7b-b950-4da2766cea13" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused" Jan 27 08:56:11 crc kubenswrapper[4985]: I0127 08:56:11.041406 4985 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-x4cs4 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 27 08:56:11 crc kubenswrapper[4985]: I0127 08:56:11.041467 4985 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-controller-manager/controller-manager-879f6c89f-x4cs4" podUID="72fd06a7-765f-4f95-89f1-3bd8a0fa466b" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Jan 27 08:56:11 crc kubenswrapper[4985]: I0127 08:56:11.101418 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 08:56:11 crc kubenswrapper[4985]: E0127 08:56:11.102796 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 08:56:11.602777352 +0000 UTC m=+155.893872193 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 08:56:11 crc kubenswrapper[4985]: I0127 08:56:11.156569 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cjgmx" podStartSLOduration=129.15654562 podStartE2EDuration="2m9.15654562s" podCreationTimestamp="2026-01-27 08:54:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:56:11.093537006 +0000 UTC m=+155.384631867" watchObservedRunningTime="2026-01-27 08:56:11.15654562 +0000 UTC m=+155.447640471" Jan 27 08:56:11 crc kubenswrapper[4985]: I0127 08:56:11.203659 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jn7wk\" (UID: \"656a2bff-5cf1-426b-b879-a6c0cc1f4cb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-jn7wk" Jan 27 08:56:11 crc kubenswrapper[4985]: E0127 08:56:11.204082 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 08:56:11.704063486 +0000 UTC m=+155.995158327 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jn7wk" (UID: "656a2bff-5cf1-426b-b879-a6c0cc1f4cb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 08:56:11 crc kubenswrapper[4985]: I0127 08:56:11.254228 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-t4tc7" podStartSLOduration=129.254211315 podStartE2EDuration="2m9.254211315s" podCreationTimestamp="2026-01-27 08:54:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:56:11.252657112 +0000 UTC m=+155.543751943" watchObservedRunningTime="2026-01-27 08:56:11.254211315 +0000 UTC m=+155.545306156" Jan 27 08:56:11 crc kubenswrapper[4985]: I0127 08:56:11.296789 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jjksr" podStartSLOduration=128.296767124 podStartE2EDuration="2m8.296767124s" podCreationTimestamp="2026-01-27 08:54:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:56:11.29195908 +0000 UTC m=+155.583053941" watchObservedRunningTime="2026-01-27 08:56:11.296767124 +0000 UTC m=+155.587861965" Jan 27 08:56:11 crc kubenswrapper[4985]: I0127 08:56:11.304425 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 08:56:11 crc kubenswrapper[4985]: E0127 08:56:11.304687 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 08:56:11.804643481 +0000 UTC m=+156.095738322 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 08:56:11 crc kubenswrapper[4985]: I0127 08:56:11.304825 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jn7wk\" (UID: \"656a2bff-5cf1-426b-b879-a6c0cc1f4cb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-jn7wk" Jan 27 08:56:11 crc kubenswrapper[4985]: E0127 08:56:11.305273 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 08:56:11.805263769 +0000 UTC m=+156.096358600 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jn7wk" (UID: "656a2bff-5cf1-426b-b879-a6c0cc1f4cb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 08:56:11 crc kubenswrapper[4985]: I0127 08:56:11.333984 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lrffm" podStartSLOduration=129.333958483 podStartE2EDuration="2m9.333958483s" podCreationTimestamp="2026-01-27 08:54:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:56:11.328639886 +0000 UTC m=+155.619734737" watchObservedRunningTime="2026-01-27 08:56:11.333958483 +0000 UTC m=+155.625053324" Jan 27 08:56:11 crc kubenswrapper[4985]: I0127 08:56:11.406062 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 08:56:11 crc kubenswrapper[4985]: E0127 08:56:11.406178 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 08:56:11.906157972 +0000 UTC m=+156.197252813 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 08:56:11 crc kubenswrapper[4985]: I0127 08:56:11.406597 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jn7wk\" (UID: \"656a2bff-5cf1-426b-b879-a6c0cc1f4cb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-jn7wk" Jan 27 08:56:11 crc kubenswrapper[4985]: E0127 08:56:11.406938 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 08:56:11.906930163 +0000 UTC m=+156.198025004 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jn7wk" (UID: "656a2bff-5cf1-426b-b879-a6c0cc1f4cb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 08:56:11 crc kubenswrapper[4985]: I0127 08:56:11.508092 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 08:56:11 crc kubenswrapper[4985]: E0127 08:56:11.508297 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 08:56:12.008251808 +0000 UTC m=+156.299346649 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 08:56:11 crc kubenswrapper[4985]: I0127 08:56:11.508931 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jn7wk\" (UID: \"656a2bff-5cf1-426b-b879-a6c0cc1f4cb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-jn7wk" Jan 27 08:56:11 crc kubenswrapper[4985]: E0127 08:56:11.509603 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 08:56:12.009580795 +0000 UTC m=+156.300675646 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jn7wk" (UID: "656a2bff-5cf1-426b-b879-a6c0cc1f4cb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 08:56:11 crc kubenswrapper[4985]: I0127 08:56:11.619420 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 08:56:11 crc kubenswrapper[4985]: E0127 08:56:11.619608 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 08:56:12.119574771 +0000 UTC m=+156.410669612 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 08:56:11 crc kubenswrapper[4985]: I0127 08:56:11.620096 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jn7wk\" (UID: \"656a2bff-5cf1-426b-b879-a6c0cc1f4cb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-jn7wk" Jan 27 08:56:11 crc kubenswrapper[4985]: E0127 08:56:11.620500 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 08:56:12.120479765 +0000 UTC m=+156.411574776 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jn7wk" (UID: "656a2bff-5cf1-426b-b879-a6c0cc1f4cb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 08:56:11 crc kubenswrapper[4985]: I0127 08:56:11.721209 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 08:56:11 crc kubenswrapper[4985]: E0127 08:56:11.721631 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 08:56:12.221585395 +0000 UTC m=+156.512680236 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 08:56:11 crc kubenswrapper[4985]: I0127 08:56:11.822595 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jn7wk\" (UID: \"656a2bff-5cf1-426b-b879-a6c0cc1f4cb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-jn7wk" Jan 27 08:56:11 crc kubenswrapper[4985]: E0127 08:56:11.823448 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 08:56:12.323435695 +0000 UTC m=+156.614530536 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jn7wk" (UID: "656a2bff-5cf1-426b-b879-a6c0cc1f4cb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 08:56:11 crc kubenswrapper[4985]: I0127 08:56:11.831472 4985 patch_prober.go:28] interesting pod/machine-config-daemon-lp9n5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 08:56:11 crc kubenswrapper[4985]: I0127 08:56:11.831566 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 08:56:11 crc kubenswrapper[4985]: I0127 08:56:11.924722 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 08:56:11 crc kubenswrapper[4985]: E0127 08:56:11.925206 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 08:56:12.425181242 +0000 UTC m=+156.716276083 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 08:56:12 crc kubenswrapper[4985]: I0127 08:56:12.026914 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jn7wk\" (UID: \"656a2bff-5cf1-426b-b879-a6c0cc1f4cb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-jn7wk" Jan 27 08:56:12 crc kubenswrapper[4985]: E0127 08:56:12.027463 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 08:56:12.527433572 +0000 UTC m=+156.818528413 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jn7wk" (UID: "656a2bff-5cf1-426b-b879-a6c0cc1f4cb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 08:56:12 crc kubenswrapper[4985]: I0127 08:56:12.060768 4985 generic.go:334] "Generic (PLEG): container finished" podID="5df5265a-d186-4cf0-8e03-e96b84f62a30" containerID="05f8e2788bef3f22bd05e4e016808abc42d34f3a3f75e5dd0c12ab973d64c902" exitCode=0 Jan 27 08:56:12 crc kubenswrapper[4985]: I0127 08:56:12.061134 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6jb4z" event={"ID":"5df5265a-d186-4cf0-8e03-e96b84f62a30","Type":"ContainerDied","Data":"05f8e2788bef3f22bd05e4e016808abc42d34f3a3f75e5dd0c12ab973d64c902"} Jan 27 08:56:12 crc kubenswrapper[4985]: I0127 08:56:12.064971 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-89tzz" event={"ID":"aebec426-a442-4b90-ad31-46b5e14c0aa1","Type":"ContainerStarted","Data":"2290e6e6f03d6dcbeb9a01ce406a059ae37a39618852c836dab4ec25a9c3aad4"} Jan 27 08:56:12 crc kubenswrapper[4985]: I0127 08:56:12.068591 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-42sq5" event={"ID":"5951e740-6404-42a5-867e-892f4234e62e","Type":"ContainerStarted","Data":"e4c82fd0a5693654ab49dfe020192494f1d4ff0e98e87fe22bd0c643629d62c3"} Jan 27 08:56:12 crc kubenswrapper[4985]: I0127 08:56:12.120604 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ggt4m" 
event={"ID":"6f7b20ce-38e9-4fae-a1b6-3746d1a94ea3","Type":"ContainerStarted","Data":"02762331dfc0c0ac7b15e97ba47a472d012c52b7058860ea635feefeb6cb08f2"} Jan 27 08:56:12 crc kubenswrapper[4985]: I0127 08:56:12.120693 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ggt4m" event={"ID":"6f7b20ce-38e9-4fae-a1b6-3746d1a94ea3","Type":"ContainerStarted","Data":"f3fc8847441d971a056722cdb1ca6be5d5b5c5ebf15c5ad7ba4fab12ae543d7f"} Jan 27 08:56:12 crc kubenswrapper[4985]: I0127 08:56:12.128382 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 08:56:12 crc kubenswrapper[4985]: E0127 08:56:12.128680 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 08:56:12.628640764 +0000 UTC m=+156.919735625 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 08:56:12 crc kubenswrapper[4985]: I0127 08:56:12.128864 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jn7wk\" (UID: \"656a2bff-5cf1-426b-b879-a6c0cc1f4cb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-jn7wk" Jan 27 08:56:12 crc kubenswrapper[4985]: E0127 08:56:12.130378 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 08:56:12.630364212 +0000 UTC m=+156.921459053 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jn7wk" (UID: "656a2bff-5cf1-426b-b879-a6c0cc1f4cb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 08:56:12 crc kubenswrapper[4985]: I0127 08:56:12.134189 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sb8w5" event={"ID":"22c1a5b1-eda2-4098-9345-2742a61e8b20","Type":"ContainerStarted","Data":"2aaf50e09bd5142b4904ae34c49c5583e9a56ed47ca23d6f4361c0a84b705bc6"} Jan 27 08:56:12 crc kubenswrapper[4985]: I0127 08:56:12.150054 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-42sq5" podStartSLOduration=130.150031557 podStartE2EDuration="2m10.150031557s" podCreationTimestamp="2026-01-27 08:54:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:56:12.114572475 +0000 UTC m=+156.405667326" watchObservedRunningTime="2026-01-27 08:56:12.150031557 +0000 UTC m=+156.441126398" Jan 27 08:56:12 crc kubenswrapper[4985]: I0127 08:56:12.154773 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m2wsf" event={"ID":"0ce21e1c-0ee3-4e71-8b52-be876c32121d","Type":"ContainerStarted","Data":"b8a2e207aa202d40361320a215fa40e722818e8411d395e3e414460ed483c91d"} Jan 27 08:56:12 crc kubenswrapper[4985]: I0127 08:56:12.156027 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m2wsf" Jan 27 08:56:12 
crc kubenswrapper[4985]: I0127 08:56:12.174166 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-n5s9g" event={"ID":"6064471e-2f00-4499-b351-c1d205c81ba7","Type":"ContainerStarted","Data":"c40c6af8cc60376a5cf96a525169e5500ee8af559196011a97356f036eb5fac9"} Jan 27 08:56:12 crc kubenswrapper[4985]: I0127 08:56:12.176289 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lp7z4" event={"ID":"8bab8050-633d-4949-9cc6-c85351f2641d","Type":"ContainerStarted","Data":"1dff3129002753e8b762aac04b82dbbb507419509cd31b425d1d25f7540bad0a"} Jan 27 08:56:12 crc kubenswrapper[4985]: I0127 08:56:12.177726 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lp7z4" Jan 27 08:56:12 crc kubenswrapper[4985]: I0127 08:56:12.180461 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bfdjc" event={"ID":"2bdbcc5d-2414-45ec-b1b4-c64f186361bb","Type":"ContainerStarted","Data":"3ca0e0b6020baedb2828b54fb743e0c10eb45e2fc31f1810928192fe481be508"} Jan 27 08:56:12 crc kubenswrapper[4985]: I0127 08:56:12.187505 4985 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-lp7z4 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" start-of-body= Jan 27 08:56:12 crc kubenswrapper[4985]: I0127 08:56:12.187611 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lp7z4" podUID="8bab8050-633d-4949-9cc6-c85351f2641d" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: 
connection refused" Jan 27 08:56:12 crc kubenswrapper[4985]: I0127 08:56:12.190758 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ggt4m" podStartSLOduration=130.190733764 podStartE2EDuration="2m10.190733764s" podCreationTimestamp="2026-01-27 08:54:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:56:12.151000653 +0000 UTC m=+156.442095524" watchObservedRunningTime="2026-01-27 08:56:12.190733764 +0000 UTC m=+156.481828605" Jan 27 08:56:12 crc kubenswrapper[4985]: I0127 08:56:12.191424 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sb8w5" podStartSLOduration=130.191419973 podStartE2EDuration="2m10.191419973s" podCreationTimestamp="2026-01-27 08:54:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:56:12.177914789 +0000 UTC m=+156.469009630" watchObservedRunningTime="2026-01-27 08:56:12.191419973 +0000 UTC m=+156.482514804" Jan 27 08:56:12 crc kubenswrapper[4985]: I0127 08:56:12.192885 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c86qm" event={"ID":"a2b8440a-62ad-4172-a76e-c213c1f13873","Type":"ContainerStarted","Data":"3078818cacb6d24ea1e6de9b4fe9b037ecd9c377827e7612ec8e9bbf8c2bf9db"} Jan 27 08:56:12 crc kubenswrapper[4985]: I0127 08:56:12.193989 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c86qm" Jan 27 08:56:12 crc kubenswrapper[4985]: I0127 08:56:12.202967 4985 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-c86qm container/olm-operator 
namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body= Jan 27 08:56:12 crc kubenswrapper[4985]: I0127 08:56:12.203091 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c86qm" podUID="a2b8440a-62ad-4172-a76e-c213c1f13873" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" Jan 27 08:56:12 crc kubenswrapper[4985]: I0127 08:56:12.215642 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-bqt6d" event={"ID":"4a53b8e7-8871-4fb5-93bf-1841b4bcf915","Type":"ContainerStarted","Data":"c5f745c22b80d21f7df3471e0aa261e4ea7861a0cc0806449c1da6662580d887"} Jan 27 08:56:12 crc kubenswrapper[4985]: I0127 08:56:12.222800 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m2wsf" podStartSLOduration=130.222775481 podStartE2EDuration="2m10.222775481s" podCreationTimestamp="2026-01-27 08:54:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:56:12.203261161 +0000 UTC m=+156.494356002" watchObservedRunningTime="2026-01-27 08:56:12.222775481 +0000 UTC m=+156.513870322" Jan 27 08:56:12 crc kubenswrapper[4985]: I0127 08:56:12.229966 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 08:56:12 crc kubenswrapper[4985]: E0127 
08:56:12.233657 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 08:56:12.733613211 +0000 UTC m=+157.024708052 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 08:56:12 crc kubenswrapper[4985]: I0127 08:56:12.237794 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jn7wk\" (UID: \"656a2bff-5cf1-426b-b879-a6c0cc1f4cb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-jn7wk" Jan 27 08:56:12 crc kubenswrapper[4985]: E0127 08:56:12.241124 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 08:56:12.741106809 +0000 UTC m=+157.032201640 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jn7wk" (UID: "656a2bff-5cf1-426b-b879-a6c0cc1f4cb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 08:56:12 crc kubenswrapper[4985]: I0127 08:56:12.248723 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qcnmg" event={"ID":"30c225f3-39d6-41a3-b650-d5595fdd9ed1","Type":"ContainerStarted","Data":"7baf80c5c10cf75aff2bdee68292df31d6f50b28113fde70060689021ea715d7"} Jan 27 08:56:12 crc kubenswrapper[4985]: I0127 08:56:12.249165 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qcnmg" Jan 27 08:56:12 crc kubenswrapper[4985]: I0127 08:56:12.265947 4985 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-qcnmg container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" start-of-body= Jan 27 08:56:12 crc kubenswrapper[4985]: I0127 08:56:12.266869 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qcnmg" podUID="30c225f3-39d6-41a3-b650-d5595fdd9ed1" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" Jan 27 08:56:12 crc kubenswrapper[4985]: I0127 08:56:12.267852 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lp7z4" 
podStartSLOduration=130.267832938 podStartE2EDuration="2m10.267832938s" podCreationTimestamp="2026-01-27 08:54:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:56:12.23935885 +0000 UTC m=+156.530453681" watchObservedRunningTime="2026-01-27 08:56:12.267832938 +0000 UTC m=+156.558927779" Jan 27 08:56:12 crc kubenswrapper[4985]: I0127 08:56:12.268496 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-lcx4s" event={"ID":"d6088e48-728e-4a96-b305-c7f86d9fe9f4","Type":"ContainerStarted","Data":"370a02cbf2356c58ead22335da70d6d1a4eac571b29cbe52f99a8c27dbf00b19"} Jan 27 08:56:12 crc kubenswrapper[4985]: I0127 08:56:12.278227 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491725-mxclz" event={"ID":"a0036b0c-985c-4832-a8a8-0a18b5cc3a52","Type":"ContainerStarted","Data":"354b410a86f88c8585bd149eab4ae0c378499ebcdc0845d0f2a588ba9beaca34"} Jan 27 08:56:12 crc kubenswrapper[4985]: I0127 08:56:12.280136 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-bqt6d" podStartSLOduration=130.280112979 podStartE2EDuration="2m10.280112979s" podCreationTimestamp="2026-01-27 08:54:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:56:12.272791526 +0000 UTC m=+156.563886367" watchObservedRunningTime="2026-01-27 08:56:12.280112979 +0000 UTC m=+156.571207820" Jan 27 08:56:12 crc kubenswrapper[4985]: I0127 08:56:12.281082 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-b62st" 
event={"ID":"7d5439e4-b788-4576-9108-32f6889511dc","Type":"ContainerStarted","Data":"9f5ceb5f1d3df5d4475f0de11921face849e7c61088e1b34c973230891e45c6f"} Jan 27 08:56:12 crc kubenswrapper[4985]: I0127 08:56:12.281139 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-b62st" event={"ID":"7d5439e4-b788-4576-9108-32f6889511dc","Type":"ContainerStarted","Data":"f2dcaf20416644c6f4fd927a9e8cf27ce02e24c85b89d76ab746554162e2c1fe"} Jan 27 08:56:12 crc kubenswrapper[4985]: I0127 08:56:12.290932 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-vl84l" event={"ID":"6cf28995-1608-4130-9284-e3d638c4cf25","Type":"ContainerStarted","Data":"37c7c776cb980a9851dcdccd64dba4b934fc1bdbb345067131c8ad966c78ab27"} Jan 27 08:56:12 crc kubenswrapper[4985]: I0127 08:56:12.306382 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-jxkz4" event={"ID":"c74e44d9-cd57-4f32-94a4-60361125ac4d","Type":"ContainerStarted","Data":"1f69ec4cd26adc0a8785654980aca386679a2fc7982f8d12059e9323462aec3d"} Jan 27 08:56:12 crc kubenswrapper[4985]: I0127 08:56:12.308481 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bfdjc" podStartSLOduration=130.308454663 podStartE2EDuration="2m10.308454663s" podCreationTimestamp="2026-01-27 08:54:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:56:12.297287524 +0000 UTC m=+156.588382365" watchObservedRunningTime="2026-01-27 08:56:12.308454663 +0000 UTC m=+156.599549494" Jan 27 08:56:12 crc kubenswrapper[4985]: I0127 08:56:12.324214 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-vmpf5" 
event={"ID":"2f6334eb-afaa-4d43-a65e-cdbb598bd7cd","Type":"ContainerStarted","Data":"621b0fd383c226937d192669bf139765fc866fcc5c2900411cba3536c0218ed9"} Jan 27 08:56:12 crc kubenswrapper[4985]: I0127 08:56:12.328414 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-49dbl" event={"ID":"2e510d6e-846d-4099-bc1d-d55a75969151","Type":"ContainerStarted","Data":"8d5c8242e8da766302c9c55b99ecbbbfb7e79cc46ed88c9601d33b6289803938"} Jan 27 08:56:12 crc kubenswrapper[4985]: I0127 08:56:12.340363 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 08:56:12 crc kubenswrapper[4985]: E0127 08:56:12.340853 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 08:56:12.840818919 +0000 UTC m=+157.131913760 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 08:56:12 crc kubenswrapper[4985]: I0127 08:56:12.341167 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jn7wk\" (UID: \"656a2bff-5cf1-426b-b879-a6c0cc1f4cb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-jn7wk" Jan 27 08:56:12 crc kubenswrapper[4985]: I0127 08:56:12.342776 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c86qm" podStartSLOduration=130.342749563 podStartE2EDuration="2m10.342749563s" podCreationTimestamp="2026-01-27 08:54:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:56:12.324361103 +0000 UTC m=+156.615455944" watchObservedRunningTime="2026-01-27 08:56:12.342749563 +0000 UTC m=+156.633844404" Jan 27 08:56:12 crc kubenswrapper[4985]: E0127 08:56:12.343199 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 08:56:12.843190355 +0000 UTC m=+157.134285196 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jn7wk" (UID: "656a2bff-5cf1-426b-b879-a6c0cc1f4cb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 08:56:12 crc kubenswrapper[4985]: I0127 08:56:12.348815 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-lcx4s" podStartSLOduration=130.34879149 podStartE2EDuration="2m10.34879149s" podCreationTimestamp="2026-01-27 08:54:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:56:12.348283846 +0000 UTC m=+156.639378697" watchObservedRunningTime="2026-01-27 08:56:12.34879149 +0000 UTC m=+156.639886331" Jan 27 08:56:12 crc kubenswrapper[4985]: I0127 08:56:12.377400 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-jxkz4" podStartSLOduration=7.377377961 podStartE2EDuration="7.377377961s" podCreationTimestamp="2026-01-27 08:56:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:56:12.374730288 +0000 UTC m=+156.665825119" watchObservedRunningTime="2026-01-27 08:56:12.377377961 +0000 UTC m=+156.668472802" Jan 27 08:56:12 crc kubenswrapper[4985]: I0127 08:56:12.377739 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-bskcz" event={"ID":"a9f39981-0c5b-4358-a7f7-41165d56405b","Type":"ContainerStarted","Data":"a6965d54bfd7453d0d91f20d70be1477883bb10052b345355b72eabda5698e45"} Jan 27 08:56:12 crc 
kubenswrapper[4985]: I0127 08:56:12.421331 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-b62st" podStartSLOduration=130.421307498 podStartE2EDuration="2m10.421307498s" podCreationTimestamp="2026-01-27 08:54:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:56:12.420129895 +0000 UTC m=+156.711224736" watchObservedRunningTime="2026-01-27 08:56:12.421307498 +0000 UTC m=+156.712402339" Jan 27 08:56:12 crc kubenswrapper[4985]: I0127 08:56:12.442768 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 08:56:12 crc kubenswrapper[4985]: E0127 08:56:12.443681 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 08:56:12.943661326 +0000 UTC m=+157.234756157 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 08:56:12 crc kubenswrapper[4985]: I0127 08:56:12.449009 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rnllt" event={"ID":"31009973-dd0c-43a1-8a33-4a7aba2a74da","Type":"ContainerStarted","Data":"39b85e492d9efc6e105672151fb7d2bf1db02176eb4bc27c496dbc48dca39cd1"} Jan 27 08:56:12 crc kubenswrapper[4985]: I0127 08:56:12.485964 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-p29gl" event={"ID":"967e6ec0-0c85-4a3d-abf5-db10daf91f5c","Type":"ContainerStarted","Data":"753b9aa7feac3d89455cb64146653174a42af0af79f4a2a2502cc556c5030fc5"} Jan 27 08:56:12 crc kubenswrapper[4985]: I0127 08:56:12.492743 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-tm467" event={"ID":"4f08c89f-beef-4b3f-8895-84efb56aeed0","Type":"ContainerStarted","Data":"a80b9add1a62dc4e1d2e4a36283cc8d69fdbca9ee95b71a361d82e15ac44aa1f"} Jan 27 08:56:12 crc kubenswrapper[4985]: I0127 08:56:12.510215 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-sw5fd" event={"ID":"ce63b125-3068-480c-abac-6ec26072c54a","Type":"ContainerStarted","Data":"f190d0455c4a5b40466eab10ef9fce5c98b97399b0177dd34af4e3f3f395353d"} Jan 27 08:56:12 crc kubenswrapper[4985]: I0127 08:56:12.545826 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jn7wk\" (UID: \"656a2bff-5cf1-426b-b879-a6c0cc1f4cb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-jn7wk" Jan 27 08:56:12 crc kubenswrapper[4985]: E0127 08:56:12.550440 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 08:56:13.050421672 +0000 UTC m=+157.341516513 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jn7wk" (UID: "656a2bff-5cf1-426b-b879-a6c0cc1f4cb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 08:56:12 crc kubenswrapper[4985]: I0127 08:56:12.564133 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-qx7rg" event={"ID":"f7e5fb60-49e2-4aec-be4d-71f7f0dd4ea1","Type":"ContainerStarted","Data":"68a71d555639e8a61569ce6841727e4b09295607b72dcdb561f47764427a08a5"} Jan 27 08:56:12 crc kubenswrapper[4985]: I0127 08:56:12.564235 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-qx7rg" Jan 27 08:56:12 crc kubenswrapper[4985]: I0127 08:56:12.566350 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29491725-mxclz" podStartSLOduration=130.566329733 podStartE2EDuration="2m10.566329733s" podCreationTimestamp="2026-01-27 08:54:02 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:56:12.483203851 +0000 UTC m=+156.774298692" watchObservedRunningTime="2026-01-27 08:56:12.566329733 +0000 UTC m=+156.857424574" Jan 27 08:56:12 crc kubenswrapper[4985]: I0127 08:56:12.567028 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qcnmg" podStartSLOduration=130.567022322 podStartE2EDuration="2m10.567022322s" podCreationTimestamp="2026-01-27 08:54:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:56:12.563366981 +0000 UTC m=+156.854461822" watchObservedRunningTime="2026-01-27 08:56:12.567022322 +0000 UTC m=+156.858117163" Jan 27 08:56:12 crc kubenswrapper[4985]: I0127 08:56:12.572984 4985 patch_prober.go:28] interesting pod/downloads-7954f5f757-qx7rg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 27 08:56:12 crc kubenswrapper[4985]: I0127 08:56:12.573058 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-qx7rg" podUID="f7e5fb60-49e2-4aec-be4d-71f7f0dd4ea1" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 27 08:56:12 crc kubenswrapper[4985]: I0127 08:56:12.597101 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j6mc9" event={"ID":"7d918baa-44fd-4067-8e83-5da61aedf201","Type":"ContainerStarted","Data":"0d71a0bdcd7ccf691a967fb2a66e71b15fe934df8f62444822029e1197243ee2"} Jan 27 08:56:12 crc kubenswrapper[4985]: I0127 08:56:12.625646 4985 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mcgvv" event={"ID":"a92f3da6-d141-4c31-8022-e4a36bcd145a","Type":"ContainerStarted","Data":"b2090a65bb7323a27b50bc3d1ee0c73bc0940a115f999daf41440a35aa440bf3"} Jan 27 08:56:12 crc kubenswrapper[4985]: I0127 08:56:12.625712 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mcgvv" event={"ID":"a92f3da6-d141-4c31-8022-e4a36bcd145a","Type":"ContainerStarted","Data":"53fa00ee889f465053f06ac95a53f5e0c51157edc60d5a0bb2bafdb9303db2a0"} Jan 27 08:56:12 crc kubenswrapper[4985]: I0127 08:56:12.646444 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-bp57q" event={"ID":"ba2694c4-c10f-42d7-96f5-4b47a4206710","Type":"ContainerStarted","Data":"864a6a220dd7dbfed7b36cb832df274a6530e0a338ca359607b8ebcb314f98c2"} Jan 27 08:56:12 crc kubenswrapper[4985]: I0127 08:56:12.647291 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 08:56:12 crc kubenswrapper[4985]: I0127 08:56:12.647874 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-tm467" podStartSLOduration=7.6478442399999995 podStartE2EDuration="7.64784424s" podCreationTimestamp="2026-01-27 08:56:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:56:12.646168873 +0000 UTC m=+156.937263714" watchObservedRunningTime="2026-01-27 08:56:12.64784424 +0000 UTC m=+156.938939081" Jan 27 08:56:12 crc kubenswrapper[4985]: E0127 08:56:12.648464 4985 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 08:56:13.148439936 +0000 UTC m=+157.439534777 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 08:56:12 crc kubenswrapper[4985]: I0127 08:56:12.662403 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kfclz" event={"ID":"4a63b2ad-84b3-4476-b253-73410ba0fed1","Type":"ContainerStarted","Data":"5b7f6a7b37b30ca8603f103a72ad6e6803e2abc22c1bc17077f80e0162a85e62"} Jan 27 08:56:12 crc kubenswrapper[4985]: I0127 08:56:12.662494 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kfclz" event={"ID":"4a63b2ad-84b3-4476-b253-73410ba0fed1","Type":"ContainerStarted","Data":"7ee028b1c3052a241ef068292afbe9ce6ef39523c8167b1bfff1f315f2bdc30f"} Jan 27 08:56:12 crc kubenswrapper[4985]: I0127 08:56:12.669974 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-cfmwq" event={"ID":"aac8abbf-f011-4386-89ed-afc8d4879670","Type":"ContainerStarted","Data":"400f21cbbef961dd3dccbbb569297622b9284d0f12b21b74315e0e966bfdf9f9"} Jan 27 08:56:12 crc kubenswrapper[4985]: I0127 08:56:12.672430 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/marketplace-operator-79b997595-cfmwq" Jan 27 08:56:12 crc kubenswrapper[4985]: I0127 08:56:12.673622 4985 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-t4tc7 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.7:6443/healthz\": dial tcp 10.217.0.7:6443: connect: connection refused" start-of-body= Jan 27 08:56:12 crc kubenswrapper[4985]: I0127 08:56:12.673663 4985 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-x4cs4 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 27 08:56:12 crc kubenswrapper[4985]: I0127 08:56:12.673691 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-x4cs4" podUID="72fd06a7-765f-4f95-89f1-3bd8a0fa466b" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Jan 27 08:56:12 crc kubenswrapper[4985]: I0127 08:56:12.673622 4985 patch_prober.go:28] interesting pod/console-operator-58897d9998-5q47j container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Jan 27 08:56:12 crc kubenswrapper[4985]: I0127 08:56:12.673765 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-5q47j" podUID="019bf0d4-de52-4a7b-b950-4da2766cea13" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused" Jan 27 08:56:12 crc kubenswrapper[4985]: I0127 08:56:12.674035 4985 prober.go:107] 
"Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-t4tc7" podUID="5e3df4d8-af39-4eb4-b2c7-5127144a44a6" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.7:6443/healthz\": dial tcp 10.217.0.7:6443: connect: connection refused" Jan 27 08:56:12 crc kubenswrapper[4985]: I0127 08:56:12.676788 4985 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-cfmwq container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= Jan 27 08:56:12 crc kubenswrapper[4985]: I0127 08:56:12.676830 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-cfmwq" podUID="aac8abbf-f011-4386-89ed-afc8d4879670" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" Jan 27 08:56:12 crc kubenswrapper[4985]: I0127 08:56:12.712046 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-sw5fd" podStartSLOduration=130.712022757 podStartE2EDuration="2m10.712022757s" podCreationTimestamp="2026-01-27 08:54:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:56:12.707892662 +0000 UTC m=+156.998987503" watchObservedRunningTime="2026-01-27 08:56:12.712022757 +0000 UTC m=+157.003117588" Jan 27 08:56:12 crc kubenswrapper[4985]: I0127 08:56:12.742389 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-bskcz" podStartSLOduration=130.742369837 podStartE2EDuration="2m10.742369837s" podCreationTimestamp="2026-01-27 08:54:02 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:56:12.74030715 +0000 UTC m=+157.031401991" watchObservedRunningTime="2026-01-27 08:56:12.742369837 +0000 UTC m=+157.033464678" Jan 27 08:56:12 crc kubenswrapper[4985]: I0127 08:56:12.749857 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jn7wk\" (UID: \"656a2bff-5cf1-426b-b879-a6c0cc1f4cb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-jn7wk" Jan 27 08:56:12 crc kubenswrapper[4985]: E0127 08:56:12.768922 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 08:56:13.268902151 +0000 UTC m=+157.559996992 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jn7wk" (UID: "656a2bff-5cf1-426b-b879-a6c0cc1f4cb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 08:56:12 crc kubenswrapper[4985]: I0127 08:56:12.860781 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 08:56:12 crc kubenswrapper[4985]: E0127 08:56:12.861284 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 08:56:13.361247578 +0000 UTC m=+157.652342419 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 08:56:12 crc kubenswrapper[4985]: I0127 08:56:12.861559 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jn7wk\" (UID: \"656a2bff-5cf1-426b-b879-a6c0cc1f4cb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-jn7wk" Jan 27 08:56:12 crc kubenswrapper[4985]: E0127 08:56:12.861970 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 08:56:13.361953427 +0000 UTC m=+157.653048268 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jn7wk" (UID: "656a2bff-5cf1-426b-b879-a6c0cc1f4cb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 08:56:12 crc kubenswrapper[4985]: I0127 08:56:12.862259 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-p29gl" podStartSLOduration=130.862240025 podStartE2EDuration="2m10.862240025s" podCreationTimestamp="2026-01-27 08:54:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:56:12.860485187 +0000 UTC m=+157.151580028" watchObservedRunningTime="2026-01-27 08:56:12.862240025 +0000 UTC m=+157.153334866" Jan 27 08:56:12 crc kubenswrapper[4985]: I0127 08:56:12.862915 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rnllt" podStartSLOduration=130.862906124 podStartE2EDuration="2m10.862906124s" podCreationTimestamp="2026-01-27 08:54:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:56:12.803851668 +0000 UTC m=+157.094946519" watchObservedRunningTime="2026-01-27 08:56:12.862906124 +0000 UTC m=+157.154000965" Jan 27 08:56:12 crc kubenswrapper[4985]: I0127 08:56:12.940563 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j6mc9" podStartSLOduration=130.940539603 
podStartE2EDuration="2m10.940539603s" podCreationTimestamp="2026-01-27 08:54:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:56:12.938903988 +0000 UTC m=+157.229998839" watchObservedRunningTime="2026-01-27 08:56:12.940539603 +0000 UTC m=+157.231634444" Jan 27 08:56:12 crc kubenswrapper[4985]: I0127 08:56:12.940836 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kfclz" podStartSLOduration=130.940833371 podStartE2EDuration="2m10.940833371s" podCreationTimestamp="2026-01-27 08:54:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:56:12.907017825 +0000 UTC m=+157.198112666" watchObservedRunningTime="2026-01-27 08:56:12.940833371 +0000 UTC m=+157.231928212" Jan 27 08:56:12 crc kubenswrapper[4985]: I0127 08:56:12.965214 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 08:56:12 crc kubenswrapper[4985]: E0127 08:56:12.965704 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 08:56:13.465685069 +0000 UTC m=+157.756779910 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 08:56:13 crc kubenswrapper[4985]: I0127 08:56:13.021999 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-wg68v" podStartSLOduration=131.021976878 podStartE2EDuration="2m11.021976878s" podCreationTimestamp="2026-01-27 08:54:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:56:12.986990139 +0000 UTC m=+157.278085000" watchObservedRunningTime="2026-01-27 08:56:13.021976878 +0000 UTC m=+157.313071719" Jan 27 08:56:13 crc kubenswrapper[4985]: I0127 08:56:13.048257 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-bp57q" podStartSLOduration=130.048237115 podStartE2EDuration="2m10.048237115s" podCreationTimestamp="2026-01-27 08:54:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:56:13.023068418 +0000 UTC m=+157.314163259" watchObservedRunningTime="2026-01-27 08:56:13.048237115 +0000 UTC m=+157.339331956" Jan 27 08:56:13 crc kubenswrapper[4985]: I0127 08:56:13.048625 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-l8j28" podStartSLOduration=130.048621486 podStartE2EDuration="2m10.048621486s" podCreationTimestamp="2026-01-27 08:54:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:56:13.046878157 +0000 UTC m=+157.337972998" watchObservedRunningTime="2026-01-27 08:56:13.048621486 +0000 UTC m=+157.339716327" Jan 27 08:56:13 crc kubenswrapper[4985]: I0127 08:56:13.070535 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jn7wk\" (UID: \"656a2bff-5cf1-426b-b879-a6c0cc1f4cb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-jn7wk" Jan 27 08:56:13 crc kubenswrapper[4985]: E0127 08:56:13.070934 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 08:56:13.570919333 +0000 UTC m=+157.862014174 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jn7wk" (UID: "656a2bff-5cf1-426b-b879-a6c0cc1f4cb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 08:56:13 crc kubenswrapper[4985]: I0127 08:56:13.072135 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-qx7rg" podStartSLOduration=131.072113816 podStartE2EDuration="2m11.072113816s" podCreationTimestamp="2026-01-27 08:54:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:56:13.071604492 +0000 UTC m=+157.362699333" watchObservedRunningTime="2026-01-27 08:56:13.072113816 +0000 UTC m=+157.363208657" Jan 27 08:56:13 crc kubenswrapper[4985]: I0127 08:56:13.109169 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mcgvv" podStartSLOduration=131.109150701 podStartE2EDuration="2m11.109150701s" podCreationTimestamp="2026-01-27 08:54:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:56:13.10801584 +0000 UTC m=+157.399110671" watchObservedRunningTime="2026-01-27 08:56:13.109150701 +0000 UTC m=+157.400245542" Jan 27 08:56:13 crc kubenswrapper[4985]: I0127 08:56:13.142992 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-cfmwq" podStartSLOduration=131.142974058 podStartE2EDuration="2m11.142974058s" podCreationTimestamp="2026-01-27 08:54:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:56:13.142639698 +0000 UTC m=+157.433734539" watchObservedRunningTime="2026-01-27 08:56:13.142974058 +0000 UTC m=+157.434068899" Jan 27 08:56:13 crc kubenswrapper[4985]: I0127 08:56:13.172079 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 08:56:13 crc kubenswrapper[4985]: E0127 08:56:13.172554 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 08:56:13.672534726 +0000 UTC m=+157.963629567 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 08:56:13 crc kubenswrapper[4985]: I0127 08:56:13.266817 4985 csr.go:261] certificate signing request csr-fdklv is approved, waiting to be issued Jan 27 08:56:13 crc kubenswrapper[4985]: I0127 08:56:13.273936 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jn7wk\" (UID: \"656a2bff-5cf1-426b-b879-a6c0cc1f4cb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-jn7wk" Jan 27 08:56:13 crc kubenswrapper[4985]: E0127 08:56:13.274493 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 08:56:13.774465998 +0000 UTC m=+158.065560839 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jn7wk" (UID: "656a2bff-5cf1-426b-b879-a6c0cc1f4cb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 08:56:13 crc kubenswrapper[4985]: I0127 08:56:13.276664 4985 csr.go:257] certificate signing request csr-fdklv is issued Jan 27 08:56:13 crc kubenswrapper[4985]: I0127 08:56:13.296316 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jjksr" Jan 27 08:56:13 crc kubenswrapper[4985]: I0127 08:56:13.374912 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 08:56:13 crc kubenswrapper[4985]: E0127 08:56:13.375347 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 08:56:13.87532164 +0000 UTC m=+158.166416481 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 08:56:13 crc kubenswrapper[4985]: I0127 08:56:13.477161 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jn7wk\" (UID: \"656a2bff-5cf1-426b-b879-a6c0cc1f4cb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-jn7wk" Jan 27 08:56:13 crc kubenswrapper[4985]: E0127 08:56:13.477701 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 08:56:13.977675614 +0000 UTC m=+158.268770635 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jn7wk" (UID: "656a2bff-5cf1-426b-b879-a6c0cc1f4cb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 08:56:13 crc kubenswrapper[4985]: I0127 08:56:13.532221 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-wg68v" Jan 27 08:56:13 crc kubenswrapper[4985]: I0127 08:56:13.541696 4985 patch_prober.go:28] interesting pod/router-default-5444994796-wg68v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 08:56:13 crc kubenswrapper[4985]: [-]has-synced failed: reason withheld Jan 27 08:56:13 crc kubenswrapper[4985]: [+]process-running ok Jan 27 08:56:13 crc kubenswrapper[4985]: healthz check failed Jan 27 08:56:13 crc kubenswrapper[4985]: I0127 08:56:13.541786 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wg68v" podUID="5ba029a9-6adf-4e07-91f7-f0d33ab0cb97" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 08:56:13 crc kubenswrapper[4985]: I0127 08:56:13.578330 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 08:56:13 crc kubenswrapper[4985]: E0127 08:56:13.578599 4985 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 08:56:14.078563878 +0000 UTC m=+158.369658719 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 08:56:13 crc kubenswrapper[4985]: I0127 08:56:13.578725 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jn7wk\" (UID: \"656a2bff-5cf1-426b-b879-a6c0cc1f4cb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-jn7wk" Jan 27 08:56:13 crc kubenswrapper[4985]: E0127 08:56:13.579146 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 08:56:14.079137373 +0000 UTC m=+158.370232214 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jn7wk" (UID: "656a2bff-5cf1-426b-b879-a6c0cc1f4cb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 08:56:13 crc kubenswrapper[4985]: I0127 08:56:13.679959 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 08:56:13 crc kubenswrapper[4985]: E0127 08:56:13.680158 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 08:56:14.180116129 +0000 UTC m=+158.471210970 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 08:56:13 crc kubenswrapper[4985]: I0127 08:56:13.680241 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jn7wk\" (UID: \"656a2bff-5cf1-426b-b879-a6c0cc1f4cb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-jn7wk" Jan 27 08:56:13 crc kubenswrapper[4985]: E0127 08:56:13.680701 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 08:56:14.180682135 +0000 UTC m=+158.471776976 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jn7wk" (UID: "656a2bff-5cf1-426b-b879-a6c0cc1f4cb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 08:56:13 crc kubenswrapper[4985]: I0127 08:56:13.683013 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-89tzz" event={"ID":"aebec426-a442-4b90-ad31-46b5e14c0aa1","Type":"ContainerStarted","Data":"bfb7e292bf3535c6b34deebf7cbe4dcc6c9178753022e0fb3279a1f9510f93f3"} Jan 27 08:56:13 crc kubenswrapper[4985]: I0127 08:56:13.688645 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-vl84l" event={"ID":"6cf28995-1608-4130-9284-e3d638c4cf25","Type":"ContainerStarted","Data":"c80596c455eb1cb92d17cde5c7f9a32aa3fdef57a11410555b7fbd9663d81a19"} Jan 27 08:56:13 crc kubenswrapper[4985]: I0127 08:56:13.696453 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-slj2d" event={"ID":"66ad0e5a-d916-4781-9a97-84264b86ae79","Type":"ContainerStarted","Data":"f272a584f2839b722eb67039ba1e7e4b5e8bc49cd2b80e3a32da926b228887b2"} Jan 27 08:56:13 crc kubenswrapper[4985]: I0127 08:56:13.701430 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-49dbl" event={"ID":"2e510d6e-846d-4099-bc1d-d55a75969151","Type":"ContainerStarted","Data":"2d777e56029f03e68b22168dec85c02afb3f0a057a7b5048fcfc998734931a8b"} Jan 27 08:56:13 crc kubenswrapper[4985]: I0127 08:56:13.703604 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6jb4z" 
event={"ID":"5df5265a-d186-4cf0-8e03-e96b84f62a30","Type":"ContainerStarted","Data":"dd2c953cfead47983bdf4d136faa8817afd688cbc094b3f369c94716d61a423f"} Jan 27 08:56:13 crc kubenswrapper[4985]: I0127 08:56:13.705485 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j6mc9" event={"ID":"7d918baa-44fd-4067-8e83-5da61aedf201","Type":"ContainerStarted","Data":"94a13fbff149ae8923d17ea835d6740487627fc4452e6855cca602b520977cdb"} Jan 27 08:56:13 crc kubenswrapper[4985]: I0127 08:56:13.712212 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-vmpf5" event={"ID":"2f6334eb-afaa-4d43-a65e-cdbb598bd7cd","Type":"ContainerStarted","Data":"cd7e30f42bc1c2c17e25f45501284ac821303ec7dcb21051fe02fe0b40d6b56e"} Jan 27 08:56:13 crc kubenswrapper[4985]: I0127 08:56:13.712383 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-vmpf5" Jan 27 08:56:13 crc kubenswrapper[4985]: I0127 08:56:13.722160 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kfclz" event={"ID":"4a63b2ad-84b3-4476-b253-73410ba0fed1","Type":"ContainerStarted","Data":"c9c5720f5bada2a502176f8613790bb356e4f34de321eeeea37c030f74f16eb5"} Jan 27 08:56:13 crc kubenswrapper[4985]: I0127 08:56:13.725428 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-n5s9g" event={"ID":"6064471e-2f00-4499-b351-c1d205c81ba7","Type":"ContainerStarted","Data":"6d592b2a7f695c1751dd0bb1011212dbe1d6be6e60c1f7704f62519a98072d34"} Jan 27 08:56:13 crc kubenswrapper[4985]: I0127 08:56:13.726714 4985 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-cfmwq container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 
10.217.0.37:8080: connect: connection refused" start-of-body= Jan 27 08:56:13 crc kubenswrapper[4985]: I0127 08:56:13.726781 4985 patch_prober.go:28] interesting pod/downloads-7954f5f757-qx7rg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 27 08:56:13 crc kubenswrapper[4985]: I0127 08:56:13.726791 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-cfmwq" podUID="aac8abbf-f011-4386-89ed-afc8d4879670" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" Jan 27 08:56:13 crc kubenswrapper[4985]: I0127 08:56:13.726844 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-qx7rg" podUID="f7e5fb60-49e2-4aec-be4d-71f7f0dd4ea1" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 27 08:56:13 crc kubenswrapper[4985]: I0127 08:56:13.729300 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-89tzz" podStartSLOduration=131.729288201 podStartE2EDuration="2m11.729288201s" podCreationTimestamp="2026-01-27 08:54:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:56:13.727164652 +0000 UTC m=+158.018259503" watchObservedRunningTime="2026-01-27 08:56:13.729288201 +0000 UTC m=+158.020383042" Jan 27 08:56:13 crc kubenswrapper[4985]: I0127 08:56:13.736817 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lp7z4" Jan 27 08:56:13 crc 
kubenswrapper[4985]: I0127 08:56:13.752828 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c86qm" Jan 27 08:56:13 crc kubenswrapper[4985]: I0127 08:56:13.781798 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 08:56:13 crc kubenswrapper[4985]: E0127 08:56:13.783708 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 08:56:14.283672336 +0000 UTC m=+158.574767167 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 08:56:13 crc kubenswrapper[4985]: I0127 08:56:13.855943 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-vl84l" podStartSLOduration=131.855913756 podStartE2EDuration="2m11.855913756s" podCreationTimestamp="2026-01-27 08:54:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:56:13.847141104 +0000 UTC m=+158.138235945" watchObservedRunningTime="2026-01-27 08:56:13.855913756 +0000 UTC 
m=+158.147008607" Jan 27 08:56:13 crc kubenswrapper[4985]: I0127 08:56:13.892040 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jn7wk\" (UID: \"656a2bff-5cf1-426b-b879-a6c0cc1f4cb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-jn7wk" Jan 27 08:56:13 crc kubenswrapper[4985]: E0127 08:56:13.893494 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 08:56:14.393481066 +0000 UTC m=+158.684575907 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jn7wk" (UID: "656a2bff-5cf1-426b-b879-a6c0cc1f4cb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 08:56:13 crc kubenswrapper[4985]: I0127 08:56:13.935475 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6jb4z" podStartSLOduration=131.935447728 podStartE2EDuration="2m11.935447728s" podCreationTimestamp="2026-01-27 08:54:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:56:13.908889323 +0000 UTC m=+158.199984164" watchObservedRunningTime="2026-01-27 08:56:13.935447728 +0000 UTC m=+158.226542569" Jan 27 08:56:13 crc kubenswrapper[4985]: I0127 08:56:13.938133 4985 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-dns-operator/dns-operator-744455d44c-49dbl" podStartSLOduration=131.938119782 podStartE2EDuration="2m11.938119782s" podCreationTimestamp="2026-01-27 08:54:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:56:13.936925169 +0000 UTC m=+158.228020020" watchObservedRunningTime="2026-01-27 08:56:13.938119782 +0000 UTC m=+158.229214623" Jan 27 08:56:13 crc kubenswrapper[4985]: I0127 08:56:13.997977 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-vmpf5" podStartSLOduration=8.997957599 podStartE2EDuration="8.997957599s" podCreationTimestamp="2026-01-27 08:56:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:56:13.989910757 +0000 UTC m=+158.281005608" watchObservedRunningTime="2026-01-27 08:56:13.997957599 +0000 UTC m=+158.289052440" Jan 27 08:56:14 crc kubenswrapper[4985]: I0127 08:56:14.000384 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 08:56:14 crc kubenswrapper[4985]: E0127 08:56:14.000478 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 08:56:14.500466169 +0000 UTC m=+158.791561020 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 08:56:14 crc kubenswrapper[4985]: I0127 08:56:14.000722 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jn7wk\" (UID: \"656a2bff-5cf1-426b-b879-a6c0cc1f4cb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-jn7wk" Jan 27 08:56:14 crc kubenswrapper[4985]: E0127 08:56:14.001032 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 08:56:14.501026204 +0000 UTC m=+158.792121045 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jn7wk" (UID: "656a2bff-5cf1-426b-b879-a6c0cc1f4cb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 08:56:14 crc kubenswrapper[4985]: I0127 08:56:14.101977 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 08:56:14 crc kubenswrapper[4985]: E0127 08:56:14.102412 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 08:56:14.602395101 +0000 UTC m=+158.893489942 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 08:56:14 crc kubenswrapper[4985]: I0127 08:56:14.108012 4985 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-m2wsf container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused" start-of-body= Jan 27 08:56:14 crc kubenswrapper[4985]: I0127 08:56:14.108055 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m2wsf" podUID="0ce21e1c-0ee3-4e71-8b52-be876c32121d" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused" Jan 27 08:56:14 crc kubenswrapper[4985]: I0127 08:56:14.108336 4985 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-m2wsf container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused" start-of-body= Jan 27 08:56:14 crc kubenswrapper[4985]: I0127 08:56:14.108360 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m2wsf" podUID="0ce21e1c-0ee3-4e71-8b52-be876c32121d" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: 
connection refused" Jan 27 08:56:14 crc kubenswrapper[4985]: I0127 08:56:14.206431 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jn7wk\" (UID: \"656a2bff-5cf1-426b-b879-a6c0cc1f4cb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-jn7wk" Jan 27 08:56:14 crc kubenswrapper[4985]: E0127 08:56:14.206792 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 08:56:14.70677583 +0000 UTC m=+158.997870671 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jn7wk" (UID: "656a2bff-5cf1-426b-b879-a6c0cc1f4cb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 08:56:14 crc kubenswrapper[4985]: I0127 08:56:14.278765 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-27 08:51:13 +0000 UTC, rotation deadline is 2026-12-05 07:06:10.509324027 +0000 UTC Jan 27 08:56:14 crc kubenswrapper[4985]: I0127 08:56:14.278844 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7486h9m56.230483901s for next certificate rotation Jan 27 08:56:14 crc kubenswrapper[4985]: I0127 08:56:14.307954 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 08:56:14 crc kubenswrapper[4985]: E0127 08:56:14.308471 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 08:56:14.808446276 +0000 UTC m=+159.099541117 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 08:56:14 crc kubenswrapper[4985]: I0127 08:56:14.409614 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jn7wk\" (UID: \"656a2bff-5cf1-426b-b879-a6c0cc1f4cb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-jn7wk" Jan 27 08:56:14 crc kubenswrapper[4985]: E0127 08:56:14.410116 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 08:56:14.91010111 +0000 UTC m=+159.201195951 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jn7wk" (UID: "656a2bff-5cf1-426b-b879-a6c0cc1f4cb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 08:56:14 crc kubenswrapper[4985]: I0127 08:56:14.510616 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 08:56:14 crc kubenswrapper[4985]: E0127 08:56:14.510741 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 08:56:15.010722066 +0000 UTC m=+159.301816907 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 08:56:14 crc kubenswrapper[4985]: I0127 08:56:14.511054 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jn7wk\" (UID: \"656a2bff-5cf1-426b-b879-a6c0cc1f4cb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-jn7wk"
Jan 27 08:56:14 crc kubenswrapper[4985]: E0127 08:56:14.511412 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 08:56:15.011404705 +0000 UTC m=+159.302499546 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jn7wk" (UID: "656a2bff-5cf1-426b-b879-a6c0cc1f4cb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 08:56:14 crc kubenswrapper[4985]: I0127 08:56:14.538062 4985 patch_prober.go:28] interesting pod/router-default-5444994796-wg68v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 27 08:56:14 crc kubenswrapper[4985]: [-]has-synced failed: reason withheld
Jan 27 08:56:14 crc kubenswrapper[4985]: [+]process-running ok
Jan 27 08:56:14 crc kubenswrapper[4985]: healthz check failed
Jan 27 08:56:14 crc kubenswrapper[4985]: I0127 08:56:14.538137 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wg68v" podUID="5ba029a9-6adf-4e07-91f7-f0d33ab0cb97" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 27 08:56:14 crc kubenswrapper[4985]: I0127 08:56:14.612651 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 08:56:14 crc kubenswrapper[4985]: E0127 08:56:14.612889 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed.
No retries permitted until 2026-01-27 08:56:15.112850053 +0000 UTC m=+159.403944894 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 08:56:14 crc kubenswrapper[4985]: I0127 08:56:14.613392 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jn7wk\" (UID: \"656a2bff-5cf1-426b-b879-a6c0cc1f4cb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-jn7wk"
Jan 27 08:56:14 crc kubenswrapper[4985]: E0127 08:56:14.613784 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 08:56:15.113776499 +0000 UTC m=+159.404871330 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jn7wk" (UID: "656a2bff-5cf1-426b-b879-a6c0cc1f4cb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 08:56:14 crc kubenswrapper[4985]: I0127 08:56:14.715133 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 08:56:14 crc kubenswrapper[4985]: E0127 08:56:14.715620 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 08:56:15.215601528 +0000 UTC m=+159.506696369 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 08:56:14 crc kubenswrapper[4985]: I0127 08:56:14.728619 4985 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-qcnmg container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 08:56:14 crc kubenswrapper[4985]: I0127 08:56:14.728700 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qcnmg" podUID="30c225f3-39d6-41a3-b650-d5595fdd9ed1" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 08:56:14 crc kubenswrapper[4985]: I0127 08:56:14.742611 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-slj2d" event={"ID":"66ad0e5a-d916-4781-9a97-84264b86ae79","Type":"ContainerStarted","Data":"d355a148c9f5a5f750f349bead02335c38565347555678d73f08b7e03ab24478"}
Jan 27 08:56:14 crc kubenswrapper[4985]: I0127 08:56:14.748197 4985 generic.go:334] "Generic (PLEG): container finished" podID="a0036b0c-985c-4832-a8a8-0a18b5cc3a52" containerID="354b410a86f88c8585bd149eab4ae0c378499ebcdc0845d0f2a588ba9beaca34" exitCode=0
Jan 27 08:56:14 crc kubenswrapper[4985]: I0127 08:56:14.749874 4985 patch_prober.go:28] interesting
pod/marketplace-operator-79b997595-cfmwq container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body=
Jan 27 08:56:14 crc kubenswrapper[4985]: I0127 08:56:14.749930 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-cfmwq" podUID="aac8abbf-f011-4386-89ed-afc8d4879670" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused"
Jan 27 08:56:14 crc kubenswrapper[4985]: I0127 08:56:14.749926 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491725-mxclz" event={"ID":"a0036b0c-985c-4832-a8a8-0a18b5cc3a52","Type":"ContainerDied","Data":"354b410a86f88c8585bd149eab4ae0c378499ebcdc0845d0f2a588ba9beaca34"}
Jan 27 08:56:14 crc kubenswrapper[4985]: I0127 08:56:14.750046 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-n5s9g"
Jan 27 08:56:14 crc kubenswrapper[4985]: I0127 08:56:14.816430 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jn7wk\" (UID: \"656a2bff-5cf1-426b-b879-a6c0cc1f4cb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-jn7wk"
Jan 27 08:56:14 crc kubenswrapper[4985]: E0127 08:56:14.817041 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 08:56:15.317019566 +0000 UTC m=+159.608114407 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jn7wk" (UID: "656a2bff-5cf1-426b-b879-a6c0cc1f4cb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 08:56:14 crc kubenswrapper[4985]: I0127 08:56:14.820533 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-n5s9g" podStartSLOduration=132.820488552 podStartE2EDuration="2m12.820488552s" podCreationTimestamp="2026-01-27 08:54:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:56:14.210239337 +0000 UTC m=+158.501334188" watchObservedRunningTime="2026-01-27 08:56:14.820488552 +0000 UTC m=+159.111583393"
Jan 27 08:56:14 crc kubenswrapper[4985]: I0127 08:56:14.922342 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 08:56:14 crc kubenswrapper[4985]: E0127 08:56:14.922953 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 08:56:15.422929448 +0000 UTC m=+159.714024289 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 08:56:15 crc kubenswrapper[4985]: I0127 08:56:15.024345 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jn7wk\" (UID: \"656a2bff-5cf1-426b-b879-a6c0cc1f4cb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-jn7wk"
Jan 27 08:56:15 crc kubenswrapper[4985]: E0127 08:56:15.024939 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 08:56:15.524917432 +0000 UTC m=+159.816012273 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jn7wk" (UID: "656a2bff-5cf1-426b-b879-a6c0cc1f4cb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 08:56:15 crc kubenswrapper[4985]: I0127 08:56:15.126769 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 08:56:15 crc kubenswrapper[4985]: E0127 08:56:15.127057 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 08:56:15.627016429 +0000 UTC m=+159.918111280 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 08:56:15 crc kubenswrapper[4985]: I0127 08:56:15.127393 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jn7wk\" (UID: \"656a2bff-5cf1-426b-b879-a6c0cc1f4cb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-jn7wk"
Jan 27 08:56:15 crc kubenswrapper[4985]: E0127 08:56:15.127855 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 08:56:15.627843021 +0000 UTC m=+159.918937862 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jn7wk" (UID: "656a2bff-5cf1-426b-b879-a6c0cc1f4cb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 08:56:15 crc kubenswrapper[4985]: I0127 08:56:15.228611 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 08:56:15 crc kubenswrapper[4985]: E0127 08:56:15.229007 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 08:56:15.728986902 +0000 UTC m=+160.020081743 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 08:56:15 crc kubenswrapper[4985]: I0127 08:56:15.229098 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jn7wk\" (UID: \"656a2bff-5cf1-426b-b879-a6c0cc1f4cb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-jn7wk"
Jan 27 08:56:15 crc kubenswrapper[4985]: E0127 08:56:15.229421 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 08:56:15.729414083 +0000 UTC m=+160.020508924 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jn7wk" (UID: "656a2bff-5cf1-426b-b879-a6c0cc1f4cb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 08:56:15 crc kubenswrapper[4985]: I0127 08:56:15.329995 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 08:56:15 crc kubenswrapper[4985]: E0127 08:56:15.330192 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 08:56:15.830150482 +0000 UTC m=+160.121245313 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 08:56:15 crc kubenswrapper[4985]: I0127 08:56:15.330267 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jn7wk\" (UID: \"656a2bff-5cf1-426b-b879-a6c0cc1f4cb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-jn7wk"
Jan 27 08:56:15 crc kubenswrapper[4985]: E0127 08:56:15.330645 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 08:56:15.830628645 +0000 UTC m=+160.121723486 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jn7wk" (UID: "656a2bff-5cf1-426b-b879-a6c0cc1f4cb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 08:56:15 crc kubenswrapper[4985]: I0127 08:56:15.411525 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-q7trj"]
Jan 27 08:56:15 crc kubenswrapper[4985]: I0127 08:56:15.412668 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q7trj"
Jan 27 08:56:15 crc kubenswrapper[4985]: I0127 08:56:15.417177 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 27 08:56:15 crc kubenswrapper[4985]: I0127 08:56:15.426892 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-q7trj"]
Jan 27 08:56:15 crc kubenswrapper[4985]: I0127 08:56:15.431610 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 08:56:15 crc kubenswrapper[4985]: E0127 08:56:15.431937 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 08:56:15.931919531 +0000 UTC m=+160.223014372 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 08:56:15 crc kubenswrapper[4985]: I0127 08:56:15.533644 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da9958bf-bf1b-4894-96a8-18b5b9fa3d46-catalog-content\") pod \"certified-operators-q7trj\" (UID: \"da9958bf-bf1b-4894-96a8-18b5b9fa3d46\") " pod="openshift-marketplace/certified-operators-q7trj"
Jan 27 08:56:15 crc kubenswrapper[4985]: I0127 08:56:15.533711 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jn7wk\" (UID: \"656a2bff-5cf1-426b-b879-a6c0cc1f4cb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-jn7wk"
Jan 27 08:56:15 crc kubenswrapper[4985]: I0127 08:56:15.533744 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8t8r5\" (UniqueName: \"kubernetes.io/projected/da9958bf-bf1b-4894-96a8-18b5b9fa3d46-kube-api-access-8t8r5\") pod \"certified-operators-q7trj\" (UID: \"da9958bf-bf1b-4894-96a8-18b5b9fa3d46\") " pod="openshift-marketplace/certified-operators-q7trj"
Jan 27 08:56:15 crc kubenswrapper[4985]: I0127 08:56:15.533769 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da9958bf-bf1b-4894-96a8-18b5b9fa3d46-utilities\") pod \"certified-operators-q7trj\" (UID: \"da9958bf-bf1b-4894-96a8-18b5b9fa3d46\") " pod="openshift-marketplace/certified-operators-q7trj"
Jan 27 08:56:15 crc kubenswrapper[4985]: E0127 08:56:15.534288 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 08:56:16.034267404 +0000 UTC m=+160.325362245 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jn7wk" (UID: "656a2bff-5cf1-426b-b879-a6c0cc1f4cb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 08:56:15 crc kubenswrapper[4985]: I0127 08:56:15.544704 4985 patch_prober.go:28] interesting pod/router-default-5444994796-wg68v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 27 08:56:15 crc kubenswrapper[4985]: [-]has-synced failed: reason withheld
Jan 27 08:56:15 crc kubenswrapper[4985]: [+]process-running ok
Jan 27 08:56:15 crc kubenswrapper[4985]: healthz check failed
Jan 27 08:56:15 crc kubenswrapper[4985]: I0127 08:56:15.544796 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wg68v" podUID="5ba029a9-6adf-4e07-91f7-f0d33ab0cb97" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 27 08:56:15 crc kubenswrapper[4985]: I0127 08:56:15.560328 4985 plugin_watcher.go:194] "Adding socket path or updating timestamp to
desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Jan 27 08:56:15 crc kubenswrapper[4985]: I0127 08:56:15.615723 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-f2gdx"]
Jan 27 08:56:15 crc kubenswrapper[4985]: I0127 08:56:15.617087 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-f2gdx"
Jan 27 08:56:15 crc kubenswrapper[4985]: I0127 08:56:15.620736 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 27 08:56:15 crc kubenswrapper[4985]: I0127 08:56:15.635026 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 08:56:15 crc kubenswrapper[4985]: I0127 08:56:15.635546 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da9958bf-bf1b-4894-96a8-18b5b9fa3d46-utilities\") pod \"certified-operators-q7trj\" (UID: \"da9958bf-bf1b-4894-96a8-18b5b9fa3d46\") " pod="openshift-marketplace/certified-operators-q7trj"
Jan 27 08:56:15 crc kubenswrapper[4985]: I0127 08:56:15.635678 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da9958bf-bf1b-4894-96a8-18b5b9fa3d46-catalog-content\") pod \"certified-operators-q7trj\" (UID: \"da9958bf-bf1b-4894-96a8-18b5b9fa3d46\") " pod="openshift-marketplace/certified-operators-q7trj"
Jan 27 08:56:15 crc kubenswrapper[4985]: I0127 08:56:15.635725 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8t8r5\" (UniqueName: \"kubernetes.io/projected/da9958bf-bf1b-4894-96a8-18b5b9fa3d46-kube-api-access-8t8r5\") pod \"certified-operators-q7trj\" (UID: \"da9958bf-bf1b-4894-96a8-18b5b9fa3d46\") " pod="openshift-marketplace/certified-operators-q7trj"
Jan 27 08:56:15 crc kubenswrapper[4985]: E0127 08:56:15.636087 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 08:56:16.136028541 +0000 UTC m=+160.427123372 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 08:56:15 crc kubenswrapper[4985]: I0127 08:56:15.636295 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da9958bf-bf1b-4894-96a8-18b5b9fa3d46-utilities\") pod \"certified-operators-q7trj\" (UID: \"da9958bf-bf1b-4894-96a8-18b5b9fa3d46\") " pod="openshift-marketplace/certified-operators-q7trj"
Jan 27 08:56:15 crc kubenswrapper[4985]: I0127 08:56:15.636729 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da9958bf-bf1b-4894-96a8-18b5b9fa3d46-catalog-content\") pod \"certified-operators-q7trj\" (UID: \"da9958bf-bf1b-4894-96a8-18b5b9fa3d46\") " pod="openshift-marketplace/certified-operators-q7trj"
Jan 27 08:56:15 crc kubenswrapper[4985]: I0127 08:56:15.647560 4985 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-27T08:56:15.560365786Z","Handler":null,"Name":""}
Jan 27 08:56:15 crc kubenswrapper[4985]: I0127 08:56:15.706978 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-f2gdx"]
Jan 27 08:56:15 crc kubenswrapper[4985]: I0127 08:56:15.710047 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8t8r5\" (UniqueName: \"kubernetes.io/projected/da9958bf-bf1b-4894-96a8-18b5b9fa3d46-kube-api-access-8t8r5\") pod \"certified-operators-q7trj\" (UID: \"da9958bf-bf1b-4894-96a8-18b5b9fa3d46\") " pod="openshift-marketplace/certified-operators-q7trj"
Jan 27 08:56:15 crc kubenswrapper[4985]: I0127 08:56:15.717904 4985 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Jan 27 08:56:15 crc kubenswrapper[4985]: I0127 08:56:15.717962 4985 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Jan 27 08:56:15 crc kubenswrapper[4985]: I0127 08:56:15.732860 4985 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/certified-operators-q7trj"
Jan 27 08:56:15 crc kubenswrapper[4985]: I0127 08:56:15.743555 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jn7wk\" (UID: \"656a2bff-5cf1-426b-b879-a6c0cc1f4cb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-jn7wk"
Jan 27 08:56:15 crc kubenswrapper[4985]: I0127 08:56:15.743641 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e143ff56-0606-4500-bac1-21d0d3f607ee-utilities\") pod \"community-operators-f2gdx\" (UID: \"e143ff56-0606-4500-bac1-21d0d3f607ee\") " pod="openshift-marketplace/community-operators-f2gdx"
Jan 27 08:56:15 crc kubenswrapper[4985]: I0127 08:56:15.743662 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e143ff56-0606-4500-bac1-21d0d3f607ee-catalog-content\") pod \"community-operators-f2gdx\" (UID: \"e143ff56-0606-4500-bac1-21d0d3f607ee\") " pod="openshift-marketplace/community-operators-f2gdx"
Jan 27 08:56:15 crc kubenswrapper[4985]: I0127 08:56:15.743688 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8m66\" (UniqueName: \"kubernetes.io/projected/e143ff56-0606-4500-bac1-21d0d3f607ee-kube-api-access-c8m66\") pod \"community-operators-f2gdx\" (UID: \"e143ff56-0606-4500-bac1-21d0d3f607ee\") " pod="openshift-marketplace/community-operators-f2gdx"
Jan 27 08:56:15 crc kubenswrapper[4985]: I0127 08:56:15.780708 4985 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 27 08:56:15 crc kubenswrapper[4985]: I0127 08:56:15.780758 4985 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jn7wk\" (UID: \"656a2bff-5cf1-426b-b879-a6c0cc1f4cb2\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-jn7wk"
Jan 27 08:56:15 crc kubenswrapper[4985]: I0127 08:56:15.813108 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-q87vc"]
Jan 27 08:56:15 crc kubenswrapper[4985]: I0127 08:56:15.814187 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q87vc"
Jan 27 08:56:15 crc kubenswrapper[4985]: I0127 08:56:15.821524 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-slj2d" event={"ID":"66ad0e5a-d916-4781-9a97-84264b86ae79","Type":"ContainerStarted","Data":"244da71717b9f67c89332a4004636b8ece0aec593827a4066461e0b217833af2"}
Jan 27 08:56:15 crc kubenswrapper[4985]: I0127 08:56:15.847885 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e143ff56-0606-4500-bac1-21d0d3f607ee-utilities\") pod \"community-operators-f2gdx\" (UID: \"e143ff56-0606-4500-bac1-21d0d3f607ee\") " pod="openshift-marketplace/community-operators-f2gdx"
Jan 27 08:56:15 crc kubenswrapper[4985]: I0127 08:56:15.847933 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e143ff56-0606-4500-bac1-21d0d3f607ee-catalog-content\") pod \"community-operators-f2gdx\" (UID: \"e143ff56-0606-4500-bac1-21d0d3f607ee\") " pod="openshift-marketplace/community-operators-f2gdx"
Jan 27 08:56:15 crc kubenswrapper[4985]: I0127 08:56:15.847966 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8m66\" (UniqueName: \"kubernetes.io/projected/e143ff56-0606-4500-bac1-21d0d3f607ee-kube-api-access-c8m66\") pod \"community-operators-f2gdx\" (UID: \"e143ff56-0606-4500-bac1-21d0d3f607ee\") " pod="openshift-marketplace/community-operators-f2gdx"
Jan 27 08:56:15 crc kubenswrapper[4985]: I0127 08:56:15.849261 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e143ff56-0606-4500-bac1-21d0d3f607ee-utilities\") pod \"community-operators-f2gdx\" (UID: \"e143ff56-0606-4500-bac1-21d0d3f607ee\") " pod="openshift-marketplace/community-operators-f2gdx"
Jan 27 08:56:15 crc kubenswrapper[4985]: I0127 08:56:15.850202 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e143ff56-0606-4500-bac1-21d0d3f607ee-catalog-content\") pod \"community-operators-f2gdx\" (UID: \"e143ff56-0606-4500-bac1-21d0d3f607ee\") " pod="openshift-marketplace/community-operators-f2gdx"
Jan 27 08:56:15 crc kubenswrapper[4985]: I0127 08:56:15.852128 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-q87vc"]
Jan 27 08:56:15 crc kubenswrapper[4985]: I0127 08:56:15.905622 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8m66\" (UniqueName: \"kubernetes.io/projected/e143ff56-0606-4500-bac1-21d0d3f607ee-kube-api-access-c8m66\") pod \"community-operators-f2gdx\" (UID: \"e143ff56-0606-4500-bac1-21d0d3f607ee\") " pod="openshift-marketplace/community-operators-f2gdx"
Jan 27 08:56:15 crc kubenswrapper[4985]: I0127 08:56:15.944547 4985 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/community-operators-f2gdx" Jan 27 08:56:15 crc kubenswrapper[4985]: I0127 08:56:15.952624 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd8d30fe-7369-4ea0-830d-b8fffca6bd10-utilities\") pod \"certified-operators-q87vc\" (UID: \"bd8d30fe-7369-4ea0-830d-b8fffca6bd10\") " pod="openshift-marketplace/certified-operators-q87vc" Jan 27 08:56:15 crc kubenswrapper[4985]: I0127 08:56:15.952688 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhhhg\" (UniqueName: \"kubernetes.io/projected/bd8d30fe-7369-4ea0-830d-b8fffca6bd10-kube-api-access-fhhhg\") pod \"certified-operators-q87vc\" (UID: \"bd8d30fe-7369-4ea0-830d-b8fffca6bd10\") " pod="openshift-marketplace/certified-operators-q87vc" Jan 27 08:56:15 crc kubenswrapper[4985]: I0127 08:56:15.952731 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd8d30fe-7369-4ea0-830d-b8fffca6bd10-catalog-content\") pod \"certified-operators-q87vc\" (UID: \"bd8d30fe-7369-4ea0-830d-b8fffca6bd10\") " pod="openshift-marketplace/certified-operators-q87vc" Jan 27 08:56:16 crc kubenswrapper[4985]: I0127 08:56:16.019190 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-m49st"] Jan 27 08:56:16 crc kubenswrapper[4985]: I0127 08:56:16.020230 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-m49st" Jan 27 08:56:16 crc kubenswrapper[4985]: I0127 08:56:16.038679 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m49st"] Jan 27 08:56:16 crc kubenswrapper[4985]: I0127 08:56:16.053712 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd8d30fe-7369-4ea0-830d-b8fffca6bd10-utilities\") pod \"certified-operators-q87vc\" (UID: \"bd8d30fe-7369-4ea0-830d-b8fffca6bd10\") " pod="openshift-marketplace/certified-operators-q87vc" Jan 27 08:56:16 crc kubenswrapper[4985]: I0127 08:56:16.053756 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhhhg\" (UniqueName: \"kubernetes.io/projected/bd8d30fe-7369-4ea0-830d-b8fffca6bd10-kube-api-access-fhhhg\") pod \"certified-operators-q87vc\" (UID: \"bd8d30fe-7369-4ea0-830d-b8fffca6bd10\") " pod="openshift-marketplace/certified-operators-q87vc" Jan 27 08:56:16 crc kubenswrapper[4985]: I0127 08:56:16.053781 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd8d30fe-7369-4ea0-830d-b8fffca6bd10-catalog-content\") pod \"certified-operators-q87vc\" (UID: \"bd8d30fe-7369-4ea0-830d-b8fffca6bd10\") " pod="openshift-marketplace/certified-operators-q87vc" Jan 27 08:56:16 crc kubenswrapper[4985]: I0127 08:56:16.054289 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd8d30fe-7369-4ea0-830d-b8fffca6bd10-catalog-content\") pod \"certified-operators-q87vc\" (UID: \"bd8d30fe-7369-4ea0-830d-b8fffca6bd10\") " pod="openshift-marketplace/certified-operators-q87vc" Jan 27 08:56:16 crc kubenswrapper[4985]: I0127 08:56:16.060884 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/bd8d30fe-7369-4ea0-830d-b8fffca6bd10-utilities\") pod \"certified-operators-q87vc\" (UID: \"bd8d30fe-7369-4ea0-830d-b8fffca6bd10\") " pod="openshift-marketplace/certified-operators-q87vc" Jan 27 08:56:16 crc kubenswrapper[4985]: I0127 08:56:16.112784 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhhhg\" (UniqueName: \"kubernetes.io/projected/bd8d30fe-7369-4ea0-830d-b8fffca6bd10-kube-api-access-fhhhg\") pod \"certified-operators-q87vc\" (UID: \"bd8d30fe-7369-4ea0-830d-b8fffca6bd10\") " pod="openshift-marketplace/certified-operators-q87vc" Jan 27 08:56:16 crc kubenswrapper[4985]: I0127 08:56:16.160578 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjmrj\" (UniqueName: \"kubernetes.io/projected/6b2d7f94-92b7-4593-8496-31db09afdf39-kube-api-access-gjmrj\") pod \"community-operators-m49st\" (UID: \"6b2d7f94-92b7-4593-8496-31db09afdf39\") " pod="openshift-marketplace/community-operators-m49st" Jan 27 08:56:16 crc kubenswrapper[4985]: I0127 08:56:16.160618 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b2d7f94-92b7-4593-8496-31db09afdf39-utilities\") pod \"community-operators-m49st\" (UID: \"6b2d7f94-92b7-4593-8496-31db09afdf39\") " pod="openshift-marketplace/community-operators-m49st" Jan 27 08:56:16 crc kubenswrapper[4985]: I0127 08:56:16.160638 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-q87vc" Jan 27 08:56:16 crc kubenswrapper[4985]: I0127 08:56:16.160746 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b2d7f94-92b7-4593-8496-31db09afdf39-catalog-content\") pod \"community-operators-m49st\" (UID: \"6b2d7f94-92b7-4593-8496-31db09afdf39\") " pod="openshift-marketplace/community-operators-m49st" Jan 27 08:56:16 crc kubenswrapper[4985]: I0127 08:56:16.224423 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jn7wk\" (UID: \"656a2bff-5cf1-426b-b879-a6c0cc1f4cb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-jn7wk" Jan 27 08:56:16 crc kubenswrapper[4985]: I0127 08:56:16.264201 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjmrj\" (UniqueName: \"kubernetes.io/projected/6b2d7f94-92b7-4593-8496-31db09afdf39-kube-api-access-gjmrj\") pod \"community-operators-m49st\" (UID: \"6b2d7f94-92b7-4593-8496-31db09afdf39\") " pod="openshift-marketplace/community-operators-m49st" Jan 27 08:56:16 crc kubenswrapper[4985]: I0127 08:56:16.264251 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b2d7f94-92b7-4593-8496-31db09afdf39-utilities\") pod \"community-operators-m49st\" (UID: \"6b2d7f94-92b7-4593-8496-31db09afdf39\") " pod="openshift-marketplace/community-operators-m49st" Jan 27 08:56:16 crc kubenswrapper[4985]: I0127 08:56:16.264278 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b2d7f94-92b7-4593-8496-31db09afdf39-catalog-content\") pod 
\"community-operators-m49st\" (UID: \"6b2d7f94-92b7-4593-8496-31db09afdf39\") " pod="openshift-marketplace/community-operators-m49st" Jan 27 08:56:16 crc kubenswrapper[4985]: I0127 08:56:16.265220 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b2d7f94-92b7-4593-8496-31db09afdf39-catalog-content\") pod \"community-operators-m49st\" (UID: \"6b2d7f94-92b7-4593-8496-31db09afdf39\") " pod="openshift-marketplace/community-operators-m49st" Jan 27 08:56:16 crc kubenswrapper[4985]: I0127 08:56:16.265441 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b2d7f94-92b7-4593-8496-31db09afdf39-utilities\") pod \"community-operators-m49st\" (UID: \"6b2d7f94-92b7-4593-8496-31db09afdf39\") " pod="openshift-marketplace/community-operators-m49st" Jan 27 08:56:16 crc kubenswrapper[4985]: I0127 08:56:16.320659 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjmrj\" (UniqueName: \"kubernetes.io/projected/6b2d7f94-92b7-4593-8496-31db09afdf39-kube-api-access-gjmrj\") pod \"community-operators-m49st\" (UID: \"6b2d7f94-92b7-4593-8496-31db09afdf39\") " pod="openshift-marketplace/community-operators-m49st" Jan 27 08:56:16 crc kubenswrapper[4985]: I0127 08:56:16.354423 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-m49st" Jan 27 08:56:16 crc kubenswrapper[4985]: I0127 08:56:16.376256 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 08:56:16 crc kubenswrapper[4985]: I0127 08:56:16.503044 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-f2gdx"] Jan 27 08:56:16 crc kubenswrapper[4985]: I0127 08:56:16.547455 4985 patch_prober.go:28] interesting pod/router-default-5444994796-wg68v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 08:56:16 crc kubenswrapper[4985]: [-]has-synced failed: reason withheld Jan 27 08:56:16 crc kubenswrapper[4985]: [+]process-running ok Jan 27 08:56:16 crc kubenswrapper[4985]: healthz check failed Jan 27 08:56:16 crc kubenswrapper[4985]: I0127 08:56:16.547554 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wg68v" podUID="5ba029a9-6adf-4e07-91f7-f0d33ab0cb97" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 08:56:16 crc kubenswrapper[4985]: I0127 08:56:16.559697 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-jn7wk" Jan 27 08:56:16 crc kubenswrapper[4985]: I0127 08:56:16.586764 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491725-mxclz" Jan 27 08:56:16 crc kubenswrapper[4985]: I0127 08:56:16.697263 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a0036b0c-985c-4832-a8a8-0a18b5cc3a52-secret-volume\") pod \"a0036b0c-985c-4832-a8a8-0a18b5cc3a52\" (UID: \"a0036b0c-985c-4832-a8a8-0a18b5cc3a52\") " Jan 27 08:56:16 crc kubenswrapper[4985]: I0127 08:56:16.697589 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a0036b0c-985c-4832-a8a8-0a18b5cc3a52-config-volume\") pod \"a0036b0c-985c-4832-a8a8-0a18b5cc3a52\" (UID: \"a0036b0c-985c-4832-a8a8-0a18b5cc3a52\") " Jan 27 08:56:16 crc kubenswrapper[4985]: I0127 08:56:16.697675 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jq9xp\" (UniqueName: \"kubernetes.io/projected/a0036b0c-985c-4832-a8a8-0a18b5cc3a52-kube-api-access-jq9xp\") pod \"a0036b0c-985c-4832-a8a8-0a18b5cc3a52\" (UID: \"a0036b0c-985c-4832-a8a8-0a18b5cc3a52\") " Jan 27 08:56:16 crc kubenswrapper[4985]: I0127 08:56:16.703062 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0036b0c-985c-4832-a8a8-0a18b5cc3a52-config-volume" (OuterVolumeSpecName: "config-volume") pod "a0036b0c-985c-4832-a8a8-0a18b5cc3a52" (UID: "a0036b0c-985c-4832-a8a8-0a18b5cc3a52"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:56:16 crc kubenswrapper[4985]: I0127 08:56:16.710499 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0036b0c-985c-4832-a8a8-0a18b5cc3a52-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a0036b0c-985c-4832-a8a8-0a18b5cc3a52" (UID: "a0036b0c-985c-4832-a8a8-0a18b5cc3a52"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:56:16 crc kubenswrapper[4985]: I0127 08:56:16.734390 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0036b0c-985c-4832-a8a8-0a18b5cc3a52-kube-api-access-jq9xp" (OuterVolumeSpecName: "kube-api-access-jq9xp") pod "a0036b0c-985c-4832-a8a8-0a18b5cc3a52" (UID: "a0036b0c-985c-4832-a8a8-0a18b5cc3a52"). InnerVolumeSpecName "kube-api-access-jq9xp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:56:16 crc kubenswrapper[4985]: I0127 08:56:16.745951 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 27 08:56:16 crc kubenswrapper[4985]: I0127 08:56:16.798790 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-q7trj"] Jan 27 08:56:16 crc kubenswrapper[4985]: I0127 08:56:16.805062 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jq9xp\" (UniqueName: \"kubernetes.io/projected/a0036b0c-985c-4832-a8a8-0a18b5cc3a52-kube-api-access-jq9xp\") on node \"crc\" DevicePath \"\"" Jan 27 08:56:16 crc kubenswrapper[4985]: I0127 08:56:16.805106 4985 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a0036b0c-985c-4832-a8a8-0a18b5cc3a52-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 08:56:16 crc kubenswrapper[4985]: I0127 08:56:16.805116 4985 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a0036b0c-985c-4832-a8a8-0a18b5cc3a52-config-volume\") on node \"crc\" DevicePath 
\"\"" Jan 27 08:56:16 crc kubenswrapper[4985]: I0127 08:56:16.852747 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491725-mxclz" Jan 27 08:56:16 crc kubenswrapper[4985]: I0127 08:56:16.852723 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491725-mxclz" event={"ID":"a0036b0c-985c-4832-a8a8-0a18b5cc3a52","Type":"ContainerDied","Data":"f7098e3739d9d7cf1d1b6eb0de9221cbb719ec4b777756ddb9275952d058be8a"} Jan 27 08:56:16 crc kubenswrapper[4985]: I0127 08:56:16.853349 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f7098e3739d9d7cf1d1b6eb0de9221cbb719ec4b777756ddb9275952d058be8a" Jan 27 08:56:16 crc kubenswrapper[4985]: I0127 08:56:16.887419 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-slj2d" event={"ID":"66ad0e5a-d916-4781-9a97-84264b86ae79","Type":"ContainerStarted","Data":"ed480ebd46e7b9c49cd7a943b5dc17fe20ad1a2d499565184728782cc9727f5c"} Jan 27 08:56:16 crc kubenswrapper[4985]: I0127 08:56:16.912343 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f2gdx" event={"ID":"e143ff56-0606-4500-bac1-21d0d3f607ee","Type":"ContainerStarted","Data":"b0b875207b824a6ccf973c251b6a5c2a46f2c961190a9e2e58f672a48747ee52"} Jan 27 08:56:16 crc kubenswrapper[4985]: I0127 08:56:16.931379 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q7trj" event={"ID":"da9958bf-bf1b-4894-96a8-18b5b9fa3d46","Type":"ContainerStarted","Data":"594e1ec53817c94591ec6f9cd970228d75fe2d2364a8279e56a660b36bdb52b6"} Jan 27 08:56:16 crc kubenswrapper[4985]: I0127 08:56:16.933843 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-slj2d" podStartSLOduration=11.933812482 
podStartE2EDuration="11.933812482s" podCreationTimestamp="2026-01-27 08:56:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:56:16.930108979 +0000 UTC m=+161.221203820" watchObservedRunningTime="2026-01-27 08:56:16.933812482 +0000 UTC m=+161.224907323" Jan 27 08:56:16 crc kubenswrapper[4985]: I0127 08:56:16.989684 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-q87vc"] Jan 27 08:56:17 crc kubenswrapper[4985]: I0127 08:56:17.113099 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m2wsf" Jan 27 08:56:17 crc kubenswrapper[4985]: I0127 08:56:17.144201 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 27 08:56:17 crc kubenswrapper[4985]: E0127 08:56:17.144442 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0036b0c-985c-4832-a8a8-0a18b5cc3a52" containerName="collect-profiles" Jan 27 08:56:17 crc kubenswrapper[4985]: I0127 08:56:17.144462 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0036b0c-985c-4832-a8a8-0a18b5cc3a52" containerName="collect-profiles" Jan 27 08:56:17 crc kubenswrapper[4985]: I0127 08:56:17.150714 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0036b0c-985c-4832-a8a8-0a18b5cc3a52" containerName="collect-profiles" Jan 27 08:56:17 crc kubenswrapper[4985]: I0127 08:56:17.163021 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 08:56:17 crc kubenswrapper[4985]: I0127 08:56:17.170482 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 27 08:56:17 crc kubenswrapper[4985]: I0127 08:56:17.170699 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 27 08:56:17 crc kubenswrapper[4985]: I0127 08:56:17.174349 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 27 08:56:17 crc kubenswrapper[4985]: I0127 08:56:17.247804 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-jn7wk"] Jan 27 08:56:17 crc kubenswrapper[4985]: I0127 08:56:17.295405 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m49st"] Jan 27 08:56:17 crc kubenswrapper[4985]: W0127 08:56:17.306232 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6b2d7f94_92b7_4593_8496_31db09afdf39.slice/crio-633ceb15f583ecca9f85fb8454223b06ceeef28c3151a2672a60b99a2f9b2219 WatchSource:0}: Error finding container 633ceb15f583ecca9f85fb8454223b06ceeef28c3151a2672a60b99a2f9b2219: Status 404 returned error can't find the container with id 633ceb15f583ecca9f85fb8454223b06ceeef28c3151a2672a60b99a2f9b2219 Jan 27 08:56:17 crc kubenswrapper[4985]: I0127 08:56:17.315249 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/76fe1a18-7447-42d6-ae78-22060b8c517a-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"76fe1a18-7447-42d6-ae78-22060b8c517a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 08:56:17 crc kubenswrapper[4985]: I0127 08:56:17.315332 4985 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/76fe1a18-7447-42d6-ae78-22060b8c517a-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"76fe1a18-7447-42d6-ae78-22060b8c517a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 08:56:17 crc kubenswrapper[4985]: I0127 08:56:17.416873 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/76fe1a18-7447-42d6-ae78-22060b8c517a-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"76fe1a18-7447-42d6-ae78-22060b8c517a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 08:56:17 crc kubenswrapper[4985]: I0127 08:56:17.416947 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/76fe1a18-7447-42d6-ae78-22060b8c517a-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"76fe1a18-7447-42d6-ae78-22060b8c517a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 08:56:17 crc kubenswrapper[4985]: I0127 08:56:17.417308 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/76fe1a18-7447-42d6-ae78-22060b8c517a-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"76fe1a18-7447-42d6-ae78-22060b8c517a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 08:56:17 crc kubenswrapper[4985]: I0127 08:56:17.441801 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/76fe1a18-7447-42d6-ae78-22060b8c517a-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"76fe1a18-7447-42d6-ae78-22060b8c517a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 08:56:17 crc kubenswrapper[4985]: E0127 08:56:17.442279 4985 cadvisor_stats_provider.go:516] "Partial failure issuing 
cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd8d30fe_7369_4ea0_830d_b8fffca6bd10.slice/crio-conmon-60dd63881cb2a904c421f2986e908adfc37695e87668b2c5f41226f599ba717c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd8d30fe_7369_4ea0_830d_b8fffca6bd10.slice/crio-60dd63881cb2a904c421f2986e908adfc37695e87668b2c5f41226f599ba717c.scope\": RecentStats: unable to find data in memory cache]" Jan 27 08:56:17 crc kubenswrapper[4985]: I0127 08:56:17.537061 4985 patch_prober.go:28] interesting pod/router-default-5444994796-wg68v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 08:56:17 crc kubenswrapper[4985]: [-]has-synced failed: reason withheld Jan 27 08:56:17 crc kubenswrapper[4985]: [+]process-running ok Jan 27 08:56:17 crc kubenswrapper[4985]: healthz check failed Jan 27 08:56:17 crc kubenswrapper[4985]: I0127 08:56:17.537171 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wg68v" podUID="5ba029a9-6adf-4e07-91f7-f0d33ab0cb97" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 08:56:17 crc kubenswrapper[4985]: I0127 08:56:17.566628 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 08:56:17 crc kubenswrapper[4985]: I0127 08:56:17.587774 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lwlwv"] Jan 27 08:56:17 crc kubenswrapper[4985]: I0127 08:56:17.598177 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lwlwv" Jan 27 08:56:17 crc kubenswrapper[4985]: I0127 08:56:17.612240 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 27 08:56:17 crc kubenswrapper[4985]: I0127 08:56:17.636752 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lwlwv"] Jan 27 08:56:17 crc kubenswrapper[4985]: I0127 08:56:17.690069 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-vl84l" Jan 27 08:56:17 crc kubenswrapper[4985]: I0127 08:56:17.690105 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-vl84l" Jan 27 08:56:17 crc kubenswrapper[4985]: I0127 08:56:17.724768 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jprv\" (UniqueName: \"kubernetes.io/projected/c4ea35ca-a06c-40d2-86c2-d2c0a99da089-kube-api-access-4jprv\") pod \"redhat-marketplace-lwlwv\" (UID: \"c4ea35ca-a06c-40d2-86c2-d2c0a99da089\") " pod="openshift-marketplace/redhat-marketplace-lwlwv" Jan 27 08:56:17 crc kubenswrapper[4985]: I0127 08:56:17.724771 4985 patch_prober.go:28] interesting pod/apiserver-76f77b778f-vl84l container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 27 08:56:17 crc kubenswrapper[4985]: [+]log ok Jan 27 08:56:17 crc kubenswrapper[4985]: [+]etcd ok Jan 27 08:56:17 crc kubenswrapper[4985]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 27 08:56:17 crc kubenswrapper[4985]: [+]poststarthook/generic-apiserver-start-informers ok Jan 27 08:56:17 crc kubenswrapper[4985]: [+]poststarthook/max-in-flight-filter ok Jan 27 08:56:17 crc kubenswrapper[4985]: 
[+]poststarthook/storage-object-count-tracker-hook ok Jan 27 08:56:17 crc kubenswrapper[4985]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 27 08:56:17 crc kubenswrapper[4985]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 27 08:56:17 crc kubenswrapper[4985]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Jan 27 08:56:17 crc kubenswrapper[4985]: [+]poststarthook/project.openshift.io-projectcache ok Jan 27 08:56:17 crc kubenswrapper[4985]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 27 08:56:17 crc kubenswrapper[4985]: [+]poststarthook/openshift.io-startinformers ok Jan 27 08:56:17 crc kubenswrapper[4985]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 27 08:56:17 crc kubenswrapper[4985]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 27 08:56:17 crc kubenswrapper[4985]: livez check failed Jan 27 08:56:17 crc kubenswrapper[4985]: I0127 08:56:17.724865 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-vl84l" podUID="6cf28995-1608-4130-9284-e3d638c4cf25" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 08:56:17 crc kubenswrapper[4985]: I0127 08:56:17.724904 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4ea35ca-a06c-40d2-86c2-d2c0a99da089-utilities\") pod \"redhat-marketplace-lwlwv\" (UID: \"c4ea35ca-a06c-40d2-86c2-d2c0a99da089\") " pod="openshift-marketplace/redhat-marketplace-lwlwv" Jan 27 08:56:17 crc kubenswrapper[4985]: I0127 08:56:17.724998 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4ea35ca-a06c-40d2-86c2-d2c0a99da089-catalog-content\") pod \"redhat-marketplace-lwlwv\" (UID: 
\"c4ea35ca-a06c-40d2-86c2-d2c0a99da089\") " pod="openshift-marketplace/redhat-marketplace-lwlwv" Jan 27 08:56:17 crc kubenswrapper[4985]: I0127 08:56:17.757677 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 27 08:56:17 crc kubenswrapper[4985]: I0127 08:56:17.758406 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 08:56:17 crc kubenswrapper[4985]: I0127 08:56:17.761872 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 27 08:56:17 crc kubenswrapper[4985]: I0127 08:56:17.767501 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 27 08:56:17 crc kubenswrapper[4985]: I0127 08:56:17.778680 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-q7dv9" Jan 27 08:56:17 crc kubenswrapper[4985]: I0127 08:56:17.778736 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 27 08:56:17 crc kubenswrapper[4985]: I0127 08:56:17.778823 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-q7dv9" Jan 27 08:56:17 crc kubenswrapper[4985]: I0127 08:56:17.809187 4985 patch_prober.go:28] interesting pod/console-f9d7485db-q7dv9 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.24:8443/health\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Jan 27 08:56:17 crc kubenswrapper[4985]: I0127 08:56:17.809259 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-q7dv9" podUID="5bd4e7de-4244-4c33-90eb-799159106b7b" containerName="console" probeResult="failure" 
output="Get \"https://10.217.0.24:8443/health\": dial tcp 10.217.0.24:8443: connect: connection refused" Jan 27 08:56:17 crc kubenswrapper[4985]: I0127 08:56:17.809782 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-5q47j" Jan 27 08:56:17 crc kubenswrapper[4985]: I0127 08:56:17.826334 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jprv\" (UniqueName: \"kubernetes.io/projected/c4ea35ca-a06c-40d2-86c2-d2c0a99da089-kube-api-access-4jprv\") pod \"redhat-marketplace-lwlwv\" (UID: \"c4ea35ca-a06c-40d2-86c2-d2c0a99da089\") " pod="openshift-marketplace/redhat-marketplace-lwlwv" Jan 27 08:56:17 crc kubenswrapper[4985]: I0127 08:56:17.826415 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4ea35ca-a06c-40d2-86c2-d2c0a99da089-utilities\") pod \"redhat-marketplace-lwlwv\" (UID: \"c4ea35ca-a06c-40d2-86c2-d2c0a99da089\") " pod="openshift-marketplace/redhat-marketplace-lwlwv" Jan 27 08:56:17 crc kubenswrapper[4985]: I0127 08:56:17.826441 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4ea35ca-a06c-40d2-86c2-d2c0a99da089-catalog-content\") pod \"redhat-marketplace-lwlwv\" (UID: \"c4ea35ca-a06c-40d2-86c2-d2c0a99da089\") " pod="openshift-marketplace/redhat-marketplace-lwlwv" Jan 27 08:56:17 crc kubenswrapper[4985]: I0127 08:56:17.829463 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4ea35ca-a06c-40d2-86c2-d2c0a99da089-utilities\") pod \"redhat-marketplace-lwlwv\" (UID: \"c4ea35ca-a06c-40d2-86c2-d2c0a99da089\") " pod="openshift-marketplace/redhat-marketplace-lwlwv" Jan 27 08:56:17 crc kubenswrapper[4985]: I0127 08:56:17.829673 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4ea35ca-a06c-40d2-86c2-d2c0a99da089-catalog-content\") pod \"redhat-marketplace-lwlwv\" (UID: \"c4ea35ca-a06c-40d2-86c2-d2c0a99da089\") " pod="openshift-marketplace/redhat-marketplace-lwlwv" Jan 27 08:56:17 crc kubenswrapper[4985]: I0127 08:56:17.853148 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-x4cs4" Jan 27 08:56:17 crc kubenswrapper[4985]: I0127 08:56:17.885699 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jprv\" (UniqueName: \"kubernetes.io/projected/c4ea35ca-a06c-40d2-86c2-d2c0a99da089-kube-api-access-4jprv\") pod \"redhat-marketplace-lwlwv\" (UID: \"c4ea35ca-a06c-40d2-86c2-d2c0a99da089\") " pod="openshift-marketplace/redhat-marketplace-lwlwv" Jan 27 08:56:17 crc kubenswrapper[4985]: I0127 08:56:17.906779 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-t4tc7" Jan 27 08:56:17 crc kubenswrapper[4985]: I0127 08:56:17.933357 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/69a120d9-822e-4b48-be12-c181dc06f093-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"69a120d9-822e-4b48-be12-c181dc06f093\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 08:56:17 crc kubenswrapper[4985]: I0127 08:56:17.933433 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/69a120d9-822e-4b48-be12-c181dc06f093-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"69a120d9-822e-4b48-be12-c181dc06f093\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 08:56:17 crc kubenswrapper[4985]: I0127 08:56:17.951428 4985 generic.go:334] "Generic (PLEG): 
container finished" podID="da9958bf-bf1b-4894-96a8-18b5b9fa3d46" containerID="a1753b595c13c29f6018ca0a00e2494de26b49c3655a89644f2fd73e54d01e99" exitCode=0 Jan 27 08:56:17 crc kubenswrapper[4985]: I0127 08:56:17.952773 4985 generic.go:334] "Generic (PLEG): container finished" podID="6b2d7f94-92b7-4593-8496-31db09afdf39" containerID="79e7e80f963e8d582b5b4f72ddb03fcbc7d41781d7bd5a31f0c4ea0660074299" exitCode=0 Jan 27 08:56:17 crc kubenswrapper[4985]: I0127 08:56:17.953568 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q7trj" event={"ID":"da9958bf-bf1b-4894-96a8-18b5b9fa3d46","Type":"ContainerDied","Data":"a1753b595c13c29f6018ca0a00e2494de26b49c3655a89644f2fd73e54d01e99"} Jan 27 08:56:17 crc kubenswrapper[4985]: I0127 08:56:17.953654 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m49st" event={"ID":"6b2d7f94-92b7-4593-8496-31db09afdf39","Type":"ContainerDied","Data":"79e7e80f963e8d582b5b4f72ddb03fcbc7d41781d7bd5a31f0c4ea0660074299"} Jan 27 08:56:17 crc kubenswrapper[4985]: I0127 08:56:17.953676 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m49st" event={"ID":"6b2d7f94-92b7-4593-8496-31db09afdf39","Type":"ContainerStarted","Data":"633ceb15f583ecca9f85fb8454223b06ceeef28c3151a2672a60b99a2f9b2219"} Jan 27 08:56:17 crc kubenswrapper[4985]: I0127 08:56:17.955691 4985 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 08:56:17 crc kubenswrapper[4985]: I0127 08:56:17.976027 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-jn7wk" event={"ID":"656a2bff-5cf1-426b-b879-a6c0cc1f4cb2","Type":"ContainerStarted","Data":"ee9da00f862508f7b2cc7c3642c864ad4b9275cfd1d201dd2edfc7e753e56a3f"} Jan 27 08:56:17 crc kubenswrapper[4985]: I0127 08:56:17.976094 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-image-registry/image-registry-697d97f7c8-jn7wk" event={"ID":"656a2bff-5cf1-426b-b879-a6c0cc1f4cb2","Type":"ContainerStarted","Data":"147791a290959b12556a5be7e5879ed46b7491118051684eb251352edd9eb8a5"} Jan 27 08:56:17 crc kubenswrapper[4985]: I0127 08:56:17.976118 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-jn7wk" Jan 27 08:56:17 crc kubenswrapper[4985]: I0127 08:56:17.999315 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lwlwv" Jan 27 08:56:18 crc kubenswrapper[4985]: I0127 08:56:18.003631 4985 generic.go:334] "Generic (PLEG): container finished" podID="e143ff56-0606-4500-bac1-21d0d3f607ee" containerID="f8080f180d2debc4567066e11f07fa4963a09e2b847ac17ce7d09cffd6ef90f1" exitCode=0 Jan 27 08:56:18 crc kubenswrapper[4985]: I0127 08:56:18.003720 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f2gdx" event={"ID":"e143ff56-0606-4500-bac1-21d0d3f607ee","Type":"ContainerDied","Data":"f8080f180d2debc4567066e11f07fa4963a09e2b847ac17ce7d09cffd6ef90f1"} Jan 27 08:56:18 crc kubenswrapper[4985]: I0127 08:56:18.022415 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bh6j9"] Jan 27 08:56:18 crc kubenswrapper[4985]: I0127 08:56:18.023734 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bh6j9" Jan 27 08:56:18 crc kubenswrapper[4985]: I0127 08:56:18.037263 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/69a120d9-822e-4b48-be12-c181dc06f093-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"69a120d9-822e-4b48-be12-c181dc06f093\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 08:56:18 crc kubenswrapper[4985]: I0127 08:56:18.037324 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/69a120d9-822e-4b48-be12-c181dc06f093-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"69a120d9-822e-4b48-be12-c181dc06f093\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 08:56:18 crc kubenswrapper[4985]: I0127 08:56:18.040758 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/69a120d9-822e-4b48-be12-c181dc06f093-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"69a120d9-822e-4b48-be12-c181dc06f093\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 08:56:18 crc kubenswrapper[4985]: I0127 08:56:18.056651 4985 generic.go:334] "Generic (PLEG): container finished" podID="bd8d30fe-7369-4ea0-830d-b8fffca6bd10" containerID="60dd63881cb2a904c421f2986e908adfc37695e87668b2c5f41226f599ba717c" exitCode=0 Jan 27 08:56:18 crc kubenswrapper[4985]: I0127 08:56:18.056789 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q87vc" event={"ID":"bd8d30fe-7369-4ea0-830d-b8fffca6bd10","Type":"ContainerDied","Data":"60dd63881cb2a904c421f2986e908adfc37695e87668b2c5f41226f599ba717c"} Jan 27 08:56:18 crc kubenswrapper[4985]: I0127 08:56:18.056853 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-bh6j9"] Jan 27 08:56:18 crc kubenswrapper[4985]: I0127 08:56:18.056881 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q87vc" event={"ID":"bd8d30fe-7369-4ea0-830d-b8fffca6bd10","Type":"ContainerStarted","Data":"4e80fe23d80db08d2fb6f2031e99a2b71ad2247750ef16ea136e83d65cbb6d64"} Jan 27 08:56:18 crc kubenswrapper[4985]: I0127 08:56:18.092456 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-jn7wk" podStartSLOduration=136.092427229 podStartE2EDuration="2m16.092427229s" podCreationTimestamp="2026-01-27 08:54:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:56:18.083107612 +0000 UTC m=+162.374202453" watchObservedRunningTime="2026-01-27 08:56:18.092427229 +0000 UTC m=+162.383522070" Jan 27 08:56:18 crc kubenswrapper[4985]: I0127 08:56:18.115272 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6jb4z" Jan 27 08:56:18 crc kubenswrapper[4985]: I0127 08:56:18.115330 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6jb4z" Jan 27 08:56:18 crc kubenswrapper[4985]: I0127 08:56:18.124262 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/69a120d9-822e-4b48-be12-c181dc06f093-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"69a120d9-822e-4b48-be12-c181dc06f093\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 08:56:18 crc kubenswrapper[4985]: I0127 08:56:18.143300 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/fa17d66c-2d07-4ce5-bfc8-45bb31adf066-catalog-content\") pod \"redhat-marketplace-bh6j9\" (UID: \"fa17d66c-2d07-4ce5-bfc8-45bb31adf066\") " pod="openshift-marketplace/redhat-marketplace-bh6j9" Jan 27 08:56:18 crc kubenswrapper[4985]: I0127 08:56:18.143394 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa17d66c-2d07-4ce5-bfc8-45bb31adf066-utilities\") pod \"redhat-marketplace-bh6j9\" (UID: \"fa17d66c-2d07-4ce5-bfc8-45bb31adf066\") " pod="openshift-marketplace/redhat-marketplace-bh6j9" Jan 27 08:56:18 crc kubenswrapper[4985]: I0127 08:56:18.143477 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlchc\" (UniqueName: \"kubernetes.io/projected/fa17d66c-2d07-4ce5-bfc8-45bb31adf066-kube-api-access-zlchc\") pod \"redhat-marketplace-bh6j9\" (UID: \"fa17d66c-2d07-4ce5-bfc8-45bb31adf066\") " pod="openshift-marketplace/redhat-marketplace-bh6j9" Jan 27 08:56:18 crc kubenswrapper[4985]: I0127 08:56:18.153795 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6jb4z" Jan 27 08:56:18 crc kubenswrapper[4985]: I0127 08:56:18.252160 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa17d66c-2d07-4ce5-bfc8-45bb31adf066-utilities\") pod \"redhat-marketplace-bh6j9\" (UID: \"fa17d66c-2d07-4ce5-bfc8-45bb31adf066\") " pod="openshift-marketplace/redhat-marketplace-bh6j9" Jan 27 08:56:18 crc kubenswrapper[4985]: I0127 08:56:18.252224 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlchc\" (UniqueName: \"kubernetes.io/projected/fa17d66c-2d07-4ce5-bfc8-45bb31adf066-kube-api-access-zlchc\") pod \"redhat-marketplace-bh6j9\" (UID: \"fa17d66c-2d07-4ce5-bfc8-45bb31adf066\") " 
pod="openshift-marketplace/redhat-marketplace-bh6j9" Jan 27 08:56:18 crc kubenswrapper[4985]: I0127 08:56:18.252341 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa17d66c-2d07-4ce5-bfc8-45bb31adf066-catalog-content\") pod \"redhat-marketplace-bh6j9\" (UID: \"fa17d66c-2d07-4ce5-bfc8-45bb31adf066\") " pod="openshift-marketplace/redhat-marketplace-bh6j9" Jan 27 08:56:18 crc kubenswrapper[4985]: I0127 08:56:18.252896 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa17d66c-2d07-4ce5-bfc8-45bb31adf066-catalog-content\") pod \"redhat-marketplace-bh6j9\" (UID: \"fa17d66c-2d07-4ce5-bfc8-45bb31adf066\") " pod="openshift-marketplace/redhat-marketplace-bh6j9" Jan 27 08:56:18 crc kubenswrapper[4985]: I0127 08:56:18.253136 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa17d66c-2d07-4ce5-bfc8-45bb31adf066-utilities\") pod \"redhat-marketplace-bh6j9\" (UID: \"fa17d66c-2d07-4ce5-bfc8-45bb31adf066\") " pod="openshift-marketplace/redhat-marketplace-bh6j9" Jan 27 08:56:18 crc kubenswrapper[4985]: I0127 08:56:18.266784 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 27 08:56:18 crc kubenswrapper[4985]: I0127 08:56:18.286540 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlchc\" (UniqueName: \"kubernetes.io/projected/fa17d66c-2d07-4ce5-bfc8-45bb31adf066-kube-api-access-zlchc\") pod \"redhat-marketplace-bh6j9\" (UID: \"fa17d66c-2d07-4ce5-bfc8-45bb31adf066\") " pod="openshift-marketplace/redhat-marketplace-bh6j9" Jan 27 08:56:18 crc kubenswrapper[4985]: I0127 08:56:18.389569 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 08:56:18 crc kubenswrapper[4985]: I0127 08:56:18.399760 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bh6j9" Jan 27 08:56:18 crc kubenswrapper[4985]: I0127 08:56:18.406124 4985 patch_prober.go:28] interesting pod/downloads-7954f5f757-qx7rg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 27 08:56:18 crc kubenswrapper[4985]: I0127 08:56:18.406186 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-qx7rg" podUID="f7e5fb60-49e2-4aec-be4d-71f7f0dd4ea1" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 27 08:56:18 crc kubenswrapper[4985]: I0127 08:56:18.406192 4985 patch_prober.go:28] interesting pod/downloads-7954f5f757-qx7rg container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 27 08:56:18 crc kubenswrapper[4985]: I0127 08:56:18.406238 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-qx7rg" podUID="f7e5fb60-49e2-4aec-be4d-71f7f0dd4ea1" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 27 08:56:18 crc kubenswrapper[4985]: I0127 08:56:18.522610 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 27 08:56:18 crc kubenswrapper[4985]: I0127 08:56:18.534700 4985 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-wg68v" Jan 27 08:56:18 crc kubenswrapper[4985]: I0127 08:56:18.535781 4985 patch_prober.go:28] interesting pod/router-default-5444994796-wg68v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 08:56:18 crc kubenswrapper[4985]: [-]has-synced failed: reason withheld Jan 27 08:56:18 crc kubenswrapper[4985]: [+]process-running ok Jan 27 08:56:18 crc kubenswrapper[4985]: healthz check failed Jan 27 08:56:18 crc kubenswrapper[4985]: I0127 08:56:18.535847 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wg68v" podUID="5ba029a9-6adf-4e07-91f7-f0d33ab0cb97" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 08:56:18 crc kubenswrapper[4985]: I0127 08:56:18.599906 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hwclt"] Jan 27 08:56:18 crc kubenswrapper[4985]: I0127 08:56:18.600981 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hwclt" Jan 27 08:56:18 crc kubenswrapper[4985]: I0127 08:56:18.608112 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 27 08:56:18 crc kubenswrapper[4985]: I0127 08:56:18.613155 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hwclt"] Jan 27 08:56:18 crc kubenswrapper[4985]: I0127 08:56:18.670373 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lwlwv"] Jan 27 08:56:18 crc kubenswrapper[4985]: I0127 08:56:18.671282 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed57e787-5d65-4c3c-8a0f-f693481928ae-catalog-content\") pod \"redhat-operators-hwclt\" (UID: \"ed57e787-5d65-4c3c-8a0f-f693481928ae\") " pod="openshift-marketplace/redhat-operators-hwclt" Jan 27 08:56:18 crc kubenswrapper[4985]: I0127 08:56:18.671325 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed57e787-5d65-4c3c-8a0f-f693481928ae-utilities\") pod \"redhat-operators-hwclt\" (UID: \"ed57e787-5d65-4c3c-8a0f-f693481928ae\") " pod="openshift-marketplace/redhat-operators-hwclt" Jan 27 08:56:18 crc kubenswrapper[4985]: I0127 08:56:18.671458 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nmdk\" (UniqueName: \"kubernetes.io/projected/ed57e787-5d65-4c3c-8a0f-f693481928ae-kube-api-access-2nmdk\") pod \"redhat-operators-hwclt\" (UID: \"ed57e787-5d65-4c3c-8a0f-f693481928ae\") " pod="openshift-marketplace/redhat-operators-hwclt" Jan 27 08:56:18 crc kubenswrapper[4985]: I0127 08:56:18.714281 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 27 08:56:18 crc kubenswrapper[4985]: I0127 08:56:18.772773 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed57e787-5d65-4c3c-8a0f-f693481928ae-utilities\") pod \"redhat-operators-hwclt\" (UID: \"ed57e787-5d65-4c3c-8a0f-f693481928ae\") " pod="openshift-marketplace/redhat-operators-hwclt" Jan 27 08:56:18 crc kubenswrapper[4985]: I0127 08:56:18.772835 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed57e787-5d65-4c3c-8a0f-f693481928ae-catalog-content\") pod \"redhat-operators-hwclt\" (UID: \"ed57e787-5d65-4c3c-8a0f-f693481928ae\") " pod="openshift-marketplace/redhat-operators-hwclt" Jan 27 08:56:18 crc kubenswrapper[4985]: I0127 08:56:18.772931 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nmdk\" (UniqueName: \"kubernetes.io/projected/ed57e787-5d65-4c3c-8a0f-f693481928ae-kube-api-access-2nmdk\") pod \"redhat-operators-hwclt\" (UID: \"ed57e787-5d65-4c3c-8a0f-f693481928ae\") " pod="openshift-marketplace/redhat-operators-hwclt" Jan 27 08:56:18 crc kubenswrapper[4985]: I0127 08:56:18.773961 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed57e787-5d65-4c3c-8a0f-f693481928ae-utilities\") pod \"redhat-operators-hwclt\" (UID: \"ed57e787-5d65-4c3c-8a0f-f693481928ae\") " pod="openshift-marketplace/redhat-operators-hwclt" Jan 27 08:56:18 crc kubenswrapper[4985]: I0127 08:56:18.774168 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed57e787-5d65-4c3c-8a0f-f693481928ae-catalog-content\") pod \"redhat-operators-hwclt\" (UID: \"ed57e787-5d65-4c3c-8a0f-f693481928ae\") " pod="openshift-marketplace/redhat-operators-hwclt" Jan 27 
08:56:18 crc kubenswrapper[4985]: I0127 08:56:18.797699 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nmdk\" (UniqueName: \"kubernetes.io/projected/ed57e787-5d65-4c3c-8a0f-f693481928ae-kube-api-access-2nmdk\") pod \"redhat-operators-hwclt\" (UID: \"ed57e787-5d65-4c3c-8a0f-f693481928ae\") " pod="openshift-marketplace/redhat-operators-hwclt" Jan 27 08:56:18 crc kubenswrapper[4985]: I0127 08:56:18.913658 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qcnmg" Jan 27 08:56:18 crc kubenswrapper[4985]: I0127 08:56:18.936988 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hwclt" Jan 27 08:56:18 crc kubenswrapper[4985]: I0127 08:56:18.968898 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-cfmwq" Jan 27 08:56:19 crc kubenswrapper[4985]: I0127 08:56:18.999276 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bh6j9"] Jan 27 08:56:19 crc kubenswrapper[4985]: I0127 08:56:19.005825 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7k25x"] Jan 27 08:56:19 crc kubenswrapper[4985]: I0127 08:56:19.007905 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7k25x" Jan 27 08:56:19 crc kubenswrapper[4985]: I0127 08:56:19.022641 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7k25x"] Jan 27 08:56:19 crc kubenswrapper[4985]: I0127 08:56:19.077652 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"76fe1a18-7447-42d6-ae78-22060b8c517a","Type":"ContainerStarted","Data":"58d939eb0ff2415ba09c224bbd823c1acf2798c428eb638c771c9292a9f084e5"} Jan 27 08:56:19 crc kubenswrapper[4985]: I0127 08:56:19.077730 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"76fe1a18-7447-42d6-ae78-22060b8c517a","Type":"ContainerStarted","Data":"02d3c195e9d74bd84a9c81fbf6154ee5edbc7d6c7be20d272b1c60b9c98f0b0a"} Jan 27 08:56:19 crc kubenswrapper[4985]: I0127 08:56:19.079421 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"69a120d9-822e-4b48-be12-c181dc06f093","Type":"ContainerStarted","Data":"6f462a4923ff9200c6fae26cfe55d72839b357adecc0fae9152cb4236451604e"} Jan 27 08:56:19 crc kubenswrapper[4985]: I0127 08:56:19.087134 4985 generic.go:334] "Generic (PLEG): container finished" podID="c4ea35ca-a06c-40d2-86c2-d2c0a99da089" containerID="f92fdacd40ec95bf0dadbca5e186521bb92285e98b0e988bdf117f1ad8f55828" exitCode=0 Jan 27 08:56:19 crc kubenswrapper[4985]: I0127 08:56:19.087245 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lwlwv" event={"ID":"c4ea35ca-a06c-40d2-86c2-d2c0a99da089","Type":"ContainerDied","Data":"f92fdacd40ec95bf0dadbca5e186521bb92285e98b0e988bdf117f1ad8f55828"} Jan 27 08:56:19 crc kubenswrapper[4985]: I0127 08:56:19.087277 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lwlwv" 
event={"ID":"c4ea35ca-a06c-40d2-86c2-d2c0a99da089","Type":"ContainerStarted","Data":"3ce9228b820fc27b3c292a019053f7b697aa2f5bd835f9348e7001701e7fdf77"} Jan 27 08:56:19 crc kubenswrapper[4985]: I0127 08:56:19.092058 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7aadedcd-5a47-4d8d-a41d-e33a7a760331-catalog-content\") pod \"redhat-operators-7k25x\" (UID: \"7aadedcd-5a47-4d8d-a41d-e33a7a760331\") " pod="openshift-marketplace/redhat-operators-7k25x" Jan 27 08:56:19 crc kubenswrapper[4985]: I0127 08:56:19.092372 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7aadedcd-5a47-4d8d-a41d-e33a7a760331-utilities\") pod \"redhat-operators-7k25x\" (UID: \"7aadedcd-5a47-4d8d-a41d-e33a7a760331\") " pod="openshift-marketplace/redhat-operators-7k25x" Jan 27 08:56:19 crc kubenswrapper[4985]: I0127 08:56:19.092464 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkfwz\" (UniqueName: \"kubernetes.io/projected/7aadedcd-5a47-4d8d-a41d-e33a7a760331-kube-api-access-dkfwz\") pod \"redhat-operators-7k25x\" (UID: \"7aadedcd-5a47-4d8d-a41d-e33a7a760331\") " pod="openshift-marketplace/redhat-operators-7k25x" Jan 27 08:56:19 crc kubenswrapper[4985]: I0127 08:56:19.096862 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.096844769 podStartE2EDuration="2.096844769s" podCreationTimestamp="2026-01-27 08:56:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:56:19.09653934 +0000 UTC m=+163.387634181" watchObservedRunningTime="2026-01-27 08:56:19.096844769 +0000 UTC m=+163.387939610" Jan 27 08:56:19 crc kubenswrapper[4985]: 
I0127 08:56:19.110241 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bh6j9" event={"ID":"fa17d66c-2d07-4ce5-bfc8-45bb31adf066","Type":"ContainerStarted","Data":"904d0b49e42cf2c12f533df541e0c4bd67ada101d9720b23c584a58c821d16df"} Jan 27 08:56:19 crc kubenswrapper[4985]: I0127 08:56:19.127809 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6jb4z" Jan 27 08:56:19 crc kubenswrapper[4985]: I0127 08:56:19.206054 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7aadedcd-5a47-4d8d-a41d-e33a7a760331-catalog-content\") pod \"redhat-operators-7k25x\" (UID: \"7aadedcd-5a47-4d8d-a41d-e33a7a760331\") " pod="openshift-marketplace/redhat-operators-7k25x" Jan 27 08:56:19 crc kubenswrapper[4985]: I0127 08:56:19.208399 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7aadedcd-5a47-4d8d-a41d-e33a7a760331-utilities\") pod \"redhat-operators-7k25x\" (UID: \"7aadedcd-5a47-4d8d-a41d-e33a7a760331\") " pod="openshift-marketplace/redhat-operators-7k25x" Jan 27 08:56:19 crc kubenswrapper[4985]: I0127 08:56:19.208569 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkfwz\" (UniqueName: \"kubernetes.io/projected/7aadedcd-5a47-4d8d-a41d-e33a7a760331-kube-api-access-dkfwz\") pod \"redhat-operators-7k25x\" (UID: \"7aadedcd-5a47-4d8d-a41d-e33a7a760331\") " pod="openshift-marketplace/redhat-operators-7k25x" Jan 27 08:56:19 crc kubenswrapper[4985]: I0127 08:56:19.212165 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7aadedcd-5a47-4d8d-a41d-e33a7a760331-utilities\") pod \"redhat-operators-7k25x\" (UID: \"7aadedcd-5a47-4d8d-a41d-e33a7a760331\") " 
pod="openshift-marketplace/redhat-operators-7k25x" Jan 27 08:56:19 crc kubenswrapper[4985]: I0127 08:56:19.213262 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7aadedcd-5a47-4d8d-a41d-e33a7a760331-catalog-content\") pod \"redhat-operators-7k25x\" (UID: \"7aadedcd-5a47-4d8d-a41d-e33a7a760331\") " pod="openshift-marketplace/redhat-operators-7k25x" Jan 27 08:56:19 crc kubenswrapper[4985]: I0127 08:56:19.248254 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkfwz\" (UniqueName: \"kubernetes.io/projected/7aadedcd-5a47-4d8d-a41d-e33a7a760331-kube-api-access-dkfwz\") pod \"redhat-operators-7k25x\" (UID: \"7aadedcd-5a47-4d8d-a41d-e33a7a760331\") " pod="openshift-marketplace/redhat-operators-7k25x" Jan 27 08:56:19 crc kubenswrapper[4985]: I0127 08:56:19.395783 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7k25x" Jan 27 08:56:19 crc kubenswrapper[4985]: I0127 08:56:19.539250 4985 patch_prober.go:28] interesting pod/router-default-5444994796-wg68v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 08:56:19 crc kubenswrapper[4985]: [-]has-synced failed: reason withheld Jan 27 08:56:19 crc kubenswrapper[4985]: [+]process-running ok Jan 27 08:56:19 crc kubenswrapper[4985]: healthz check failed Jan 27 08:56:19 crc kubenswrapper[4985]: I0127 08:56:19.539501 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wg68v" podUID="5ba029a9-6adf-4e07-91f7-f0d33ab0cb97" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 08:56:19 crc kubenswrapper[4985]: I0127 08:56:19.553463 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/redhat-operators-hwclt"] Jan 27 08:56:20 crc kubenswrapper[4985]: I0127 08:56:20.002919 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7k25x"] Jan 27 08:56:20 crc kubenswrapper[4985]: I0127 08:56:20.149631 4985 generic.go:334] "Generic (PLEG): container finished" podID="fa17d66c-2d07-4ce5-bfc8-45bb31adf066" containerID="3c96c503b38ec6f220ad13ebd9d1408efee2ff72cb91a5e4e87e7afdc994ab1b" exitCode=0 Jan 27 08:56:20 crc kubenswrapper[4985]: I0127 08:56:20.150158 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bh6j9" event={"ID":"fa17d66c-2d07-4ce5-bfc8-45bb31adf066","Type":"ContainerDied","Data":"3c96c503b38ec6f220ad13ebd9d1408efee2ff72cb91a5e4e87e7afdc994ab1b"} Jan 27 08:56:20 crc kubenswrapper[4985]: I0127 08:56:20.152078 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7k25x" event={"ID":"7aadedcd-5a47-4d8d-a41d-e33a7a760331","Type":"ContainerStarted","Data":"3bf5dadf4e1046d1524a5c65b810d649333ea554a7dd6b45b898abfae7c95699"} Jan 27 08:56:20 crc kubenswrapper[4985]: I0127 08:56:20.159086 4985 generic.go:334] "Generic (PLEG): container finished" podID="76fe1a18-7447-42d6-ae78-22060b8c517a" containerID="58d939eb0ff2415ba09c224bbd823c1acf2798c428eb638c771c9292a9f084e5" exitCode=0 Jan 27 08:56:20 crc kubenswrapper[4985]: I0127 08:56:20.159148 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"76fe1a18-7447-42d6-ae78-22060b8c517a","Type":"ContainerDied","Data":"58d939eb0ff2415ba09c224bbd823c1acf2798c428eb638c771c9292a9f084e5"} Jan 27 08:56:20 crc kubenswrapper[4985]: I0127 08:56:20.173964 4985 generic.go:334] "Generic (PLEG): container finished" podID="ed57e787-5d65-4c3c-8a0f-f693481928ae" containerID="6da4f5b0d4edcac410fa7fc05d9a2adf47ff9d2e80f52fd1f9de826b819a8de8" exitCode=0 Jan 27 08:56:20 crc kubenswrapper[4985]: I0127 
08:56:20.174391 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hwclt" event={"ID":"ed57e787-5d65-4c3c-8a0f-f693481928ae","Type":"ContainerDied","Data":"6da4f5b0d4edcac410fa7fc05d9a2adf47ff9d2e80f52fd1f9de826b819a8de8"} Jan 27 08:56:20 crc kubenswrapper[4985]: I0127 08:56:20.174441 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hwclt" event={"ID":"ed57e787-5d65-4c3c-8a0f-f693481928ae","Type":"ContainerStarted","Data":"45775ccc64f8d422b7d95f294db833ce7f6fd3ba0a24dfb4bb3d5db98ea506ec"} Jan 27 08:56:20 crc kubenswrapper[4985]: I0127 08:56:20.188303 4985 generic.go:334] "Generic (PLEG): container finished" podID="69a120d9-822e-4b48-be12-c181dc06f093" containerID="7009d263993eb025e7fd736410036eeb08165a0cf91481a195a595cc2ce4ae56" exitCode=0 Jan 27 08:56:20 crc kubenswrapper[4985]: I0127 08:56:20.189709 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"69a120d9-822e-4b48-be12-c181dc06f093","Type":"ContainerDied","Data":"7009d263993eb025e7fd736410036eeb08165a0cf91481a195a595cc2ce4ae56"} Jan 27 08:56:20 crc kubenswrapper[4985]: I0127 08:56:20.537927 4985 patch_prober.go:28] interesting pod/router-default-5444994796-wg68v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 08:56:20 crc kubenswrapper[4985]: [-]has-synced failed: reason withheld Jan 27 08:56:20 crc kubenswrapper[4985]: [+]process-running ok Jan 27 08:56:20 crc kubenswrapper[4985]: healthz check failed Jan 27 08:56:20 crc kubenswrapper[4985]: I0127 08:56:20.538034 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wg68v" podUID="5ba029a9-6adf-4e07-91f7-f0d33ab0cb97" containerName="router" probeResult="failure" output="HTTP probe failed with 
statuscode: 500" Jan 27 08:56:21 crc kubenswrapper[4985]: I0127 08:56:21.220721 4985 generic.go:334] "Generic (PLEG): container finished" podID="7aadedcd-5a47-4d8d-a41d-e33a7a760331" containerID="54a67813dfb128f628786a9e568ef62cad0fde383e778857eed15dcd8a2de1be" exitCode=0 Jan 27 08:56:21 crc kubenswrapper[4985]: I0127 08:56:21.220865 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7k25x" event={"ID":"7aadedcd-5a47-4d8d-a41d-e33a7a760331","Type":"ContainerDied","Data":"54a67813dfb128f628786a9e568ef62cad0fde383e778857eed15dcd8a2de1be"} Jan 27 08:56:21 crc kubenswrapper[4985]: I0127 08:56:21.540887 4985 patch_prober.go:28] interesting pod/router-default-5444994796-wg68v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 08:56:21 crc kubenswrapper[4985]: [-]has-synced failed: reason withheld Jan 27 08:56:21 crc kubenswrapper[4985]: [+]process-running ok Jan 27 08:56:21 crc kubenswrapper[4985]: healthz check failed Jan 27 08:56:21 crc kubenswrapper[4985]: I0127 08:56:21.541008 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wg68v" podUID="5ba029a9-6adf-4e07-91f7-f0d33ab0cb97" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 08:56:21 crc kubenswrapper[4985]: I0127 08:56:21.565000 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 08:56:21 crc kubenswrapper[4985]: I0127 08:56:21.660743 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/76fe1a18-7447-42d6-ae78-22060b8c517a-kubelet-dir\") pod \"76fe1a18-7447-42d6-ae78-22060b8c517a\" (UID: \"76fe1a18-7447-42d6-ae78-22060b8c517a\") " Jan 27 08:56:21 crc kubenswrapper[4985]: I0127 08:56:21.660860 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/76fe1a18-7447-42d6-ae78-22060b8c517a-kube-api-access\") pod \"76fe1a18-7447-42d6-ae78-22060b8c517a\" (UID: \"76fe1a18-7447-42d6-ae78-22060b8c517a\") " Jan 27 08:56:21 crc kubenswrapper[4985]: I0127 08:56:21.660843 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/76fe1a18-7447-42d6-ae78-22060b8c517a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "76fe1a18-7447-42d6-ae78-22060b8c517a" (UID: "76fe1a18-7447-42d6-ae78-22060b8c517a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 08:56:21 crc kubenswrapper[4985]: I0127 08:56:21.661293 4985 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/76fe1a18-7447-42d6-ae78-22060b8c517a-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 27 08:56:21 crc kubenswrapper[4985]: I0127 08:56:21.664037 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 08:56:21 crc kubenswrapper[4985]: I0127 08:56:21.694147 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76fe1a18-7447-42d6-ae78-22060b8c517a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "76fe1a18-7447-42d6-ae78-22060b8c517a" (UID: "76fe1a18-7447-42d6-ae78-22060b8c517a"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:56:21 crc kubenswrapper[4985]: I0127 08:56:21.763871 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/69a120d9-822e-4b48-be12-c181dc06f093-kube-api-access\") pod \"69a120d9-822e-4b48-be12-c181dc06f093\" (UID: \"69a120d9-822e-4b48-be12-c181dc06f093\") " Jan 27 08:56:21 crc kubenswrapper[4985]: I0127 08:56:21.763946 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/69a120d9-822e-4b48-be12-c181dc06f093-kubelet-dir\") pod \"69a120d9-822e-4b48-be12-c181dc06f093\" (UID: \"69a120d9-822e-4b48-be12-c181dc06f093\") " Jan 27 08:56:21 crc kubenswrapper[4985]: I0127 08:56:21.764293 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/76fe1a18-7447-42d6-ae78-22060b8c517a-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 08:56:21 crc kubenswrapper[4985]: I0127 08:56:21.764353 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69a120d9-822e-4b48-be12-c181dc06f093-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "69a120d9-822e-4b48-be12-c181dc06f093" (UID: "69a120d9-822e-4b48-be12-c181dc06f093"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 08:56:21 crc kubenswrapper[4985]: I0127 08:56:21.768981 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69a120d9-822e-4b48-be12-c181dc06f093-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "69a120d9-822e-4b48-be12-c181dc06f093" (UID: "69a120d9-822e-4b48-be12-c181dc06f093"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:56:21 crc kubenswrapper[4985]: I0127 08:56:21.866325 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/69a120d9-822e-4b48-be12-c181dc06f093-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 08:56:21 crc kubenswrapper[4985]: I0127 08:56:21.866457 4985 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/69a120d9-822e-4b48-be12-c181dc06f093-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 27 08:56:22 crc kubenswrapper[4985]: I0127 08:56:22.240223 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"69a120d9-822e-4b48-be12-c181dc06f093","Type":"ContainerDied","Data":"6f462a4923ff9200c6fae26cfe55d72839b357adecc0fae9152cb4236451604e"} Jan 27 08:56:22 crc kubenswrapper[4985]: I0127 08:56:22.240273 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f462a4923ff9200c6fae26cfe55d72839b357adecc0fae9152cb4236451604e" Jan 27 08:56:22 crc kubenswrapper[4985]: I0127 08:56:22.240272 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 08:56:22 crc kubenswrapper[4985]: I0127 08:56:22.247086 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"76fe1a18-7447-42d6-ae78-22060b8c517a","Type":"ContainerDied","Data":"02d3c195e9d74bd84a9c81fbf6154ee5edbc7d6c7be20d272b1c60b9c98f0b0a"} Jan 27 08:56:22 crc kubenswrapper[4985]: I0127 08:56:22.247132 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02d3c195e9d74bd84a9c81fbf6154ee5edbc7d6c7be20d272b1c60b9c98f0b0a" Jan 27 08:56:22 crc kubenswrapper[4985]: I0127 08:56:22.247208 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 08:56:22 crc kubenswrapper[4985]: I0127 08:56:22.540448 4985 patch_prober.go:28] interesting pod/router-default-5444994796-wg68v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 08:56:22 crc kubenswrapper[4985]: [-]has-synced failed: reason withheld Jan 27 08:56:22 crc kubenswrapper[4985]: [+]process-running ok Jan 27 08:56:22 crc kubenswrapper[4985]: healthz check failed Jan 27 08:56:22 crc kubenswrapper[4985]: I0127 08:56:22.540540 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wg68v" podUID="5ba029a9-6adf-4e07-91f7-f0d33ab0cb97" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 08:56:22 crc kubenswrapper[4985]: I0127 08:56:22.692592 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-vl84l" Jan 27 08:56:22 crc kubenswrapper[4985]: I0127 08:56:22.707494 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-apiserver/apiserver-76f77b778f-vl84l" Jan 27 08:56:23 crc kubenswrapper[4985]: I0127 08:56:23.535610 4985 patch_prober.go:28] interesting pod/router-default-5444994796-wg68v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 08:56:23 crc kubenswrapper[4985]: [-]has-synced failed: reason withheld Jan 27 08:56:23 crc kubenswrapper[4985]: [+]process-running ok Jan 27 08:56:23 crc kubenswrapper[4985]: healthz check failed Jan 27 08:56:23 crc kubenswrapper[4985]: I0127 08:56:23.535669 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wg68v" podUID="5ba029a9-6adf-4e07-91f7-f0d33ab0cb97" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 08:56:24 crc kubenswrapper[4985]: I0127 08:56:24.001467 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-vmpf5" Jan 27 08:56:24 crc kubenswrapper[4985]: I0127 08:56:24.536440 4985 patch_prober.go:28] interesting pod/router-default-5444994796-wg68v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 08:56:24 crc kubenswrapper[4985]: [-]has-synced failed: reason withheld Jan 27 08:56:24 crc kubenswrapper[4985]: [+]process-running ok Jan 27 08:56:24 crc kubenswrapper[4985]: healthz check failed Jan 27 08:56:24 crc kubenswrapper[4985]: I0127 08:56:24.536524 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wg68v" podUID="5ba029a9-6adf-4e07-91f7-f0d33ab0cb97" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 08:56:25 crc kubenswrapper[4985]: I0127 08:56:25.327472 4985 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5c870945-eecc-4954-a91b-d02cef8f98e2-metrics-certs\") pod \"network-metrics-daemon-cscdv\" (UID: \"5c870945-eecc-4954-a91b-d02cef8f98e2\") " pod="openshift-multus/network-metrics-daemon-cscdv" Jan 27 08:56:25 crc kubenswrapper[4985]: I0127 08:56:25.336239 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5c870945-eecc-4954-a91b-d02cef8f98e2-metrics-certs\") pod \"network-metrics-daemon-cscdv\" (UID: \"5c870945-eecc-4954-a91b-d02cef8f98e2\") " pod="openshift-multus/network-metrics-daemon-cscdv" Jan 27 08:56:25 crc kubenswrapper[4985]: I0127 08:56:25.368116 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cscdv" Jan 27 08:56:25 crc kubenswrapper[4985]: I0127 08:56:25.533648 4985 patch_prober.go:28] interesting pod/router-default-5444994796-wg68v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 08:56:25 crc kubenswrapper[4985]: [-]has-synced failed: reason withheld Jan 27 08:56:25 crc kubenswrapper[4985]: [+]process-running ok Jan 27 08:56:25 crc kubenswrapper[4985]: healthz check failed Jan 27 08:56:25 crc kubenswrapper[4985]: I0127 08:56:25.533949 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wg68v" podUID="5ba029a9-6adf-4e07-91f7-f0d33ab0cb97" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 08:56:26 crc kubenswrapper[4985]: I0127 08:56:26.548761 4985 patch_prober.go:28] interesting pod/router-default-5444994796-wg68v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: 
reason withheld Jan 27 08:56:26 crc kubenswrapper[4985]: [-]has-synced failed: reason withheld Jan 27 08:56:26 crc kubenswrapper[4985]: [+]process-running ok Jan 27 08:56:26 crc kubenswrapper[4985]: healthz check failed Jan 27 08:56:26 crc kubenswrapper[4985]: I0127 08:56:26.548854 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wg68v" podUID="5ba029a9-6adf-4e07-91f7-f0d33ab0cb97" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 08:56:27 crc kubenswrapper[4985]: I0127 08:56:27.534699 4985 patch_prober.go:28] interesting pod/router-default-5444994796-wg68v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 08:56:27 crc kubenswrapper[4985]: [-]has-synced failed: reason withheld Jan 27 08:56:27 crc kubenswrapper[4985]: [+]process-running ok Jan 27 08:56:27 crc kubenswrapper[4985]: healthz check failed Jan 27 08:56:27 crc kubenswrapper[4985]: I0127 08:56:27.534791 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wg68v" podUID="5ba029a9-6adf-4e07-91f7-f0d33ab0cb97" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 08:56:27 crc kubenswrapper[4985]: I0127 08:56:27.777613 4985 patch_prober.go:28] interesting pod/console-f9d7485db-q7dv9 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.24:8443/health\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Jan 27 08:56:27 crc kubenswrapper[4985]: I0127 08:56:27.777745 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-q7dv9" podUID="5bd4e7de-4244-4c33-90eb-799159106b7b" containerName="console" probeResult="failure" output="Get \"https://10.217.0.24:8443/health\": dial 
tcp 10.217.0.24:8443: connect: connection refused" Jan 27 08:56:28 crc kubenswrapper[4985]: I0127 08:56:28.406266 4985 patch_prober.go:28] interesting pod/downloads-7954f5f757-qx7rg container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 27 08:56:28 crc kubenswrapper[4985]: I0127 08:56:28.406781 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-qx7rg" podUID="f7e5fb60-49e2-4aec-be4d-71f7f0dd4ea1" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 27 08:56:28 crc kubenswrapper[4985]: I0127 08:56:28.406312 4985 patch_prober.go:28] interesting pod/downloads-7954f5f757-qx7rg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 27 08:56:28 crc kubenswrapper[4985]: I0127 08:56:28.406898 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-qx7rg" podUID="f7e5fb60-49e2-4aec-be4d-71f7f0dd4ea1" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 27 08:56:28 crc kubenswrapper[4985]: I0127 08:56:28.535933 4985 patch_prober.go:28] interesting pod/router-default-5444994796-wg68v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 08:56:28 crc kubenswrapper[4985]: [-]has-synced failed: reason withheld Jan 27 08:56:28 crc kubenswrapper[4985]: [+]process-running ok Jan 27 08:56:28 crc kubenswrapper[4985]: healthz check failed Jan 27 08:56:28 crc 
kubenswrapper[4985]: I0127 08:56:28.536021 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wg68v" podUID="5ba029a9-6adf-4e07-91f7-f0d33ab0cb97" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 08:56:29 crc kubenswrapper[4985]: I0127 08:56:29.535847 4985 patch_prober.go:28] interesting pod/router-default-5444994796-wg68v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 08:56:29 crc kubenswrapper[4985]: [-]has-synced failed: reason withheld Jan 27 08:56:29 crc kubenswrapper[4985]: [+]process-running ok Jan 27 08:56:29 crc kubenswrapper[4985]: healthz check failed Jan 27 08:56:29 crc kubenswrapper[4985]: I0127 08:56:29.535967 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wg68v" podUID="5ba029a9-6adf-4e07-91f7-f0d33ab0cb97" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 08:56:30 crc kubenswrapper[4985]: I0127 08:56:30.535368 4985 patch_prober.go:28] interesting pod/router-default-5444994796-wg68v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 08:56:30 crc kubenswrapper[4985]: [+]has-synced ok Jan 27 08:56:30 crc kubenswrapper[4985]: [+]process-running ok Jan 27 08:56:30 crc kubenswrapper[4985]: healthz check failed Jan 27 08:56:30 crc kubenswrapper[4985]: I0127 08:56:30.535497 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wg68v" podUID="5ba029a9-6adf-4e07-91f7-f0d33ab0cb97" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 08:56:31 crc kubenswrapper[4985]: I0127 08:56:31.535310 4985 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-wg68v" Jan 27 08:56:31 crc kubenswrapper[4985]: I0127 08:56:31.538464 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-wg68v" Jan 27 08:56:33 crc kubenswrapper[4985]: I0127 08:56:33.897613 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-x4cs4"] Jan 27 08:56:33 crc kubenswrapper[4985]: I0127 08:56:33.898705 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-x4cs4" podUID="72fd06a7-765f-4f95-89f1-3bd8a0fa466b" containerName="controller-manager" containerID="cri-o://cc6bfb4233eb7543fe122604793466cb7adc42820d39984d7edde7ce33c74e06" gracePeriod=30 Jan 27 08:56:33 crc kubenswrapper[4985]: I0127 08:56:33.906118 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-jjksr"] Jan 27 08:56:33 crc kubenswrapper[4985]: I0127 08:56:33.906411 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jjksr" podUID="c7eca83d-b3cb-484f-9e20-f04ceedd8c99" containerName="route-controller-manager" containerID="cri-o://9921fadc44fd47ef950dca6f4f1496066074e9fb66b2e127760b82c662fb52f9" gracePeriod=30 Jan 27 08:56:34 crc kubenswrapper[4985]: I0127 08:56:34.348436 4985 generic.go:334] "Generic (PLEG): container finished" podID="c7eca83d-b3cb-484f-9e20-f04ceedd8c99" containerID="9921fadc44fd47ef950dca6f4f1496066074e9fb66b2e127760b82c662fb52f9" exitCode=0 Jan 27 08:56:34 crc kubenswrapper[4985]: I0127 08:56:34.348548 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jjksr" 
event={"ID":"c7eca83d-b3cb-484f-9e20-f04ceedd8c99","Type":"ContainerDied","Data":"9921fadc44fd47ef950dca6f4f1496066074e9fb66b2e127760b82c662fb52f9"} Jan 27 08:56:35 crc kubenswrapper[4985]: I0127 08:56:35.357086 4985 generic.go:334] "Generic (PLEG): container finished" podID="72fd06a7-765f-4f95-89f1-3bd8a0fa466b" containerID="cc6bfb4233eb7543fe122604793466cb7adc42820d39984d7edde7ce33c74e06" exitCode=0 Jan 27 08:56:35 crc kubenswrapper[4985]: I0127 08:56:35.357152 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-x4cs4" event={"ID":"72fd06a7-765f-4f95-89f1-3bd8a0fa466b","Type":"ContainerDied","Data":"cc6bfb4233eb7543fe122604793466cb7adc42820d39984d7edde7ce33c74e06"} Jan 27 08:56:36 crc kubenswrapper[4985]: I0127 08:56:36.565891 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-jn7wk" Jan 27 08:56:37 crc kubenswrapper[4985]: I0127 08:56:37.842799 4985 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-x4cs4 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 27 08:56:37 crc kubenswrapper[4985]: I0127 08:56:37.842906 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-x4cs4" podUID="72fd06a7-765f-4f95-89f1-3bd8a0fa466b" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Jan 27 08:56:37 crc kubenswrapper[4985]: I0127 08:56:37.904506 4985 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-jjksr container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get 
\"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Jan 27 08:56:37 crc kubenswrapper[4985]: I0127 08:56:37.904641 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jjksr" podUID="c7eca83d-b3cb-484f-9e20-f04ceedd8c99" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Jan 27 08:56:38 crc kubenswrapper[4985]: I0127 08:56:38.043498 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-q7dv9" Jan 27 08:56:38 crc kubenswrapper[4985]: I0127 08:56:38.050989 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-q7dv9" Jan 27 08:56:38 crc kubenswrapper[4985]: I0127 08:56:38.541652 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-qx7rg" Jan 27 08:56:41 crc kubenswrapper[4985]: I0127 08:56:41.829050 4985 patch_prober.go:28] interesting pod/machine-config-daemon-lp9n5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 08:56:41 crc kubenswrapper[4985]: I0127 08:56:41.829174 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 08:56:44 crc kubenswrapper[4985]: I0127 08:56:44.804043 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 08:56:48 crc kubenswrapper[4985]: I0127 08:56:48.293661 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-n5s9g" Jan 27 08:56:48 crc kubenswrapper[4985]: I0127 08:56:48.842669 4985 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-x4cs4 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 08:56:48 crc kubenswrapper[4985]: I0127 08:56:48.842807 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-x4cs4" podUID="72fd06a7-765f-4f95-89f1-3bd8a0fa466b" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 08:56:48 crc kubenswrapper[4985]: I0127 08:56:48.904426 4985 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-jjksr container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 08:56:48 crc kubenswrapper[4985]: I0127 08:56:48.904590 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jjksr" podUID="c7eca83d-b3cb-484f-9e20-f04ceedd8c99" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled while waiting for connection 
(Client.Timeout exceeded while awaiting headers)" Jan 27 08:56:49 crc kubenswrapper[4985]: E0127 08:56:49.730090 4985 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 27 08:56:49 crc kubenswrapper[4985]: E0127 08:56:49.730634 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4jprv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},Restar
tPolicy:nil,} start failed in pod redhat-marketplace-lwlwv_openshift-marketplace(c4ea35ca-a06c-40d2-86c2-d2c0a99da089): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 08:56:49 crc kubenswrapper[4985]: E0127 08:56:49.732088 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-lwlwv" podUID="c4ea35ca-a06c-40d2-86c2-d2c0a99da089" Jan 27 08:56:50 crc kubenswrapper[4985]: E0127 08:56:50.094440 4985 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 27 08:56:50 crc kubenswrapper[4985]: E0127 08:56:50.094688 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zlchc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-bh6j9_openshift-marketplace(fa17d66c-2d07-4ce5-bfc8-45bb31adf066): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 08:56:50 crc kubenswrapper[4985]: E0127 08:56:50.095913 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-bh6j9" podUID="fa17d66c-2d07-4ce5-bfc8-45bb31adf066" Jan 27 08:56:53 crc 
kubenswrapper[4985]: E0127 08:56:53.591046 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bh6j9" podUID="fa17d66c-2d07-4ce5-bfc8-45bb31adf066" Jan 27 08:56:53 crc kubenswrapper[4985]: E0127 08:56:53.591268 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-lwlwv" podUID="c4ea35ca-a06c-40d2-86c2-d2c0a99da089" Jan 27 08:56:53 crc kubenswrapper[4985]: E0127 08:56:53.686656 4985 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 27 08:56:53 crc kubenswrapper[4985]: E0127 08:56:53.687317 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fhhhg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-q87vc_openshift-marketplace(bd8d30fe-7369-4ea0-830d-b8fffca6bd10): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 08:56:53 crc kubenswrapper[4985]: E0127 08:56:53.688852 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-q87vc" podUID="bd8d30fe-7369-4ea0-830d-b8fffca6bd10" Jan 27 08:56:55 crc 
kubenswrapper[4985]: E0127 08:56:55.306085 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-q87vc" podUID="bd8d30fe-7369-4ea0-830d-b8fffca6bd10" Jan 27 08:56:55 crc kubenswrapper[4985]: E0127 08:56:55.390755 4985 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 27 08:56:55 crc kubenswrapper[4985]: E0127 08:56:55.390999 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gjmrj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-m49st_openshift-marketplace(6b2d7f94-92b7-4593-8496-31db09afdf39): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 08:56:55 crc kubenswrapper[4985]: E0127 08:56:55.392240 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-m49st" podUID="6b2d7f94-92b7-4593-8496-31db09afdf39" Jan 27 08:56:55 crc 
kubenswrapper[4985]: E0127 08:56:55.415880 4985 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 27 08:56:55 crc kubenswrapper[4985]: E0127 08:56:55.416138 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c8m66,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
community-operators-f2gdx_openshift-marketplace(e143ff56-0606-4500-bac1-21d0d3f607ee): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 08:56:55 crc kubenswrapper[4985]: E0127 08:56:55.417584 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-f2gdx" podUID="e143ff56-0606-4500-bac1-21d0d3f607ee" Jan 27 08:56:58 crc kubenswrapper[4985]: I0127 08:56:58.517204 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 27 08:56:58 crc kubenswrapper[4985]: E0127 08:56:58.523000 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69a120d9-822e-4b48-be12-c181dc06f093" containerName="pruner" Jan 27 08:56:58 crc kubenswrapper[4985]: I0127 08:56:58.523026 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="69a120d9-822e-4b48-be12-c181dc06f093" containerName="pruner" Jan 27 08:56:58 crc kubenswrapper[4985]: E0127 08:56:58.523039 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76fe1a18-7447-42d6-ae78-22060b8c517a" containerName="pruner" Jan 27 08:56:58 crc kubenswrapper[4985]: I0127 08:56:58.523048 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="76fe1a18-7447-42d6-ae78-22060b8c517a" containerName="pruner" Jan 27 08:56:58 crc kubenswrapper[4985]: I0127 08:56:58.523167 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="76fe1a18-7447-42d6-ae78-22060b8c517a" containerName="pruner" Jan 27 08:56:58 crc kubenswrapper[4985]: I0127 08:56:58.523180 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="69a120d9-822e-4b48-be12-c181dc06f093" containerName="pruner" Jan 27 08:56:58 crc kubenswrapper[4985]: 
I0127 08:56:58.523785 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 08:56:58 crc kubenswrapper[4985]: I0127 08:56:58.526455 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 27 08:56:58 crc kubenswrapper[4985]: I0127 08:56:58.526753 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 27 08:56:58 crc kubenswrapper[4985]: I0127 08:56:58.542915 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 27 08:56:58 crc kubenswrapper[4985]: I0127 08:56:58.691210 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/109d6cbe-0671-4faf-a4a6-e3618a17fe01-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"109d6cbe-0671-4faf-a4a6-e3618a17fe01\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 08:56:58 crc kubenswrapper[4985]: I0127 08:56:58.691285 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/109d6cbe-0671-4faf-a4a6-e3618a17fe01-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"109d6cbe-0671-4faf-a4a6-e3618a17fe01\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 08:56:58 crc kubenswrapper[4985]: I0127 08:56:58.792612 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/109d6cbe-0671-4faf-a4a6-e3618a17fe01-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"109d6cbe-0671-4faf-a4a6-e3618a17fe01\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 08:56:58 crc kubenswrapper[4985]: I0127 08:56:58.792727 4985 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/109d6cbe-0671-4faf-a4a6-e3618a17fe01-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"109d6cbe-0671-4faf-a4a6-e3618a17fe01\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 08:56:58 crc kubenswrapper[4985]: I0127 08:56:58.792784 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/109d6cbe-0671-4faf-a4a6-e3618a17fe01-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"109d6cbe-0671-4faf-a4a6-e3618a17fe01\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 08:56:58 crc kubenswrapper[4985]: I0127 08:56:58.818862 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/109d6cbe-0671-4faf-a4a6-e3618a17fe01-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"109d6cbe-0671-4faf-a4a6-e3618a17fe01\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 08:56:58 crc kubenswrapper[4985]: I0127 08:56:58.841623 4985 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-x4cs4 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 08:56:58 crc kubenswrapper[4985]: I0127 08:56:58.842082 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-x4cs4" podUID="72fd06a7-765f-4f95-89f1-3bd8a0fa466b" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 08:56:58 crc kubenswrapper[4985]: I0127 08:56:58.864022 4985 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 08:56:58 crc kubenswrapper[4985]: I0127 08:56:58.904888 4985 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-jjksr container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 08:56:58 crc kubenswrapper[4985]: I0127 08:56:58.904978 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jjksr" podUID="c7eca83d-b3cb-484f-9e20-f04ceedd8c99" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 08:56:59 crc kubenswrapper[4985]: E0127 08:56:59.130625 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-f2gdx" podUID="e143ff56-0606-4500-bac1-21d0d3f607ee" Jan 27 08:56:59 crc kubenswrapper[4985]: E0127 08:56:59.130721 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-m49st" podUID="6b2d7f94-92b7-4593-8496-31db09afdf39" Jan 27 08:56:59 crc kubenswrapper[4985]: I0127 08:56:59.213647 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jjksr" Jan 27 08:56:59 crc kubenswrapper[4985]: I0127 08:56:59.216391 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-x4cs4" Jan 27 08:56:59 crc kubenswrapper[4985]: I0127 08:56:59.248958 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f7b8bdcc6-pdzgn"] Jan 27 08:56:59 crc kubenswrapper[4985]: E0127 08:56:59.249435 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7eca83d-b3cb-484f-9e20-f04ceedd8c99" containerName="route-controller-manager" Jan 27 08:56:59 crc kubenswrapper[4985]: I0127 08:56:59.249449 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7eca83d-b3cb-484f-9e20-f04ceedd8c99" containerName="route-controller-manager" Jan 27 08:56:59 crc kubenswrapper[4985]: E0127 08:56:59.249464 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72fd06a7-765f-4f95-89f1-3bd8a0fa466b" containerName="controller-manager" Jan 27 08:56:59 crc kubenswrapper[4985]: I0127 08:56:59.249472 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="72fd06a7-765f-4f95-89f1-3bd8a0fa466b" containerName="controller-manager" Jan 27 08:56:59 crc kubenswrapper[4985]: I0127 08:56:59.249605 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="72fd06a7-765f-4f95-89f1-3bd8a0fa466b" containerName="controller-manager" Jan 27 08:56:59 crc kubenswrapper[4985]: I0127 08:56:59.249626 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7eca83d-b3cb-484f-9e20-f04ceedd8c99" containerName="route-controller-manager" Jan 27 08:56:59 crc kubenswrapper[4985]: I0127 08:56:59.250074 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f7b8bdcc6-pdzgn" Jan 27 08:56:59 crc kubenswrapper[4985]: I0127 08:56:59.271436 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f7b8bdcc6-pdzgn"] Jan 27 08:56:59 crc kubenswrapper[4985]: I0127 08:56:59.401666 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c7eca83d-b3cb-484f-9e20-f04ceedd8c99-client-ca\") pod \"c7eca83d-b3cb-484f-9e20-f04ceedd8c99\" (UID: \"c7eca83d-b3cb-484f-9e20-f04ceedd8c99\") " Jan 27 08:56:59 crc kubenswrapper[4985]: I0127 08:56:59.402138 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/72fd06a7-765f-4f95-89f1-3bd8a0fa466b-proxy-ca-bundles\") pod \"72fd06a7-765f-4f95-89f1-3bd8a0fa466b\" (UID: \"72fd06a7-765f-4f95-89f1-3bd8a0fa466b\") " Jan 27 08:56:59 crc kubenswrapper[4985]: I0127 08:56:59.402188 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7eca83d-b3cb-484f-9e20-f04ceedd8c99-config\") pod \"c7eca83d-b3cb-484f-9e20-f04ceedd8c99\" (UID: \"c7eca83d-b3cb-484f-9e20-f04ceedd8c99\") " Jan 27 08:56:59 crc kubenswrapper[4985]: I0127 08:56:59.402276 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72fd06a7-765f-4f95-89f1-3bd8a0fa466b-serving-cert\") pod \"72fd06a7-765f-4f95-89f1-3bd8a0fa466b\" (UID: \"72fd06a7-765f-4f95-89f1-3bd8a0fa466b\") " Jan 27 08:56:59 crc kubenswrapper[4985]: I0127 08:56:59.402309 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/72fd06a7-765f-4f95-89f1-3bd8a0fa466b-client-ca\") pod \"72fd06a7-765f-4f95-89f1-3bd8a0fa466b\" (UID: 
\"72fd06a7-765f-4f95-89f1-3bd8a0fa466b\") " Jan 27 08:56:59 crc kubenswrapper[4985]: I0127 08:56:59.402329 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72fd06a7-765f-4f95-89f1-3bd8a0fa466b-config\") pod \"72fd06a7-765f-4f95-89f1-3bd8a0fa466b\" (UID: \"72fd06a7-765f-4f95-89f1-3bd8a0fa466b\") " Jan 27 08:56:59 crc kubenswrapper[4985]: I0127 08:56:59.402357 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7eca83d-b3cb-484f-9e20-f04ceedd8c99-serving-cert\") pod \"c7eca83d-b3cb-484f-9e20-f04ceedd8c99\" (UID: \"c7eca83d-b3cb-484f-9e20-f04ceedd8c99\") " Jan 27 08:56:59 crc kubenswrapper[4985]: I0127 08:56:59.402424 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gpvxl\" (UniqueName: \"kubernetes.io/projected/72fd06a7-765f-4f95-89f1-3bd8a0fa466b-kube-api-access-gpvxl\") pod \"72fd06a7-765f-4f95-89f1-3bd8a0fa466b\" (UID: \"72fd06a7-765f-4f95-89f1-3bd8a0fa466b\") " Jan 27 08:56:59 crc kubenswrapper[4985]: I0127 08:56:59.402447 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shb5f\" (UniqueName: \"kubernetes.io/projected/c7eca83d-b3cb-484f-9e20-f04ceedd8c99-kube-api-access-shb5f\") pod \"c7eca83d-b3cb-484f-9e20-f04ceedd8c99\" (UID: \"c7eca83d-b3cb-484f-9e20-f04ceedd8c99\") " Jan 27 08:56:59 crc kubenswrapper[4985]: I0127 08:56:59.402708 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mwc8\" (UniqueName: \"kubernetes.io/projected/88dc63eb-4862-4600-9621-7931835c8091-kube-api-access-5mwc8\") pod \"route-controller-manager-f7b8bdcc6-pdzgn\" (UID: \"88dc63eb-4862-4600-9621-7931835c8091\") " pod="openshift-route-controller-manager/route-controller-manager-f7b8bdcc6-pdzgn" Jan 27 08:56:59 crc kubenswrapper[4985]: I0127 08:56:59.402752 
4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88dc63eb-4862-4600-9621-7931835c8091-config\") pod \"route-controller-manager-f7b8bdcc6-pdzgn\" (UID: \"88dc63eb-4862-4600-9621-7931835c8091\") " pod="openshift-route-controller-manager/route-controller-manager-f7b8bdcc6-pdzgn" Jan 27 08:56:59 crc kubenswrapper[4985]: I0127 08:56:59.402802 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88dc63eb-4862-4600-9621-7931835c8091-serving-cert\") pod \"route-controller-manager-f7b8bdcc6-pdzgn\" (UID: \"88dc63eb-4862-4600-9621-7931835c8091\") " pod="openshift-route-controller-manager/route-controller-manager-f7b8bdcc6-pdzgn" Jan 27 08:56:59 crc kubenswrapper[4985]: I0127 08:56:59.402825 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/88dc63eb-4862-4600-9621-7931835c8091-client-ca\") pod \"route-controller-manager-f7b8bdcc6-pdzgn\" (UID: \"88dc63eb-4862-4600-9621-7931835c8091\") " pod="openshift-route-controller-manager/route-controller-manager-f7b8bdcc6-pdzgn" Jan 27 08:56:59 crc kubenswrapper[4985]: I0127 08:56:59.403058 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7eca83d-b3cb-484f-9e20-f04ceedd8c99-client-ca" (OuterVolumeSpecName: "client-ca") pod "c7eca83d-b3cb-484f-9e20-f04ceedd8c99" (UID: "c7eca83d-b3cb-484f-9e20-f04ceedd8c99"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:56:59 crc kubenswrapper[4985]: I0127 08:56:59.404421 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7eca83d-b3cb-484f-9e20-f04ceedd8c99-config" (OuterVolumeSpecName: "config") pod "c7eca83d-b3cb-484f-9e20-f04ceedd8c99" (UID: "c7eca83d-b3cb-484f-9e20-f04ceedd8c99"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:56:59 crc kubenswrapper[4985]: I0127 08:56:59.404451 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72fd06a7-765f-4f95-89f1-3bd8a0fa466b-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "72fd06a7-765f-4f95-89f1-3bd8a0fa466b" (UID: "72fd06a7-765f-4f95-89f1-3bd8a0fa466b"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:56:59 crc kubenswrapper[4985]: I0127 08:56:59.404493 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72fd06a7-765f-4f95-89f1-3bd8a0fa466b-config" (OuterVolumeSpecName: "config") pod "72fd06a7-765f-4f95-89f1-3bd8a0fa466b" (UID: "72fd06a7-765f-4f95-89f1-3bd8a0fa466b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:56:59 crc kubenswrapper[4985]: I0127 08:56:59.404650 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72fd06a7-765f-4f95-89f1-3bd8a0fa466b-client-ca" (OuterVolumeSpecName: "client-ca") pod "72fd06a7-765f-4f95-89f1-3bd8a0fa466b" (UID: "72fd06a7-765f-4f95-89f1-3bd8a0fa466b"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:56:59 crc kubenswrapper[4985]: I0127 08:56:59.410676 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7eca83d-b3cb-484f-9e20-f04ceedd8c99-kube-api-access-shb5f" (OuterVolumeSpecName: "kube-api-access-shb5f") pod "c7eca83d-b3cb-484f-9e20-f04ceedd8c99" (UID: "c7eca83d-b3cb-484f-9e20-f04ceedd8c99"). InnerVolumeSpecName "kube-api-access-shb5f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:56:59 crc kubenswrapper[4985]: I0127 08:56:59.410897 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7eca83d-b3cb-484f-9e20-f04ceedd8c99-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c7eca83d-b3cb-484f-9e20-f04ceedd8c99" (UID: "c7eca83d-b3cb-484f-9e20-f04ceedd8c99"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:56:59 crc kubenswrapper[4985]: I0127 08:56:59.414453 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72fd06a7-765f-4f95-89f1-3bd8a0fa466b-kube-api-access-gpvxl" (OuterVolumeSpecName: "kube-api-access-gpvxl") pod "72fd06a7-765f-4f95-89f1-3bd8a0fa466b" (UID: "72fd06a7-765f-4f95-89f1-3bd8a0fa466b"). InnerVolumeSpecName "kube-api-access-gpvxl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:56:59 crc kubenswrapper[4985]: I0127 08:56:59.415789 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72fd06a7-765f-4f95-89f1-3bd8a0fa466b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "72fd06a7-765f-4f95-89f1-3bd8a0fa466b" (UID: "72fd06a7-765f-4f95-89f1-3bd8a0fa466b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:56:59 crc kubenswrapper[4985]: I0127 08:56:59.435656 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-cscdv"] Jan 27 08:56:59 crc kubenswrapper[4985]: E0127 08:56:59.445920 4985 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 27 08:56:59 crc kubenswrapper[4985]: E0127 08:56:59.446117 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dkfwz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:fals
e,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-7k25x_openshift-marketplace(7aadedcd-5a47-4d8d-a41d-e33a7a760331): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 08:56:59 crc kubenswrapper[4985]: E0127 08:56:59.447596 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-7k25x" podUID="7aadedcd-5a47-4d8d-a41d-e33a7a760331" Jan 27 08:56:59 crc kubenswrapper[4985]: I0127 08:56:59.492644 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 27 08:56:59 crc kubenswrapper[4985]: E0127 08:56:59.494381 4985 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 27 08:56:59 crc kubenswrapper[4985]: E0127 08:56:59.494655 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2nmdk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-hwclt_openshift-marketplace(ed57e787-5d65-4c3c-8a0f-f693481928ae): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 08:56:59 crc kubenswrapper[4985]: E0127 08:56:59.495928 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-hwclt" podUID="ed57e787-5d65-4c3c-8a0f-f693481928ae" Jan 27 08:56:59 crc 
kubenswrapper[4985]: I0127 08:56:59.504682 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88dc63eb-4862-4600-9621-7931835c8091-serving-cert\") pod \"route-controller-manager-f7b8bdcc6-pdzgn\" (UID: \"88dc63eb-4862-4600-9621-7931835c8091\") " pod="openshift-route-controller-manager/route-controller-manager-f7b8bdcc6-pdzgn" Jan 27 08:56:59 crc kubenswrapper[4985]: I0127 08:56:59.504737 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/88dc63eb-4862-4600-9621-7931835c8091-client-ca\") pod \"route-controller-manager-f7b8bdcc6-pdzgn\" (UID: \"88dc63eb-4862-4600-9621-7931835c8091\") " pod="openshift-route-controller-manager/route-controller-manager-f7b8bdcc6-pdzgn" Jan 27 08:56:59 crc kubenswrapper[4985]: I0127 08:56:59.504793 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mwc8\" (UniqueName: \"kubernetes.io/projected/88dc63eb-4862-4600-9621-7931835c8091-kube-api-access-5mwc8\") pod \"route-controller-manager-f7b8bdcc6-pdzgn\" (UID: \"88dc63eb-4862-4600-9621-7931835c8091\") " pod="openshift-route-controller-manager/route-controller-manager-f7b8bdcc6-pdzgn" Jan 27 08:56:59 crc kubenswrapper[4985]: I0127 08:56:59.504830 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88dc63eb-4862-4600-9621-7931835c8091-config\") pod \"route-controller-manager-f7b8bdcc6-pdzgn\" (UID: \"88dc63eb-4862-4600-9621-7931835c8091\") " pod="openshift-route-controller-manager/route-controller-manager-f7b8bdcc6-pdzgn" Jan 27 08:56:59 crc kubenswrapper[4985]: I0127 08:56:59.504890 4985 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72fd06a7-765f-4f95-89f1-3bd8a0fa466b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 08:56:59 crc 
kubenswrapper[4985]: I0127 08:56:59.504904 4985 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/72fd06a7-765f-4f95-89f1-3bd8a0fa466b-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 08:56:59 crc kubenswrapper[4985]: I0127 08:56:59.504912 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72fd06a7-765f-4f95-89f1-3bd8a0fa466b-config\") on node \"crc\" DevicePath \"\"" Jan 27 08:56:59 crc kubenswrapper[4985]: I0127 08:56:59.504923 4985 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7eca83d-b3cb-484f-9e20-f04ceedd8c99-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 08:56:59 crc kubenswrapper[4985]: I0127 08:56:59.504937 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gpvxl\" (UniqueName: \"kubernetes.io/projected/72fd06a7-765f-4f95-89f1-3bd8a0fa466b-kube-api-access-gpvxl\") on node \"crc\" DevicePath \"\"" Jan 27 08:56:59 crc kubenswrapper[4985]: I0127 08:56:59.504952 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shb5f\" (UniqueName: \"kubernetes.io/projected/c7eca83d-b3cb-484f-9e20-f04ceedd8c99-kube-api-access-shb5f\") on node \"crc\" DevicePath \"\"" Jan 27 08:56:59 crc kubenswrapper[4985]: I0127 08:56:59.504962 4985 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c7eca83d-b3cb-484f-9e20-f04ceedd8c99-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 08:56:59 crc kubenswrapper[4985]: I0127 08:56:59.504972 4985 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/72fd06a7-765f-4f95-89f1-3bd8a0fa466b-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 08:56:59 crc kubenswrapper[4985]: I0127 08:56:59.504980 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/c7eca83d-b3cb-484f-9e20-f04ceedd8c99-config\") on node \"crc\" DevicePath \"\"" Jan 27 08:56:59 crc kubenswrapper[4985]: I0127 08:56:59.506042 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/88dc63eb-4862-4600-9621-7931835c8091-client-ca\") pod \"route-controller-manager-f7b8bdcc6-pdzgn\" (UID: \"88dc63eb-4862-4600-9621-7931835c8091\") " pod="openshift-route-controller-manager/route-controller-manager-f7b8bdcc6-pdzgn" Jan 27 08:56:59 crc kubenswrapper[4985]: I0127 08:56:59.506275 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88dc63eb-4862-4600-9621-7931835c8091-config\") pod \"route-controller-manager-f7b8bdcc6-pdzgn\" (UID: \"88dc63eb-4862-4600-9621-7931835c8091\") " pod="openshift-route-controller-manager/route-controller-manager-f7b8bdcc6-pdzgn" Jan 27 08:56:59 crc kubenswrapper[4985]: I0127 08:56:59.509641 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88dc63eb-4862-4600-9621-7931835c8091-serving-cert\") pod \"route-controller-manager-f7b8bdcc6-pdzgn\" (UID: \"88dc63eb-4862-4600-9621-7931835c8091\") " pod="openshift-route-controller-manager/route-controller-manager-f7b8bdcc6-pdzgn" Jan 27 08:56:59 crc kubenswrapper[4985]: W0127 08:56:59.510847 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod109d6cbe_0671_4faf_a4a6_e3618a17fe01.slice/crio-483a0d4b0936d3bbf8d72946e6237462061e62824bc5041e265408a11ab7c6cc WatchSource:0}: Error finding container 483a0d4b0936d3bbf8d72946e6237462061e62824bc5041e265408a11ab7c6cc: Status 404 returned error can't find the container with id 483a0d4b0936d3bbf8d72946e6237462061e62824bc5041e265408a11ab7c6cc Jan 27 08:56:59 crc kubenswrapper[4985]: I0127 08:56:59.526980 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-5mwc8\" (UniqueName: \"kubernetes.io/projected/88dc63eb-4862-4600-9621-7931835c8091-kube-api-access-5mwc8\") pod \"route-controller-manager-f7b8bdcc6-pdzgn\" (UID: \"88dc63eb-4862-4600-9621-7931835c8091\") " pod="openshift-route-controller-manager/route-controller-manager-f7b8bdcc6-pdzgn" Jan 27 08:56:59 crc kubenswrapper[4985]: E0127 08:56:59.571331 4985 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 27 08:56:59 crc kubenswrapper[4985]: E0127 08:56:59.571556 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8t8r5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil
,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-q7trj_openshift-marketplace(da9958bf-bf1b-4894-96a8-18b5b9fa3d46): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 08:56:59 crc kubenswrapper[4985]: E0127 08:56:59.573194 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-q7trj" podUID="da9958bf-bf1b-4894-96a8-18b5b9fa3d46" Jan 27 08:56:59 crc kubenswrapper[4985]: I0127 08:56:59.579310 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f7b8bdcc6-pdzgn" Jan 27 08:56:59 crc kubenswrapper[4985]: I0127 08:56:59.669358 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jjksr" event={"ID":"c7eca83d-b3cb-484f-9e20-f04ceedd8c99","Type":"ContainerDied","Data":"58eb2ff1a443a9d307b9a69eab07277dcbdd53a36e623c0710ed9219a98c7e16"} Jan 27 08:56:59 crc kubenswrapper[4985]: I0127 08:56:59.669500 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jjksr" Jan 27 08:56:59 crc kubenswrapper[4985]: I0127 08:56:59.670685 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"109d6cbe-0671-4faf-a4a6-e3618a17fe01","Type":"ContainerStarted","Data":"483a0d4b0936d3bbf8d72946e6237462061e62824bc5041e265408a11ab7c6cc"} Jan 27 08:56:59 crc kubenswrapper[4985]: I0127 08:56:59.669946 4985 scope.go:117] "RemoveContainer" containerID="9921fadc44fd47ef950dca6f4f1496066074e9fb66b2e127760b82c662fb52f9" Jan 27 08:56:59 crc kubenswrapper[4985]: I0127 08:56:59.672348 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-x4cs4" Jan 27 08:56:59 crc kubenswrapper[4985]: I0127 08:56:59.672632 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-x4cs4" event={"ID":"72fd06a7-765f-4f95-89f1-3bd8a0fa466b","Type":"ContainerDied","Data":"686b436c818b13d055111b8b8964602223193fd08abb34f11752c6411b16066a"} Jan 27 08:56:59 crc kubenswrapper[4985]: I0127 08:56:59.684739 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-cscdv" event={"ID":"5c870945-eecc-4954-a91b-d02cef8f98e2","Type":"ContainerStarted","Data":"46c37c82420708afebda809b73a76ea81479a4baea33aa32be8fcddf485db9e2"} Jan 27 08:56:59 crc kubenswrapper[4985]: E0127 08:56:59.688543 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-q7trj" podUID="da9958bf-bf1b-4894-96a8-18b5b9fa3d46" Jan 27 08:56:59 crc kubenswrapper[4985]: E0127 08:56:59.688755 4985 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-hwclt" podUID="ed57e787-5d65-4c3c-8a0f-f693481928ae" Jan 27 08:56:59 crc kubenswrapper[4985]: E0127 08:56:59.688842 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7k25x" podUID="7aadedcd-5a47-4d8d-a41d-e33a7a760331" Jan 27 08:56:59 crc kubenswrapper[4985]: I0127 08:56:59.710264 4985 scope.go:117] "RemoveContainer" containerID="cc6bfb4233eb7543fe122604793466cb7adc42820d39984d7edde7ce33c74e06" Jan 27 08:56:59 crc kubenswrapper[4985]: I0127 08:56:59.781554 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-x4cs4"] Jan 27 08:56:59 crc kubenswrapper[4985]: I0127 08:56:59.784359 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-x4cs4"] Jan 27 08:56:59 crc kubenswrapper[4985]: I0127 08:56:59.795761 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-jjksr"] Jan 27 08:56:59 crc kubenswrapper[4985]: I0127 08:56:59.801572 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-jjksr"] Jan 27 08:57:00 crc kubenswrapper[4985]: I0127 08:57:00.047397 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f7b8bdcc6-pdzgn"] Jan 27 08:57:00 crc kubenswrapper[4985]: I0127 08:57:00.460725 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72fd06a7-765f-4f95-89f1-3bd8a0fa466b" 
path="/var/lib/kubelet/pods/72fd06a7-765f-4f95-89f1-3bd8a0fa466b/volumes" Jan 27 08:57:00 crc kubenswrapper[4985]: I0127 08:57:00.461921 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7eca83d-b3cb-484f-9e20-f04ceedd8c99" path="/var/lib/kubelet/pods/c7eca83d-b3cb-484f-9e20-f04ceedd8c99/volumes" Jan 27 08:57:00 crc kubenswrapper[4985]: I0127 08:57:00.694639 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"109d6cbe-0671-4faf-a4a6-e3618a17fe01","Type":"ContainerStarted","Data":"dd950605b1a85e79dedebea3932965ed88f643795ab514b47301f4e6f0db7860"} Jan 27 08:57:00 crc kubenswrapper[4985]: I0127 08:57:00.697194 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-cscdv" event={"ID":"5c870945-eecc-4954-a91b-d02cef8f98e2","Type":"ContainerStarted","Data":"5c91f530f8f4995b6287c2185b4f6dcbfc66ce87087c1e762de0757a76cf6044"} Jan 27 08:57:00 crc kubenswrapper[4985]: I0127 08:57:00.697256 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-cscdv" event={"ID":"5c870945-eecc-4954-a91b-d02cef8f98e2","Type":"ContainerStarted","Data":"7e8b5edd35b66ad1c97f5790fa0b9e30e3b622943c49fc14dda992a093928eca"} Jan 27 08:57:00 crc kubenswrapper[4985]: I0127 08:57:00.702646 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f7b8bdcc6-pdzgn" event={"ID":"88dc63eb-4862-4600-9621-7931835c8091","Type":"ContainerStarted","Data":"34cd7102abd95cb347717401e8740f14e4f809ddcd000a39734af5af719096c2"} Jan 27 08:57:00 crc kubenswrapper[4985]: I0127 08:57:00.702708 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f7b8bdcc6-pdzgn" event={"ID":"88dc63eb-4862-4600-9621-7931835c8091","Type":"ContainerStarted","Data":"cc9c861f5f5bd3a2a6ffa248bba476f663367bb679f1fa79c9d9fe7d3aafcee8"} Jan 
27 08:57:00 crc kubenswrapper[4985]: I0127 08:57:00.702945 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-f7b8bdcc6-pdzgn" Jan 27 08:57:00 crc kubenswrapper[4985]: I0127 08:57:00.718921 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=2.718885925 podStartE2EDuration="2.718885925s" podCreationTimestamp="2026-01-27 08:56:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:57:00.715013517 +0000 UTC m=+205.006108368" watchObservedRunningTime="2026-01-27 08:57:00.718885925 +0000 UTC m=+205.009980756" Jan 27 08:57:00 crc kubenswrapper[4985]: I0127 08:57:00.735565 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-cscdv" podStartSLOduration=178.735542746 podStartE2EDuration="2m58.735542746s" podCreationTimestamp="2026-01-27 08:54:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:57:00.73313883 +0000 UTC m=+205.024233681" watchObservedRunningTime="2026-01-27 08:57:00.735542746 +0000 UTC m=+205.026637587" Jan 27 08:57:00 crc kubenswrapper[4985]: I0127 08:57:00.756959 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-f7b8bdcc6-pdzgn" podStartSLOduration=7.7569342599999995 podStartE2EDuration="7.75693426s" podCreationTimestamp="2026-01-27 08:56:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:57:00.75547706 +0000 UTC m=+205.046571911" watchObservedRunningTime="2026-01-27 08:57:00.75693426 +0000 UTC m=+205.048029101" Jan 27 08:57:00 crc 
kubenswrapper[4985]: I0127 08:57:00.809610 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-f7b8bdcc6-pdzgn" Jan 27 08:57:01 crc kubenswrapper[4985]: I0127 08:57:01.749640 4985 generic.go:334] "Generic (PLEG): container finished" podID="109d6cbe-0671-4faf-a4a6-e3618a17fe01" containerID="dd950605b1a85e79dedebea3932965ed88f643795ab514b47301f4e6f0db7860" exitCode=0 Jan 27 08:57:01 crc kubenswrapper[4985]: I0127 08:57:01.750639 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"109d6cbe-0671-4faf-a4a6-e3618a17fe01","Type":"ContainerDied","Data":"dd950605b1a85e79dedebea3932965ed88f643795ab514b47301f4e6f0db7860"} Jan 27 08:57:02 crc kubenswrapper[4985]: I0127 08:57:02.165614 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-764cf4c647-r7h2t"] Jan 27 08:57:02 crc kubenswrapper[4985]: I0127 08:57:02.166543 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-764cf4c647-r7h2t" Jan 27 08:57:02 crc kubenswrapper[4985]: I0127 08:57:02.168631 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 27 08:57:02 crc kubenswrapper[4985]: I0127 08:57:02.169604 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 27 08:57:02 crc kubenswrapper[4985]: I0127 08:57:02.169629 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 27 08:57:02 crc kubenswrapper[4985]: I0127 08:57:02.169638 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 27 08:57:02 crc kubenswrapper[4985]: I0127 08:57:02.173122 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 27 08:57:02 crc kubenswrapper[4985]: I0127 08:57:02.173399 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 27 08:57:02 crc kubenswrapper[4985]: I0127 08:57:02.178405 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 27 08:57:02 crc kubenswrapper[4985]: I0127 08:57:02.180038 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-764cf4c647-r7h2t"] Jan 27 08:57:02 crc kubenswrapper[4985]: I0127 08:57:02.247034 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db494a9f-fd93-49eb-8d22-96279991db94-config\") pod \"controller-manager-764cf4c647-r7h2t\" (UID: \"db494a9f-fd93-49eb-8d22-96279991db94\") " 
pod="openshift-controller-manager/controller-manager-764cf4c647-r7h2t" Jan 27 08:57:02 crc kubenswrapper[4985]: I0127 08:57:02.247489 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/db494a9f-fd93-49eb-8d22-96279991db94-proxy-ca-bundles\") pod \"controller-manager-764cf4c647-r7h2t\" (UID: \"db494a9f-fd93-49eb-8d22-96279991db94\") " pod="openshift-controller-manager/controller-manager-764cf4c647-r7h2t" Jan 27 08:57:02 crc kubenswrapper[4985]: I0127 08:57:02.247702 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/db494a9f-fd93-49eb-8d22-96279991db94-client-ca\") pod \"controller-manager-764cf4c647-r7h2t\" (UID: \"db494a9f-fd93-49eb-8d22-96279991db94\") " pod="openshift-controller-manager/controller-manager-764cf4c647-r7h2t" Jan 27 08:57:02 crc kubenswrapper[4985]: I0127 08:57:02.247839 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p447t\" (UniqueName: \"kubernetes.io/projected/db494a9f-fd93-49eb-8d22-96279991db94-kube-api-access-p447t\") pod \"controller-manager-764cf4c647-r7h2t\" (UID: \"db494a9f-fd93-49eb-8d22-96279991db94\") " pod="openshift-controller-manager/controller-manager-764cf4c647-r7h2t" Jan 27 08:57:02 crc kubenswrapper[4985]: I0127 08:57:02.248005 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db494a9f-fd93-49eb-8d22-96279991db94-serving-cert\") pod \"controller-manager-764cf4c647-r7h2t\" (UID: \"db494a9f-fd93-49eb-8d22-96279991db94\") " pod="openshift-controller-manager/controller-manager-764cf4c647-r7h2t" Jan 27 08:57:02 crc kubenswrapper[4985]: I0127 08:57:02.348919 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/db494a9f-fd93-49eb-8d22-96279991db94-serving-cert\") pod \"controller-manager-764cf4c647-r7h2t\" (UID: \"db494a9f-fd93-49eb-8d22-96279991db94\") " pod="openshift-controller-manager/controller-manager-764cf4c647-r7h2t" Jan 27 08:57:02 crc kubenswrapper[4985]: I0127 08:57:02.349003 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db494a9f-fd93-49eb-8d22-96279991db94-config\") pod \"controller-manager-764cf4c647-r7h2t\" (UID: \"db494a9f-fd93-49eb-8d22-96279991db94\") " pod="openshift-controller-manager/controller-manager-764cf4c647-r7h2t" Jan 27 08:57:02 crc kubenswrapper[4985]: I0127 08:57:02.349029 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/db494a9f-fd93-49eb-8d22-96279991db94-proxy-ca-bundles\") pod \"controller-manager-764cf4c647-r7h2t\" (UID: \"db494a9f-fd93-49eb-8d22-96279991db94\") " pod="openshift-controller-manager/controller-manager-764cf4c647-r7h2t" Jan 27 08:57:02 crc kubenswrapper[4985]: I0127 08:57:02.349053 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/db494a9f-fd93-49eb-8d22-96279991db94-client-ca\") pod \"controller-manager-764cf4c647-r7h2t\" (UID: \"db494a9f-fd93-49eb-8d22-96279991db94\") " pod="openshift-controller-manager/controller-manager-764cf4c647-r7h2t" Jan 27 08:57:02 crc kubenswrapper[4985]: I0127 08:57:02.349074 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p447t\" (UniqueName: \"kubernetes.io/projected/db494a9f-fd93-49eb-8d22-96279991db94-kube-api-access-p447t\") pod \"controller-manager-764cf4c647-r7h2t\" (UID: \"db494a9f-fd93-49eb-8d22-96279991db94\") " pod="openshift-controller-manager/controller-manager-764cf4c647-r7h2t" Jan 27 08:57:02 crc kubenswrapper[4985]: I0127 08:57:02.350917 
4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/db494a9f-fd93-49eb-8d22-96279991db94-client-ca\") pod \"controller-manager-764cf4c647-r7h2t\" (UID: \"db494a9f-fd93-49eb-8d22-96279991db94\") " pod="openshift-controller-manager/controller-manager-764cf4c647-r7h2t" Jan 27 08:57:02 crc kubenswrapper[4985]: I0127 08:57:02.351156 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db494a9f-fd93-49eb-8d22-96279991db94-config\") pod \"controller-manager-764cf4c647-r7h2t\" (UID: \"db494a9f-fd93-49eb-8d22-96279991db94\") " pod="openshift-controller-manager/controller-manager-764cf4c647-r7h2t" Jan 27 08:57:02 crc kubenswrapper[4985]: I0127 08:57:02.351506 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/db494a9f-fd93-49eb-8d22-96279991db94-proxy-ca-bundles\") pod \"controller-manager-764cf4c647-r7h2t\" (UID: \"db494a9f-fd93-49eb-8d22-96279991db94\") " pod="openshift-controller-manager/controller-manager-764cf4c647-r7h2t" Jan 27 08:57:02 crc kubenswrapper[4985]: I0127 08:57:02.356316 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db494a9f-fd93-49eb-8d22-96279991db94-serving-cert\") pod \"controller-manager-764cf4c647-r7h2t\" (UID: \"db494a9f-fd93-49eb-8d22-96279991db94\") " pod="openshift-controller-manager/controller-manager-764cf4c647-r7h2t" Jan 27 08:57:02 crc kubenswrapper[4985]: I0127 08:57:02.366628 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p447t\" (UniqueName: \"kubernetes.io/projected/db494a9f-fd93-49eb-8d22-96279991db94-kube-api-access-p447t\") pod \"controller-manager-764cf4c647-r7h2t\" (UID: \"db494a9f-fd93-49eb-8d22-96279991db94\") " pod="openshift-controller-manager/controller-manager-764cf4c647-r7h2t" Jan 27 
08:57:02 crc kubenswrapper[4985]: I0127 08:57:02.488051 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-764cf4c647-r7h2t" Jan 27 08:57:02 crc kubenswrapper[4985]: I0127 08:57:02.712984 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-764cf4c647-r7h2t"] Jan 27 08:57:02 crc kubenswrapper[4985]: I0127 08:57:02.756271 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-764cf4c647-r7h2t" event={"ID":"db494a9f-fd93-49eb-8d22-96279991db94","Type":"ContainerStarted","Data":"b63aad30d3d84b5818f27181e51df13513a350df44494e74335ee98ff8046b54"} Jan 27 08:57:02 crc kubenswrapper[4985]: I0127 08:57:02.934645 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 08:57:03 crc kubenswrapper[4985]: I0127 08:57:03.062832 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/109d6cbe-0671-4faf-a4a6-e3618a17fe01-kube-api-access\") pod \"109d6cbe-0671-4faf-a4a6-e3618a17fe01\" (UID: \"109d6cbe-0671-4faf-a4a6-e3618a17fe01\") " Jan 27 08:57:03 crc kubenswrapper[4985]: I0127 08:57:03.062924 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/109d6cbe-0671-4faf-a4a6-e3618a17fe01-kubelet-dir\") pod \"109d6cbe-0671-4faf-a4a6-e3618a17fe01\" (UID: \"109d6cbe-0671-4faf-a4a6-e3618a17fe01\") " Jan 27 08:57:03 crc kubenswrapper[4985]: I0127 08:57:03.063157 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/109d6cbe-0671-4faf-a4a6-e3618a17fe01-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "109d6cbe-0671-4faf-a4a6-e3618a17fe01" (UID: "109d6cbe-0671-4faf-a4a6-e3618a17fe01"). 
InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 08:57:03 crc kubenswrapper[4985]: I0127 08:57:03.063399 4985 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/109d6cbe-0671-4faf-a4a6-e3618a17fe01-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 27 08:57:03 crc kubenswrapper[4985]: I0127 08:57:03.075950 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/109d6cbe-0671-4faf-a4a6-e3618a17fe01-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "109d6cbe-0671-4faf-a4a6-e3618a17fe01" (UID: "109d6cbe-0671-4faf-a4a6-e3618a17fe01"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:57:03 crc kubenswrapper[4985]: I0127 08:57:03.164634 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/109d6cbe-0671-4faf-a4a6-e3618a17fe01-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 08:57:03 crc kubenswrapper[4985]: I0127 08:57:03.765260 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-764cf4c647-r7h2t" event={"ID":"db494a9f-fd93-49eb-8d22-96279991db94","Type":"ContainerStarted","Data":"0a883e6d45b711f471211d249b4e931df89082a1d1a7bc29a18845e424e7a7a5"} Jan 27 08:57:03 crc kubenswrapper[4985]: I0127 08:57:03.765681 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-764cf4c647-r7h2t" Jan 27 08:57:03 crc kubenswrapper[4985]: I0127 08:57:03.767558 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"109d6cbe-0671-4faf-a4a6-e3618a17fe01","Type":"ContainerDied","Data":"483a0d4b0936d3bbf8d72946e6237462061e62824bc5041e265408a11ab7c6cc"} Jan 27 08:57:03 crc kubenswrapper[4985]: I0127 08:57:03.767590 4985 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="483a0d4b0936d3bbf8d72946e6237462061e62824bc5041e265408a11ab7c6cc" Jan 27 08:57:03 crc kubenswrapper[4985]: I0127 08:57:03.767673 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 08:57:03 crc kubenswrapper[4985]: I0127 08:57:03.771694 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-764cf4c647-r7h2t" Jan 27 08:57:03 crc kubenswrapper[4985]: I0127 08:57:03.791198 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-764cf4c647-r7h2t" podStartSLOduration=10.791125327 podStartE2EDuration="10.791125327s" podCreationTimestamp="2026-01-27 08:56:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:57:03.788990157 +0000 UTC m=+208.080085018" watchObservedRunningTime="2026-01-27 08:57:03.791125327 +0000 UTC m=+208.082220178" Jan 27 08:57:04 crc kubenswrapper[4985]: I0127 08:57:04.911884 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 27 08:57:04 crc kubenswrapper[4985]: E0127 08:57:04.912502 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="109d6cbe-0671-4faf-a4a6-e3618a17fe01" containerName="pruner" Jan 27 08:57:04 crc kubenswrapper[4985]: I0127 08:57:04.912536 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="109d6cbe-0671-4faf-a4a6-e3618a17fe01" containerName="pruner" Jan 27 08:57:04 crc kubenswrapper[4985]: I0127 08:57:04.912665 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="109d6cbe-0671-4faf-a4a6-e3618a17fe01" containerName="pruner" Jan 27 08:57:04 crc kubenswrapper[4985]: I0127 08:57:04.913106 4985 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 27 08:57:04 crc kubenswrapper[4985]: I0127 08:57:04.916864 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 27 08:57:04 crc kubenswrapper[4985]: I0127 08:57:04.917590 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 27 08:57:04 crc kubenswrapper[4985]: I0127 08:57:04.925506 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 27 08:57:04 crc kubenswrapper[4985]: I0127 08:57:04.994413 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/70566b0c-fabd-4c21-bf39-f772dee30b6a-var-lock\") pod \"installer-9-crc\" (UID: \"70566b0c-fabd-4c21-bf39-f772dee30b6a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 08:57:04 crc kubenswrapper[4985]: I0127 08:57:04.994584 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/70566b0c-fabd-4c21-bf39-f772dee30b6a-kubelet-dir\") pod \"installer-9-crc\" (UID: \"70566b0c-fabd-4c21-bf39-f772dee30b6a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 08:57:04 crc kubenswrapper[4985]: I0127 08:57:04.994673 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/70566b0c-fabd-4c21-bf39-f772dee30b6a-kube-api-access\") pod \"installer-9-crc\" (UID: \"70566b0c-fabd-4c21-bf39-f772dee30b6a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 08:57:05 crc kubenswrapper[4985]: I0127 08:57:05.096639 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/70566b0c-fabd-4c21-bf39-f772dee30b6a-kube-api-access\") pod \"installer-9-crc\" (UID: \"70566b0c-fabd-4c21-bf39-f772dee30b6a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 08:57:05 crc kubenswrapper[4985]: I0127 08:57:05.096724 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/70566b0c-fabd-4c21-bf39-f772dee30b6a-var-lock\") pod \"installer-9-crc\" (UID: \"70566b0c-fabd-4c21-bf39-f772dee30b6a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 08:57:05 crc kubenswrapper[4985]: I0127 08:57:05.096808 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/70566b0c-fabd-4c21-bf39-f772dee30b6a-kubelet-dir\") pod \"installer-9-crc\" (UID: \"70566b0c-fabd-4c21-bf39-f772dee30b6a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 08:57:05 crc kubenswrapper[4985]: I0127 08:57:05.096892 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/70566b0c-fabd-4c21-bf39-f772dee30b6a-kubelet-dir\") pod \"installer-9-crc\" (UID: \"70566b0c-fabd-4c21-bf39-f772dee30b6a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 08:57:05 crc kubenswrapper[4985]: I0127 08:57:05.096996 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/70566b0c-fabd-4c21-bf39-f772dee30b6a-var-lock\") pod \"installer-9-crc\" (UID: \"70566b0c-fabd-4c21-bf39-f772dee30b6a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 08:57:05 crc kubenswrapper[4985]: I0127 08:57:05.119975 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/70566b0c-fabd-4c21-bf39-f772dee30b6a-kube-api-access\") pod \"installer-9-crc\" (UID: \"70566b0c-fabd-4c21-bf39-f772dee30b6a\") " 
pod="openshift-kube-apiserver/installer-9-crc" Jan 27 08:57:05 crc kubenswrapper[4985]: I0127 08:57:05.241795 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 27 08:57:05 crc kubenswrapper[4985]: I0127 08:57:05.715663 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 27 08:57:05 crc kubenswrapper[4985]: W0127 08:57:05.723858 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod70566b0c_fabd_4c21_bf39_f772dee30b6a.slice/crio-74da2d8709f236a9637904f5bc65c5bc5f09279e5afdb0636cb5ff3b5766014a WatchSource:0}: Error finding container 74da2d8709f236a9637904f5bc65c5bc5f09279e5afdb0636cb5ff3b5766014a: Status 404 returned error can't find the container with id 74da2d8709f236a9637904f5bc65c5bc5f09279e5afdb0636cb5ff3b5766014a Jan 27 08:57:05 crc kubenswrapper[4985]: I0127 08:57:05.787733 4985 generic.go:334] "Generic (PLEG): container finished" podID="c4ea35ca-a06c-40d2-86c2-d2c0a99da089" containerID="45acbdc1f00d1ea0e6723d7b0a657807eaab94575ab31eb35b86bc38aef4eeda" exitCode=0 Jan 27 08:57:05 crc kubenswrapper[4985]: I0127 08:57:05.787836 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lwlwv" event={"ID":"c4ea35ca-a06c-40d2-86c2-d2c0a99da089","Type":"ContainerDied","Data":"45acbdc1f00d1ea0e6723d7b0a657807eaab94575ab31eb35b86bc38aef4eeda"} Jan 27 08:57:05 crc kubenswrapper[4985]: I0127 08:57:05.790982 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"70566b0c-fabd-4c21-bf39-f772dee30b6a","Type":"ContainerStarted","Data":"74da2d8709f236a9637904f5bc65c5bc5f09279e5afdb0636cb5ff3b5766014a"} Jan 27 08:57:06 crc kubenswrapper[4985]: I0127 08:57:06.800883 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lwlwv" 
event={"ID":"c4ea35ca-a06c-40d2-86c2-d2c0a99da089","Type":"ContainerStarted","Data":"91ecd2a2c2c600d35cf35173a4ed34cbc61fd738dec6ba7c7dcddb2fcda93bec"} Jan 27 08:57:06 crc kubenswrapper[4985]: I0127 08:57:06.803131 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"70566b0c-fabd-4c21-bf39-f772dee30b6a","Type":"ContainerStarted","Data":"3c16c5009496f35daacaf7d1d82d1e906f0f258812ca83eff9aefeba7868b129"} Jan 27 08:57:06 crc kubenswrapper[4985]: I0127 08:57:06.822527 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lwlwv" podStartSLOduration=2.476342496 podStartE2EDuration="49.822489976s" podCreationTimestamp="2026-01-27 08:56:17 +0000 UTC" firstStartedPulling="2026-01-27 08:56:19.096144759 +0000 UTC m=+163.387239600" lastFinishedPulling="2026-01-27 08:57:06.442292239 +0000 UTC m=+210.733387080" observedRunningTime="2026-01-27 08:57:06.81939541 +0000 UTC m=+211.110490251" watchObservedRunningTime="2026-01-27 08:57:06.822489976 +0000 UTC m=+211.113584817" Jan 27 08:57:06 crc kubenswrapper[4985]: I0127 08:57:06.838030 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.838004137 podStartE2EDuration="2.838004137s" podCreationTimestamp="2026-01-27 08:57:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:57:06.835144347 +0000 UTC m=+211.126239208" watchObservedRunningTime="2026-01-27 08:57:06.838004137 +0000 UTC m=+211.129098978" Jan 27 08:57:07 crc kubenswrapper[4985]: I0127 08:57:07.582900 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-t4tc7"] Jan 27 08:57:07 crc kubenswrapper[4985]: I0127 08:57:07.811662 4985 generic.go:334] "Generic (PLEG): container finished" 
podID="fa17d66c-2d07-4ce5-bfc8-45bb31adf066" containerID="6c8312ae5ac6c2bca38761a8e6a5b39d6bfbc855d2f4a33419a9d035931795f0" exitCode=0 Jan 27 08:57:07 crc kubenswrapper[4985]: I0127 08:57:07.812589 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bh6j9" event={"ID":"fa17d66c-2d07-4ce5-bfc8-45bb31adf066","Type":"ContainerDied","Data":"6c8312ae5ac6c2bca38761a8e6a5b39d6bfbc855d2f4a33419a9d035931795f0"} Jan 27 08:57:08 crc kubenswrapper[4985]: I0127 08:57:08.000664 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lwlwv" Jan 27 08:57:08 crc kubenswrapper[4985]: I0127 08:57:08.000799 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lwlwv" Jan 27 08:57:08 crc kubenswrapper[4985]: I0127 08:57:08.821467 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bh6j9" event={"ID":"fa17d66c-2d07-4ce5-bfc8-45bb31adf066","Type":"ContainerStarted","Data":"bf9575c47301b9325609a9caaa7a6519be9aa125da48a80c9e7c8dab46fe8b89"} Jan 27 08:57:08 crc kubenswrapper[4985]: I0127 08:57:08.845206 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bh6j9" podStartSLOduration=3.569819722 podStartE2EDuration="51.845184792s" podCreationTimestamp="2026-01-27 08:56:17 +0000 UTC" firstStartedPulling="2026-01-27 08:56:20.158704767 +0000 UTC m=+164.449799608" lastFinishedPulling="2026-01-27 08:57:08.434069837 +0000 UTC m=+212.725164678" observedRunningTime="2026-01-27 08:57:08.843772893 +0000 UTC m=+213.134867734" watchObservedRunningTime="2026-01-27 08:57:08.845184792 +0000 UTC m=+213.136279633" Jan 27 08:57:09 crc kubenswrapper[4985]: I0127 08:57:09.129866 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-lwlwv" 
podUID="c4ea35ca-a06c-40d2-86c2-d2c0a99da089" containerName="registry-server" probeResult="failure" output=< Jan 27 08:57:09 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 27 08:57:09 crc kubenswrapper[4985]: > Jan 27 08:57:11 crc kubenswrapper[4985]: I0127 08:57:11.829342 4985 patch_prober.go:28] interesting pod/machine-config-daemon-lp9n5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 08:57:11 crc kubenswrapper[4985]: I0127 08:57:11.830009 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 08:57:11 crc kubenswrapper[4985]: I0127 08:57:11.830090 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" Jan 27 08:57:11 crc kubenswrapper[4985]: I0127 08:57:11.831014 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4d6574971ded4a24364f08396c4f0523ddfd74f76d0bb386072ffb86c812d7da"} pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 08:57:11 crc kubenswrapper[4985]: I0127 08:57:11.831137 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" containerName="machine-config-daemon" 
containerID="cri-o://4d6574971ded4a24364f08396c4f0523ddfd74f76d0bb386072ffb86c812d7da" gracePeriod=600 Jan 27 08:57:11 crc kubenswrapper[4985]: I0127 08:57:11.839564 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q87vc" event={"ID":"bd8d30fe-7369-4ea0-830d-b8fffca6bd10","Type":"ContainerStarted","Data":"311dddc869065d293fd5de529d0940ca325fd616b8c5447d8b613d6b659a25bf"} Jan 27 08:57:12 crc kubenswrapper[4985]: I0127 08:57:12.848492 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7k25x" event={"ID":"7aadedcd-5a47-4d8d-a41d-e33a7a760331","Type":"ContainerStarted","Data":"95691700c45ca7207fd70f0968c76a3707a5b163932c54d4edb75b443a3229b7"} Jan 27 08:57:12 crc kubenswrapper[4985]: I0127 08:57:12.852728 4985 generic.go:334] "Generic (PLEG): container finished" podID="c066dd2f-48d4-4f4f-935d-0e772678e610" containerID="4d6574971ded4a24364f08396c4f0523ddfd74f76d0bb386072ffb86c812d7da" exitCode=0 Jan 27 08:57:12 crc kubenswrapper[4985]: I0127 08:57:12.852829 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" event={"ID":"c066dd2f-48d4-4f4f-935d-0e772678e610","Type":"ContainerDied","Data":"4d6574971ded4a24364f08396c4f0523ddfd74f76d0bb386072ffb86c812d7da"} Jan 27 08:57:12 crc kubenswrapper[4985]: I0127 08:57:12.852922 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" event={"ID":"c066dd2f-48d4-4f4f-935d-0e772678e610","Type":"ContainerStarted","Data":"57cfb1638f01041a813f2c95dc8e63d84098f3d36598d8ee4094d6434a454c0f"} Jan 27 08:57:12 crc kubenswrapper[4985]: I0127 08:57:12.855578 4985 generic.go:334] "Generic (PLEG): container finished" podID="bd8d30fe-7369-4ea0-830d-b8fffca6bd10" containerID="311dddc869065d293fd5de529d0940ca325fd616b8c5447d8b613d6b659a25bf" exitCode=0 Jan 27 08:57:12 crc kubenswrapper[4985]: I0127 08:57:12.855642 
4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q87vc" event={"ID":"bd8d30fe-7369-4ea0-830d-b8fffca6bd10","Type":"ContainerDied","Data":"311dddc869065d293fd5de529d0940ca325fd616b8c5447d8b613d6b659a25bf"} Jan 27 08:57:13 crc kubenswrapper[4985]: I0127 08:57:13.882563 4985 generic.go:334] "Generic (PLEG): container finished" podID="7aadedcd-5a47-4d8d-a41d-e33a7a760331" containerID="95691700c45ca7207fd70f0968c76a3707a5b163932c54d4edb75b443a3229b7" exitCode=0 Jan 27 08:57:13 crc kubenswrapper[4985]: I0127 08:57:13.883168 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7k25x" event={"ID":"7aadedcd-5a47-4d8d-a41d-e33a7a760331","Type":"ContainerDied","Data":"95691700c45ca7207fd70f0968c76a3707a5b163932c54d4edb75b443a3229b7"} Jan 27 08:57:13 crc kubenswrapper[4985]: I0127 08:57:13.892544 4985 generic.go:334] "Generic (PLEG): container finished" podID="6b2d7f94-92b7-4593-8496-31db09afdf39" containerID="c40fb8f56e1f651727497235590ab52179b733caa8e80f33420054ea646288de" exitCode=0 Jan 27 08:57:13 crc kubenswrapper[4985]: I0127 08:57:13.892730 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m49st" event={"ID":"6b2d7f94-92b7-4593-8496-31db09afdf39","Type":"ContainerDied","Data":"c40fb8f56e1f651727497235590ab52179b733caa8e80f33420054ea646288de"} Jan 27 08:57:13 crc kubenswrapper[4985]: I0127 08:57:13.911182 4985 generic.go:334] "Generic (PLEG): container finished" podID="e143ff56-0606-4500-bac1-21d0d3f607ee" containerID="2ee2a45a10d2c59bcaebd52d6f4303a077a4d6c34f764594259a776d847d1984" exitCode=0 Jan 27 08:57:13 crc kubenswrapper[4985]: I0127 08:57:13.911249 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f2gdx" event={"ID":"e143ff56-0606-4500-bac1-21d0d3f607ee","Type":"ContainerDied","Data":"2ee2a45a10d2c59bcaebd52d6f4303a077a4d6c34f764594259a776d847d1984"} Jan 27 
08:57:13 crc kubenswrapper[4985]: I0127 08:57:13.915368 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-764cf4c647-r7h2t"] Jan 27 08:57:13 crc kubenswrapper[4985]: I0127 08:57:13.915700 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-764cf4c647-r7h2t" podUID="db494a9f-fd93-49eb-8d22-96279991db94" containerName="controller-manager" containerID="cri-o://0a883e6d45b711f471211d249b4e931df89082a1d1a7bc29a18845e424e7a7a5" gracePeriod=30 Jan 27 08:57:13 crc kubenswrapper[4985]: I0127 08:57:13.953734 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f7b8bdcc6-pdzgn"] Jan 27 08:57:13 crc kubenswrapper[4985]: I0127 08:57:13.954163 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-f7b8bdcc6-pdzgn" podUID="88dc63eb-4862-4600-9621-7931835c8091" containerName="route-controller-manager" containerID="cri-o://34cd7102abd95cb347717401e8740f14e4f809ddcd000a39734af5af719096c2" gracePeriod=30 Jan 27 08:57:14 crc kubenswrapper[4985]: I0127 08:57:14.575146 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f7b8bdcc6-pdzgn" Jan 27 08:57:14 crc kubenswrapper[4985]: I0127 08:57:14.586274 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-764cf4c647-r7h2t" Jan 27 08:57:14 crc kubenswrapper[4985]: I0127 08:57:14.642549 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/db494a9f-fd93-49eb-8d22-96279991db94-proxy-ca-bundles\") pod \"db494a9f-fd93-49eb-8d22-96279991db94\" (UID: \"db494a9f-fd93-49eb-8d22-96279991db94\") " Jan 27 08:57:14 crc kubenswrapper[4985]: I0127 08:57:14.642633 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/db494a9f-fd93-49eb-8d22-96279991db94-client-ca\") pod \"db494a9f-fd93-49eb-8d22-96279991db94\" (UID: \"db494a9f-fd93-49eb-8d22-96279991db94\") " Jan 27 08:57:14 crc kubenswrapper[4985]: I0127 08:57:14.642715 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5mwc8\" (UniqueName: \"kubernetes.io/projected/88dc63eb-4862-4600-9621-7931835c8091-kube-api-access-5mwc8\") pod \"88dc63eb-4862-4600-9621-7931835c8091\" (UID: \"88dc63eb-4862-4600-9621-7931835c8091\") " Jan 27 08:57:14 crc kubenswrapper[4985]: I0127 08:57:14.642771 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88dc63eb-4862-4600-9621-7931835c8091-serving-cert\") pod \"88dc63eb-4862-4600-9621-7931835c8091\" (UID: \"88dc63eb-4862-4600-9621-7931835c8091\") " Jan 27 08:57:14 crc kubenswrapper[4985]: I0127 08:57:14.642803 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88dc63eb-4862-4600-9621-7931835c8091-config\") pod \"88dc63eb-4862-4600-9621-7931835c8091\" (UID: \"88dc63eb-4862-4600-9621-7931835c8091\") " Jan 27 08:57:14 crc kubenswrapper[4985]: I0127 08:57:14.642864 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-p447t\" (UniqueName: \"kubernetes.io/projected/db494a9f-fd93-49eb-8d22-96279991db94-kube-api-access-p447t\") pod \"db494a9f-fd93-49eb-8d22-96279991db94\" (UID: \"db494a9f-fd93-49eb-8d22-96279991db94\") " Jan 27 08:57:14 crc kubenswrapper[4985]: I0127 08:57:14.642916 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db494a9f-fd93-49eb-8d22-96279991db94-config\") pod \"db494a9f-fd93-49eb-8d22-96279991db94\" (UID: \"db494a9f-fd93-49eb-8d22-96279991db94\") " Jan 27 08:57:14 crc kubenswrapper[4985]: I0127 08:57:14.642994 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db494a9f-fd93-49eb-8d22-96279991db94-serving-cert\") pod \"db494a9f-fd93-49eb-8d22-96279991db94\" (UID: \"db494a9f-fd93-49eb-8d22-96279991db94\") " Jan 27 08:57:14 crc kubenswrapper[4985]: I0127 08:57:14.643025 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/88dc63eb-4862-4600-9621-7931835c8091-client-ca\") pod \"88dc63eb-4862-4600-9621-7931835c8091\" (UID: \"88dc63eb-4862-4600-9621-7931835c8091\") " Jan 27 08:57:14 crc kubenswrapper[4985]: I0127 08:57:14.643758 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db494a9f-fd93-49eb-8d22-96279991db94-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "db494a9f-fd93-49eb-8d22-96279991db94" (UID: "db494a9f-fd93-49eb-8d22-96279991db94"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:57:14 crc kubenswrapper[4985]: I0127 08:57:14.644491 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88dc63eb-4862-4600-9621-7931835c8091-config" (OuterVolumeSpecName: "config") pod "88dc63eb-4862-4600-9621-7931835c8091" (UID: "88dc63eb-4862-4600-9621-7931835c8091"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:57:14 crc kubenswrapper[4985]: I0127 08:57:14.646088 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db494a9f-fd93-49eb-8d22-96279991db94-client-ca" (OuterVolumeSpecName: "client-ca") pod "db494a9f-fd93-49eb-8d22-96279991db94" (UID: "db494a9f-fd93-49eb-8d22-96279991db94"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:57:14 crc kubenswrapper[4985]: I0127 08:57:14.646269 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88dc63eb-4862-4600-9621-7931835c8091-client-ca" (OuterVolumeSpecName: "client-ca") pod "88dc63eb-4862-4600-9621-7931835c8091" (UID: "88dc63eb-4862-4600-9621-7931835c8091"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:57:14 crc kubenswrapper[4985]: I0127 08:57:14.646424 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db494a9f-fd93-49eb-8d22-96279991db94-config" (OuterVolumeSpecName: "config") pod "db494a9f-fd93-49eb-8d22-96279991db94" (UID: "db494a9f-fd93-49eb-8d22-96279991db94"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:57:14 crc kubenswrapper[4985]: I0127 08:57:14.654705 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88dc63eb-4862-4600-9621-7931835c8091-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "88dc63eb-4862-4600-9621-7931835c8091" (UID: "88dc63eb-4862-4600-9621-7931835c8091"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:57:14 crc kubenswrapper[4985]: I0127 08:57:14.654741 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88dc63eb-4862-4600-9621-7931835c8091-kube-api-access-5mwc8" (OuterVolumeSpecName: "kube-api-access-5mwc8") pod "88dc63eb-4862-4600-9621-7931835c8091" (UID: "88dc63eb-4862-4600-9621-7931835c8091"). InnerVolumeSpecName "kube-api-access-5mwc8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:57:14 crc kubenswrapper[4985]: I0127 08:57:14.655312 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db494a9f-fd93-49eb-8d22-96279991db94-kube-api-access-p447t" (OuterVolumeSpecName: "kube-api-access-p447t") pod "db494a9f-fd93-49eb-8d22-96279991db94" (UID: "db494a9f-fd93-49eb-8d22-96279991db94"). InnerVolumeSpecName "kube-api-access-p447t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:57:14 crc kubenswrapper[4985]: I0127 08:57:14.691901 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db494a9f-fd93-49eb-8d22-96279991db94-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "db494a9f-fd93-49eb-8d22-96279991db94" (UID: "db494a9f-fd93-49eb-8d22-96279991db94"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:57:14 crc kubenswrapper[4985]: I0127 08:57:14.745048 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db494a9f-fd93-49eb-8d22-96279991db94-config\") on node \"crc\" DevicePath \"\"" Jan 27 08:57:14 crc kubenswrapper[4985]: I0127 08:57:14.745085 4985 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db494a9f-fd93-49eb-8d22-96279991db94-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 08:57:14 crc kubenswrapper[4985]: I0127 08:57:14.745096 4985 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/88dc63eb-4862-4600-9621-7931835c8091-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 08:57:14 crc kubenswrapper[4985]: I0127 08:57:14.745107 4985 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/db494a9f-fd93-49eb-8d22-96279991db94-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 08:57:14 crc kubenswrapper[4985]: I0127 08:57:14.745122 4985 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/db494a9f-fd93-49eb-8d22-96279991db94-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 08:57:14 crc kubenswrapper[4985]: I0127 08:57:14.745134 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5mwc8\" (UniqueName: \"kubernetes.io/projected/88dc63eb-4862-4600-9621-7931835c8091-kube-api-access-5mwc8\") on node \"crc\" DevicePath \"\"" Jan 27 08:57:14 crc kubenswrapper[4985]: I0127 08:57:14.745146 4985 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88dc63eb-4862-4600-9621-7931835c8091-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 08:57:14 crc kubenswrapper[4985]: I0127 08:57:14.745160 4985 reconciler_common.go:293] "Volume 
detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88dc63eb-4862-4600-9621-7931835c8091-config\") on node \"crc\" DevicePath \"\"" Jan 27 08:57:14 crc kubenswrapper[4985]: I0127 08:57:14.745171 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p447t\" (UniqueName: \"kubernetes.io/projected/db494a9f-fd93-49eb-8d22-96279991db94-kube-api-access-p447t\") on node \"crc\" DevicePath \"\"" Jan 27 08:57:14 crc kubenswrapper[4985]: I0127 08:57:14.918664 4985 generic.go:334] "Generic (PLEG): container finished" podID="db494a9f-fd93-49eb-8d22-96279991db94" containerID="0a883e6d45b711f471211d249b4e931df89082a1d1a7bc29a18845e424e7a7a5" exitCode=0 Jan 27 08:57:14 crc kubenswrapper[4985]: I0127 08:57:14.918753 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-764cf4c647-r7h2t" Jan 27 08:57:14 crc kubenswrapper[4985]: I0127 08:57:14.918766 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-764cf4c647-r7h2t" event={"ID":"db494a9f-fd93-49eb-8d22-96279991db94","Type":"ContainerDied","Data":"0a883e6d45b711f471211d249b4e931df89082a1d1a7bc29a18845e424e7a7a5"} Jan 27 08:57:14 crc kubenswrapper[4985]: I0127 08:57:14.918821 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-764cf4c647-r7h2t" event={"ID":"db494a9f-fd93-49eb-8d22-96279991db94","Type":"ContainerDied","Data":"b63aad30d3d84b5818f27181e51df13513a350df44494e74335ee98ff8046b54"} Jan 27 08:57:14 crc kubenswrapper[4985]: I0127 08:57:14.918846 4985 scope.go:117] "RemoveContainer" containerID="0a883e6d45b711f471211d249b4e931df89082a1d1a7bc29a18845e424e7a7a5" Jan 27 08:57:14 crc kubenswrapper[4985]: I0127 08:57:14.924963 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f2gdx" 
event={"ID":"e143ff56-0606-4500-bac1-21d0d3f607ee","Type":"ContainerStarted","Data":"92948f1acfb3b6a1d256305249a74289e639ad6119d214c615864cef0f9ef3c1"} Jan 27 08:57:14 crc kubenswrapper[4985]: I0127 08:57:14.934334 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q87vc" event={"ID":"bd8d30fe-7369-4ea0-830d-b8fffca6bd10","Type":"ContainerStarted","Data":"516c868f674fa45ce610eee09d9649a449ca746280fc0adc3ec307735aca66ac"} Jan 27 08:57:14 crc kubenswrapper[4985]: I0127 08:57:14.936873 4985 generic.go:334] "Generic (PLEG): container finished" podID="88dc63eb-4862-4600-9621-7931835c8091" containerID="34cd7102abd95cb347717401e8740f14e4f809ddcd000a39734af5af719096c2" exitCode=0 Jan 27 08:57:14 crc kubenswrapper[4985]: I0127 08:57:14.936964 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f7b8bdcc6-pdzgn" event={"ID":"88dc63eb-4862-4600-9621-7931835c8091","Type":"ContainerDied","Data":"34cd7102abd95cb347717401e8740f14e4f809ddcd000a39734af5af719096c2"} Jan 27 08:57:14 crc kubenswrapper[4985]: I0127 08:57:14.937000 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f7b8bdcc6-pdzgn" event={"ID":"88dc63eb-4862-4600-9621-7931835c8091","Type":"ContainerDied","Data":"cc9c861f5f5bd3a2a6ffa248bba476f663367bb679f1fa79c9d9fe7d3aafcee8"} Jan 27 08:57:14 crc kubenswrapper[4985]: I0127 08:57:14.937061 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f7b8bdcc6-pdzgn" Jan 27 08:57:14 crc kubenswrapper[4985]: I0127 08:57:14.941776 4985 scope.go:117] "RemoveContainer" containerID="0a883e6d45b711f471211d249b4e931df89082a1d1a7bc29a18845e424e7a7a5" Jan 27 08:57:14 crc kubenswrapper[4985]: E0127 08:57:14.942787 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a883e6d45b711f471211d249b4e931df89082a1d1a7bc29a18845e424e7a7a5\": container with ID starting with 0a883e6d45b711f471211d249b4e931df89082a1d1a7bc29a18845e424e7a7a5 not found: ID does not exist" containerID="0a883e6d45b711f471211d249b4e931df89082a1d1a7bc29a18845e424e7a7a5" Jan 27 08:57:14 crc kubenswrapper[4985]: I0127 08:57:14.942842 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a883e6d45b711f471211d249b4e931df89082a1d1a7bc29a18845e424e7a7a5"} err="failed to get container status \"0a883e6d45b711f471211d249b4e931df89082a1d1a7bc29a18845e424e7a7a5\": rpc error: code = NotFound desc = could not find container \"0a883e6d45b711f471211d249b4e931df89082a1d1a7bc29a18845e424e7a7a5\": container with ID starting with 0a883e6d45b711f471211d249b4e931df89082a1d1a7bc29a18845e424e7a7a5 not found: ID does not exist" Jan 27 08:57:14 crc kubenswrapper[4985]: I0127 08:57:14.942876 4985 scope.go:117] "RemoveContainer" containerID="34cd7102abd95cb347717401e8740f14e4f809ddcd000a39734af5af719096c2" Jan 27 08:57:14 crc kubenswrapper[4985]: I0127 08:57:14.955567 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7k25x" event={"ID":"7aadedcd-5a47-4d8d-a41d-e33a7a760331","Type":"ContainerStarted","Data":"eae09ffc6f882532b32962ae46a36b71b9cf27fc87f998818afda734848d7b41"} Jan 27 08:57:14 crc kubenswrapper[4985]: I0127 08:57:14.958387 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/community-operators-f2gdx" podStartSLOduration=3.534174679 podStartE2EDuration="59.95836661s" podCreationTimestamp="2026-01-27 08:56:15 +0000 UTC" firstStartedPulling="2026-01-27 08:56:18.021674371 +0000 UTC m=+162.312769212" lastFinishedPulling="2026-01-27 08:57:14.445866302 +0000 UTC m=+218.736961143" observedRunningTime="2026-01-27 08:57:14.955891661 +0000 UTC m=+219.246986502" watchObservedRunningTime="2026-01-27 08:57:14.95836661 +0000 UTC m=+219.249461451" Jan 27 08:57:14 crc kubenswrapper[4985]: I0127 08:57:14.965960 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m49st" event={"ID":"6b2d7f94-92b7-4593-8496-31db09afdf39","Type":"ContainerStarted","Data":"481973bc4071e03d0d9cb95c606c90b1cdafea69264ca5a1316b79e92694be17"} Jan 27 08:57:14 crc kubenswrapper[4985]: I0127 08:57:14.979119 4985 scope.go:117] "RemoveContainer" containerID="34cd7102abd95cb347717401e8740f14e4f809ddcd000a39734af5af719096c2" Jan 27 08:57:14 crc kubenswrapper[4985]: E0127 08:57:14.980416 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34cd7102abd95cb347717401e8740f14e4f809ddcd000a39734af5af719096c2\": container with ID starting with 34cd7102abd95cb347717401e8740f14e4f809ddcd000a39734af5af719096c2 not found: ID does not exist" containerID="34cd7102abd95cb347717401e8740f14e4f809ddcd000a39734af5af719096c2" Jan 27 08:57:14 crc kubenswrapper[4985]: I0127 08:57:14.980475 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34cd7102abd95cb347717401e8740f14e4f809ddcd000a39734af5af719096c2"} err="failed to get container status \"34cd7102abd95cb347717401e8740f14e4f809ddcd000a39734af5af719096c2\": rpc error: code = NotFound desc = could not find container \"34cd7102abd95cb347717401e8740f14e4f809ddcd000a39734af5af719096c2\": container with ID starting with 
34cd7102abd95cb347717401e8740f14e4f809ddcd000a39734af5af719096c2 not found: ID does not exist" Jan 27 08:57:15 crc kubenswrapper[4985]: I0127 08:57:15.011021 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-q87vc" podStartSLOduration=4.25236604 podStartE2EDuration="1m0.010982169s" podCreationTimestamp="2026-01-27 08:56:15 +0000 UTC" firstStartedPulling="2026-01-27 08:56:18.08087887 +0000 UTC m=+162.371973711" lastFinishedPulling="2026-01-27 08:57:13.839494999 +0000 UTC m=+218.130589840" observedRunningTime="2026-01-27 08:57:14.987747915 +0000 UTC m=+219.278842766" watchObservedRunningTime="2026-01-27 08:57:15.010982169 +0000 UTC m=+219.302077010" Jan 27 08:57:15 crc kubenswrapper[4985]: I0127 08:57:15.029087 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7k25x" podStartSLOduration=3.905150789 podStartE2EDuration="57.029063881s" podCreationTimestamp="2026-01-27 08:56:18 +0000 UTC" firstStartedPulling="2026-01-27 08:56:21.226380937 +0000 UTC m=+165.517475778" lastFinishedPulling="2026-01-27 08:57:14.350294029 +0000 UTC m=+218.641388870" observedRunningTime="2026-01-27 08:57:15.013101088 +0000 UTC m=+219.304195929" watchObservedRunningTime="2026-01-27 08:57:15.029063881 +0000 UTC m=+219.320158722" Jan 27 08:57:15 crc kubenswrapper[4985]: I0127 08:57:15.032074 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-764cf4c647-r7h2t"] Jan 27 08:57:15 crc kubenswrapper[4985]: I0127 08:57:15.039714 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-764cf4c647-r7h2t"] Jan 27 08:57:15 crc kubenswrapper[4985]: I0127 08:57:15.054976 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f7b8bdcc6-pdzgn"] Jan 27 08:57:15 crc kubenswrapper[4985]: I0127 08:57:15.061218 
4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f7b8bdcc6-pdzgn"] Jan 27 08:57:15 crc kubenswrapper[4985]: I0127 08:57:15.081527 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-m49st" podStartSLOduration=3.737490113 podStartE2EDuration="1m0.081487675s" podCreationTimestamp="2026-01-27 08:56:15 +0000 UTC" firstStartedPulling="2026-01-27 08:56:17.955326834 +0000 UTC m=+162.246421675" lastFinishedPulling="2026-01-27 08:57:14.299324396 +0000 UTC m=+218.590419237" observedRunningTime="2026-01-27 08:57:15.077997458 +0000 UTC m=+219.369092299" watchObservedRunningTime="2026-01-27 08:57:15.081487675 +0000 UTC m=+219.372582516" Jan 27 08:57:15 crc kubenswrapper[4985]: I0127 08:57:15.180447 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5cf8b98fd-lt7cv"] Jan 27 08:57:15 crc kubenswrapper[4985]: E0127 08:57:15.180858 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db494a9f-fd93-49eb-8d22-96279991db94" containerName="controller-manager" Jan 27 08:57:15 crc kubenswrapper[4985]: I0127 08:57:15.180874 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="db494a9f-fd93-49eb-8d22-96279991db94" containerName="controller-manager" Jan 27 08:57:15 crc kubenswrapper[4985]: E0127 08:57:15.180896 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88dc63eb-4862-4600-9621-7931835c8091" containerName="route-controller-manager" Jan 27 08:57:15 crc kubenswrapper[4985]: I0127 08:57:15.180902 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="88dc63eb-4862-4600-9621-7931835c8091" containerName="route-controller-manager" Jan 27 08:57:15 crc kubenswrapper[4985]: I0127 08:57:15.181024 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="db494a9f-fd93-49eb-8d22-96279991db94" containerName="controller-manager" Jan 27 08:57:15 
crc kubenswrapper[4985]: I0127 08:57:15.181039 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="88dc63eb-4862-4600-9621-7931835c8091" containerName="route-controller-manager" Jan 27 08:57:15 crc kubenswrapper[4985]: I0127 08:57:15.181588 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5cf8b98fd-lt7cv" Jan 27 08:57:15 crc kubenswrapper[4985]: I0127 08:57:15.182699 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76f76c9744-xqhw2"] Jan 27 08:57:15 crc kubenswrapper[4985]: I0127 08:57:15.183691 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76f76c9744-xqhw2" Jan 27 08:57:15 crc kubenswrapper[4985]: I0127 08:57:15.189171 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 27 08:57:15 crc kubenswrapper[4985]: I0127 08:57:15.189229 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 27 08:57:15 crc kubenswrapper[4985]: I0127 08:57:15.189950 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 27 08:57:15 crc kubenswrapper[4985]: I0127 08:57:15.190150 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 27 08:57:15 crc kubenswrapper[4985]: I0127 08:57:15.190268 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 27 08:57:15 crc kubenswrapper[4985]: I0127 08:57:15.190351 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 27 08:57:15 crc 
kubenswrapper[4985]: I0127 08:57:15.190437 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 27 08:57:15 crc kubenswrapper[4985]: I0127 08:57:15.190724 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 27 08:57:15 crc kubenswrapper[4985]: I0127 08:57:15.191040 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 27 08:57:15 crc kubenswrapper[4985]: I0127 08:57:15.191436 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 27 08:57:15 crc kubenswrapper[4985]: I0127 08:57:15.192474 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 27 08:57:15 crc kubenswrapper[4985]: I0127 08:57:15.192860 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 27 08:57:15 crc kubenswrapper[4985]: I0127 08:57:15.201112 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76f76c9744-xqhw2"] Jan 27 08:57:15 crc kubenswrapper[4985]: I0127 08:57:15.209294 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 27 08:57:15 crc kubenswrapper[4985]: I0127 08:57:15.261812 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21a770bf-c6eb-4287-a160-2b6ba7ab4b7d-client-ca\") pod \"route-controller-manager-76f76c9744-xqhw2\" (UID: \"21a770bf-c6eb-4287-a160-2b6ba7ab4b7d\") " pod="openshift-route-controller-manager/route-controller-manager-76f76c9744-xqhw2" Jan 27 08:57:15 crc kubenswrapper[4985]: I0127 08:57:15.261900 4985 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1b3e850a-df48-4fbe-bad9-087c34dbdc7d-client-ca\") pod \"controller-manager-5cf8b98fd-lt7cv\" (UID: \"1b3e850a-df48-4fbe-bad9-087c34dbdc7d\") " pod="openshift-controller-manager/controller-manager-5cf8b98fd-lt7cv" Jan 27 08:57:15 crc kubenswrapper[4985]: I0127 08:57:15.261935 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1b3e850a-df48-4fbe-bad9-087c34dbdc7d-proxy-ca-bundles\") pod \"controller-manager-5cf8b98fd-lt7cv\" (UID: \"1b3e850a-df48-4fbe-bad9-087c34dbdc7d\") " pod="openshift-controller-manager/controller-manager-5cf8b98fd-lt7cv" Jan 27 08:57:15 crc kubenswrapper[4985]: I0127 08:57:15.261972 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cgsb\" (UniqueName: \"kubernetes.io/projected/21a770bf-c6eb-4287-a160-2b6ba7ab4b7d-kube-api-access-5cgsb\") pod \"route-controller-manager-76f76c9744-xqhw2\" (UID: \"21a770bf-c6eb-4287-a160-2b6ba7ab4b7d\") " pod="openshift-route-controller-manager/route-controller-manager-76f76c9744-xqhw2" Jan 27 08:57:15 crc kubenswrapper[4985]: I0127 08:57:15.262018 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b3e850a-df48-4fbe-bad9-087c34dbdc7d-config\") pod \"controller-manager-5cf8b98fd-lt7cv\" (UID: \"1b3e850a-df48-4fbe-bad9-087c34dbdc7d\") " pod="openshift-controller-manager/controller-manager-5cf8b98fd-lt7cv" Jan 27 08:57:15 crc kubenswrapper[4985]: I0127 08:57:15.262039 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxklf\" (UniqueName: \"kubernetes.io/projected/1b3e850a-df48-4fbe-bad9-087c34dbdc7d-kube-api-access-dxklf\") pod 
\"controller-manager-5cf8b98fd-lt7cv\" (UID: \"1b3e850a-df48-4fbe-bad9-087c34dbdc7d\") " pod="openshift-controller-manager/controller-manager-5cf8b98fd-lt7cv" Jan 27 08:57:15 crc kubenswrapper[4985]: I0127 08:57:15.262092 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21a770bf-c6eb-4287-a160-2b6ba7ab4b7d-serving-cert\") pod \"route-controller-manager-76f76c9744-xqhw2\" (UID: \"21a770bf-c6eb-4287-a160-2b6ba7ab4b7d\") " pod="openshift-route-controller-manager/route-controller-manager-76f76c9744-xqhw2" Jan 27 08:57:15 crc kubenswrapper[4985]: I0127 08:57:15.262121 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b3e850a-df48-4fbe-bad9-087c34dbdc7d-serving-cert\") pod \"controller-manager-5cf8b98fd-lt7cv\" (UID: \"1b3e850a-df48-4fbe-bad9-087c34dbdc7d\") " pod="openshift-controller-manager/controller-manager-5cf8b98fd-lt7cv" Jan 27 08:57:15 crc kubenswrapper[4985]: I0127 08:57:15.262151 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21a770bf-c6eb-4287-a160-2b6ba7ab4b7d-config\") pod \"route-controller-manager-76f76c9744-xqhw2\" (UID: \"21a770bf-c6eb-4287-a160-2b6ba7ab4b7d\") " pod="openshift-route-controller-manager/route-controller-manager-76f76c9744-xqhw2" Jan 27 08:57:15 crc kubenswrapper[4985]: I0127 08:57:15.263657 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5cf8b98fd-lt7cv"] Jan 27 08:57:15 crc kubenswrapper[4985]: I0127 08:57:15.364452 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1b3e850a-df48-4fbe-bad9-087c34dbdc7d-client-ca\") pod \"controller-manager-5cf8b98fd-lt7cv\" (UID: 
\"1b3e850a-df48-4fbe-bad9-087c34dbdc7d\") " pod="openshift-controller-manager/controller-manager-5cf8b98fd-lt7cv" Jan 27 08:57:15 crc kubenswrapper[4985]: I0127 08:57:15.363381 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1b3e850a-df48-4fbe-bad9-087c34dbdc7d-client-ca\") pod \"controller-manager-5cf8b98fd-lt7cv\" (UID: \"1b3e850a-df48-4fbe-bad9-087c34dbdc7d\") " pod="openshift-controller-manager/controller-manager-5cf8b98fd-lt7cv" Jan 27 08:57:15 crc kubenswrapper[4985]: I0127 08:57:15.364590 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1b3e850a-df48-4fbe-bad9-087c34dbdc7d-proxy-ca-bundles\") pod \"controller-manager-5cf8b98fd-lt7cv\" (UID: \"1b3e850a-df48-4fbe-bad9-087c34dbdc7d\") " pod="openshift-controller-manager/controller-manager-5cf8b98fd-lt7cv" Jan 27 08:57:15 crc kubenswrapper[4985]: I0127 08:57:15.365489 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1b3e850a-df48-4fbe-bad9-087c34dbdc7d-proxy-ca-bundles\") pod \"controller-manager-5cf8b98fd-lt7cv\" (UID: \"1b3e850a-df48-4fbe-bad9-087c34dbdc7d\") " pod="openshift-controller-manager/controller-manager-5cf8b98fd-lt7cv" Jan 27 08:57:15 crc kubenswrapper[4985]: I0127 08:57:15.365585 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cgsb\" (UniqueName: \"kubernetes.io/projected/21a770bf-c6eb-4287-a160-2b6ba7ab4b7d-kube-api-access-5cgsb\") pod \"route-controller-manager-76f76c9744-xqhw2\" (UID: \"21a770bf-c6eb-4287-a160-2b6ba7ab4b7d\") " pod="openshift-route-controller-manager/route-controller-manager-76f76c9744-xqhw2" Jan 27 08:57:15 crc kubenswrapper[4985]: I0127 08:57:15.365669 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxklf\" 
(UniqueName: \"kubernetes.io/projected/1b3e850a-df48-4fbe-bad9-087c34dbdc7d-kube-api-access-dxklf\") pod \"controller-manager-5cf8b98fd-lt7cv\" (UID: \"1b3e850a-df48-4fbe-bad9-087c34dbdc7d\") " pod="openshift-controller-manager/controller-manager-5cf8b98fd-lt7cv" Jan 27 08:57:15 crc kubenswrapper[4985]: I0127 08:57:15.365698 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b3e850a-df48-4fbe-bad9-087c34dbdc7d-config\") pod \"controller-manager-5cf8b98fd-lt7cv\" (UID: \"1b3e850a-df48-4fbe-bad9-087c34dbdc7d\") " pod="openshift-controller-manager/controller-manager-5cf8b98fd-lt7cv" Jan 27 08:57:15 crc kubenswrapper[4985]: I0127 08:57:15.366293 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21a770bf-c6eb-4287-a160-2b6ba7ab4b7d-serving-cert\") pod \"route-controller-manager-76f76c9744-xqhw2\" (UID: \"21a770bf-c6eb-4287-a160-2b6ba7ab4b7d\") " pod="openshift-route-controller-manager/route-controller-manager-76f76c9744-xqhw2" Jan 27 08:57:15 crc kubenswrapper[4985]: I0127 08:57:15.367312 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b3e850a-df48-4fbe-bad9-087c34dbdc7d-serving-cert\") pod \"controller-manager-5cf8b98fd-lt7cv\" (UID: \"1b3e850a-df48-4fbe-bad9-087c34dbdc7d\") " pod="openshift-controller-manager/controller-manager-5cf8b98fd-lt7cv" Jan 27 08:57:15 crc kubenswrapper[4985]: I0127 08:57:15.367359 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21a770bf-c6eb-4287-a160-2b6ba7ab4b7d-config\") pod \"route-controller-manager-76f76c9744-xqhw2\" (UID: \"21a770bf-c6eb-4287-a160-2b6ba7ab4b7d\") " pod="openshift-route-controller-manager/route-controller-manager-76f76c9744-xqhw2" Jan 27 08:57:15 crc kubenswrapper[4985]: I0127 
08:57:15.367416 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21a770bf-c6eb-4287-a160-2b6ba7ab4b7d-client-ca\") pod \"route-controller-manager-76f76c9744-xqhw2\" (UID: \"21a770bf-c6eb-4287-a160-2b6ba7ab4b7d\") " pod="openshift-route-controller-manager/route-controller-manager-76f76c9744-xqhw2" Jan 27 08:57:15 crc kubenswrapper[4985]: I0127 08:57:15.368423 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21a770bf-c6eb-4287-a160-2b6ba7ab4b7d-client-ca\") pod \"route-controller-manager-76f76c9744-xqhw2\" (UID: \"21a770bf-c6eb-4287-a160-2b6ba7ab4b7d\") " pod="openshift-route-controller-manager/route-controller-manager-76f76c9744-xqhw2" Jan 27 08:57:15 crc kubenswrapper[4985]: I0127 08:57:15.369049 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21a770bf-c6eb-4287-a160-2b6ba7ab4b7d-config\") pod \"route-controller-manager-76f76c9744-xqhw2\" (UID: \"21a770bf-c6eb-4287-a160-2b6ba7ab4b7d\") " pod="openshift-route-controller-manager/route-controller-manager-76f76c9744-xqhw2" Jan 27 08:57:15 crc kubenswrapper[4985]: I0127 08:57:15.372683 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b3e850a-df48-4fbe-bad9-087c34dbdc7d-serving-cert\") pod \"controller-manager-5cf8b98fd-lt7cv\" (UID: \"1b3e850a-df48-4fbe-bad9-087c34dbdc7d\") " pod="openshift-controller-manager/controller-manager-5cf8b98fd-lt7cv" Jan 27 08:57:15 crc kubenswrapper[4985]: I0127 08:57:15.372683 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21a770bf-c6eb-4287-a160-2b6ba7ab4b7d-serving-cert\") pod \"route-controller-manager-76f76c9744-xqhw2\" (UID: \"21a770bf-c6eb-4287-a160-2b6ba7ab4b7d\") " 
pod="openshift-route-controller-manager/route-controller-manager-76f76c9744-xqhw2" Jan 27 08:57:15 crc kubenswrapper[4985]: I0127 08:57:15.379364 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b3e850a-df48-4fbe-bad9-087c34dbdc7d-config\") pod \"controller-manager-5cf8b98fd-lt7cv\" (UID: \"1b3e850a-df48-4fbe-bad9-087c34dbdc7d\") " pod="openshift-controller-manager/controller-manager-5cf8b98fd-lt7cv" Jan 27 08:57:15 crc kubenswrapper[4985]: I0127 08:57:15.386786 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cgsb\" (UniqueName: \"kubernetes.io/projected/21a770bf-c6eb-4287-a160-2b6ba7ab4b7d-kube-api-access-5cgsb\") pod \"route-controller-manager-76f76c9744-xqhw2\" (UID: \"21a770bf-c6eb-4287-a160-2b6ba7ab4b7d\") " pod="openshift-route-controller-manager/route-controller-manager-76f76c9744-xqhw2" Jan 27 08:57:15 crc kubenswrapper[4985]: I0127 08:57:15.387213 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxklf\" (UniqueName: \"kubernetes.io/projected/1b3e850a-df48-4fbe-bad9-087c34dbdc7d-kube-api-access-dxklf\") pod \"controller-manager-5cf8b98fd-lt7cv\" (UID: \"1b3e850a-df48-4fbe-bad9-087c34dbdc7d\") " pod="openshift-controller-manager/controller-manager-5cf8b98fd-lt7cv" Jan 27 08:57:15 crc kubenswrapper[4985]: I0127 08:57:15.504153 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5cf8b98fd-lt7cv" Jan 27 08:57:15 crc kubenswrapper[4985]: I0127 08:57:15.573779 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76f76c9744-xqhw2" Jan 27 08:57:15 crc kubenswrapper[4985]: I0127 08:57:15.754197 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5cf8b98fd-lt7cv"] Jan 27 08:57:15 crc kubenswrapper[4985]: W0127 08:57:15.765918 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1b3e850a_df48_4fbe_bad9_087c34dbdc7d.slice/crio-672e84929a4c3dd937033895f4dc34674d6fa40c32d61d0f1b483db0d88425f7 WatchSource:0}: Error finding container 672e84929a4c3dd937033895f4dc34674d6fa40c32d61d0f1b483db0d88425f7: Status 404 returned error can't find the container with id 672e84929a4c3dd937033895f4dc34674d6fa40c32d61d0f1b483db0d88425f7 Jan 27 08:57:15 crc kubenswrapper[4985]: I0127 08:57:15.900534 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76f76c9744-xqhw2"] Jan 27 08:57:15 crc kubenswrapper[4985]: W0127 08:57:15.920161 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod21a770bf_c6eb_4287_a160_2b6ba7ab4b7d.slice/crio-d0048480232552d26228bc5e9bef56e698989bd09116b79cbf5f784d49dde865 WatchSource:0}: Error finding container d0048480232552d26228bc5e9bef56e698989bd09116b79cbf5f784d49dde865: Status 404 returned error can't find the container with id d0048480232552d26228bc5e9bef56e698989bd09116b79cbf5f784d49dde865 Jan 27 08:57:15 crc kubenswrapper[4985]: I0127 08:57:15.946425 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-f2gdx" Jan 27 08:57:15 crc kubenswrapper[4985]: I0127 08:57:15.946527 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-f2gdx" Jan 27 08:57:15 crc kubenswrapper[4985]: 
I0127 08:57:15.980483 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5cf8b98fd-lt7cv" event={"ID":"1b3e850a-df48-4fbe-bad9-087c34dbdc7d","Type":"ContainerStarted","Data":"672e84929a4c3dd937033895f4dc34674d6fa40c32d61d0f1b483db0d88425f7"} Jan 27 08:57:15 crc kubenswrapper[4985]: I0127 08:57:15.984107 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76f76c9744-xqhw2" event={"ID":"21a770bf-c6eb-4287-a160-2b6ba7ab4b7d","Type":"ContainerStarted","Data":"d0048480232552d26228bc5e9bef56e698989bd09116b79cbf5f784d49dde865"} Jan 27 08:57:16 crc kubenswrapper[4985]: I0127 08:57:16.161710 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-q87vc" Jan 27 08:57:16 crc kubenswrapper[4985]: I0127 08:57:16.161786 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-q87vc" Jan 27 08:57:16 crc kubenswrapper[4985]: I0127 08:57:16.260736 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-q87vc" Jan 27 08:57:16 crc kubenswrapper[4985]: I0127 08:57:16.355702 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-m49st" Jan 27 08:57:16 crc kubenswrapper[4985]: I0127 08:57:16.355765 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-m49st" Jan 27 08:57:16 crc kubenswrapper[4985]: I0127 08:57:16.462461 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88dc63eb-4862-4600-9621-7931835c8091" path="/var/lib/kubelet/pods/88dc63eb-4862-4600-9621-7931835c8091/volumes" Jan 27 08:57:16 crc kubenswrapper[4985]: I0127 08:57:16.463885 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="db494a9f-fd93-49eb-8d22-96279991db94" path="/var/lib/kubelet/pods/db494a9f-fd93-49eb-8d22-96279991db94/volumes" Jan 27 08:57:16 crc kubenswrapper[4985]: I0127 08:57:16.990734 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76f76c9744-xqhw2" event={"ID":"21a770bf-c6eb-4287-a160-2b6ba7ab4b7d","Type":"ContainerStarted","Data":"117b622312c160ac253aae7c619274ca6761c69fe79d36f0a82297677d230dd4"} Jan 27 08:57:16 crc kubenswrapper[4985]: I0127 08:57:16.994099 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5cf8b98fd-lt7cv" event={"ID":"1b3e850a-df48-4fbe-bad9-087c34dbdc7d","Type":"ContainerStarted","Data":"c6c476ccb1d4130ce4e7be6a37c3215b95d2ca294223b09276e44843a9107f87"} Jan 27 08:57:17 crc kubenswrapper[4985]: I0127 08:57:17.003525 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-f2gdx" podUID="e143ff56-0606-4500-bac1-21d0d3f607ee" containerName="registry-server" probeResult="failure" output=< Jan 27 08:57:17 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 27 08:57:17 crc kubenswrapper[4985]: > Jan 27 08:57:17 crc kubenswrapper[4985]: I0127 08:57:17.039108 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5cf8b98fd-lt7cv" podStartSLOduration=4.039087185 podStartE2EDuration="4.039087185s" podCreationTimestamp="2026-01-27 08:57:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:57:17.037581573 +0000 UTC m=+221.328676414" watchObservedRunningTime="2026-01-27 08:57:17.039087185 +0000 UTC m=+221.330182026" Jan 27 08:57:17 crc kubenswrapper[4985]: I0127 08:57:17.402197 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-m49st" 
podUID="6b2d7f94-92b7-4593-8496-31db09afdf39" containerName="registry-server" probeResult="failure" output=< Jan 27 08:57:17 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 27 08:57:17 crc kubenswrapper[4985]: > Jan 27 08:57:18 crc kubenswrapper[4985]: I0127 08:57:18.000230 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5cf8b98fd-lt7cv" Jan 27 08:57:18 crc kubenswrapper[4985]: I0127 08:57:18.006176 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5cf8b98fd-lt7cv" Jan 27 08:57:18 crc kubenswrapper[4985]: I0127 08:57:18.036481 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-76f76c9744-xqhw2" podStartSLOduration=4.036444964 podStartE2EDuration="4.036444964s" podCreationTimestamp="2026-01-27 08:57:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:57:18.027415174 +0000 UTC m=+222.318510035" watchObservedRunningTime="2026-01-27 08:57:18.036444964 +0000 UTC m=+222.327539805" Jan 27 08:57:18 crc kubenswrapper[4985]: I0127 08:57:18.070470 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lwlwv" Jan 27 08:57:18 crc kubenswrapper[4985]: I0127 08:57:18.118859 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lwlwv" Jan 27 08:57:18 crc kubenswrapper[4985]: I0127 08:57:18.402232 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bh6j9" Jan 27 08:57:18 crc kubenswrapper[4985]: I0127 08:57:18.402308 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-marketplace-bh6j9" Jan 27 08:57:18 crc kubenswrapper[4985]: I0127 08:57:18.445887 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bh6j9" Jan 27 08:57:19 crc kubenswrapper[4985]: I0127 08:57:19.059952 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bh6j9" Jan 27 08:57:19 crc kubenswrapper[4985]: I0127 08:57:19.397310 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7k25x" Jan 27 08:57:19 crc kubenswrapper[4985]: I0127 08:57:19.397366 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7k25x" Jan 27 08:57:20 crc kubenswrapper[4985]: I0127 08:57:20.438602 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7k25x" podUID="7aadedcd-5a47-4d8d-a41d-e33a7a760331" containerName="registry-server" probeResult="failure" output=< Jan 27 08:57:20 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 27 08:57:20 crc kubenswrapper[4985]: > Jan 27 08:57:21 crc kubenswrapper[4985]: I0127 08:57:21.222026 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bh6j9"] Jan 27 08:57:21 crc kubenswrapper[4985]: I0127 08:57:21.222358 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bh6j9" podUID="fa17d66c-2d07-4ce5-bfc8-45bb31adf066" containerName="registry-server" containerID="cri-o://bf9575c47301b9325609a9caaa7a6519be9aa125da48a80c9e7c8dab46fe8b89" gracePeriod=2 Jan 27 08:57:21 crc kubenswrapper[4985]: I0127 08:57:21.704020 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bh6j9" Jan 27 08:57:21 crc kubenswrapper[4985]: I0127 08:57:21.765900 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa17d66c-2d07-4ce5-bfc8-45bb31adf066-utilities\") pod \"fa17d66c-2d07-4ce5-bfc8-45bb31adf066\" (UID: \"fa17d66c-2d07-4ce5-bfc8-45bb31adf066\") " Jan 27 08:57:21 crc kubenswrapper[4985]: I0127 08:57:21.765958 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa17d66c-2d07-4ce5-bfc8-45bb31adf066-catalog-content\") pod \"fa17d66c-2d07-4ce5-bfc8-45bb31adf066\" (UID: \"fa17d66c-2d07-4ce5-bfc8-45bb31adf066\") " Jan 27 08:57:21 crc kubenswrapper[4985]: I0127 08:57:21.766044 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zlchc\" (UniqueName: \"kubernetes.io/projected/fa17d66c-2d07-4ce5-bfc8-45bb31adf066-kube-api-access-zlchc\") pod \"fa17d66c-2d07-4ce5-bfc8-45bb31adf066\" (UID: \"fa17d66c-2d07-4ce5-bfc8-45bb31adf066\") " Jan 27 08:57:21 crc kubenswrapper[4985]: I0127 08:57:21.767399 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa17d66c-2d07-4ce5-bfc8-45bb31adf066-utilities" (OuterVolumeSpecName: "utilities") pod "fa17d66c-2d07-4ce5-bfc8-45bb31adf066" (UID: "fa17d66c-2d07-4ce5-bfc8-45bb31adf066"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:57:21 crc kubenswrapper[4985]: I0127 08:57:21.773774 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa17d66c-2d07-4ce5-bfc8-45bb31adf066-kube-api-access-zlchc" (OuterVolumeSpecName: "kube-api-access-zlchc") pod "fa17d66c-2d07-4ce5-bfc8-45bb31adf066" (UID: "fa17d66c-2d07-4ce5-bfc8-45bb31adf066"). InnerVolumeSpecName "kube-api-access-zlchc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:57:21 crc kubenswrapper[4985]: I0127 08:57:21.798037 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa17d66c-2d07-4ce5-bfc8-45bb31adf066-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fa17d66c-2d07-4ce5-bfc8-45bb31adf066" (UID: "fa17d66c-2d07-4ce5-bfc8-45bb31adf066"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:57:21 crc kubenswrapper[4985]: I0127 08:57:21.867808 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa17d66c-2d07-4ce5-bfc8-45bb31adf066-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 08:57:21 crc kubenswrapper[4985]: I0127 08:57:21.867860 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa17d66c-2d07-4ce5-bfc8-45bb31adf066-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 08:57:21 crc kubenswrapper[4985]: I0127 08:57:21.867874 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zlchc\" (UniqueName: \"kubernetes.io/projected/fa17d66c-2d07-4ce5-bfc8-45bb31adf066-kube-api-access-zlchc\") on node \"crc\" DevicePath \"\"" Jan 27 08:57:22 crc kubenswrapper[4985]: I0127 08:57:22.026213 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hwclt" event={"ID":"ed57e787-5d65-4c3c-8a0f-f693481928ae","Type":"ContainerStarted","Data":"00418bed6214ddcef0628809d34ff42d17282cb8d8351b70978a93ebfcde8e6a"} Jan 27 08:57:22 crc kubenswrapper[4985]: I0127 08:57:22.030414 4985 generic.go:334] "Generic (PLEG): container finished" podID="fa17d66c-2d07-4ce5-bfc8-45bb31adf066" containerID="bf9575c47301b9325609a9caaa7a6519be9aa125da48a80c9e7c8dab46fe8b89" exitCode=0 Jan 27 08:57:22 crc kubenswrapper[4985]: I0127 08:57:22.030479 4985 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/redhat-marketplace-bh6j9" event={"ID":"fa17d66c-2d07-4ce5-bfc8-45bb31adf066","Type":"ContainerDied","Data":"bf9575c47301b9325609a9caaa7a6519be9aa125da48a80c9e7c8dab46fe8b89"} Jan 27 08:57:22 crc kubenswrapper[4985]: I0127 08:57:22.030533 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bh6j9" event={"ID":"fa17d66c-2d07-4ce5-bfc8-45bb31adf066","Type":"ContainerDied","Data":"904d0b49e42cf2c12f533df541e0c4bd67ada101d9720b23c584a58c821d16df"} Jan 27 08:57:22 crc kubenswrapper[4985]: I0127 08:57:22.030579 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bh6j9" Jan 27 08:57:22 crc kubenswrapper[4985]: I0127 08:57:22.030607 4985 scope.go:117] "RemoveContainer" containerID="bf9575c47301b9325609a9caaa7a6519be9aa125da48a80c9e7c8dab46fe8b89" Jan 27 08:57:22 crc kubenswrapper[4985]: I0127 08:57:22.037991 4985 generic.go:334] "Generic (PLEG): container finished" podID="da9958bf-bf1b-4894-96a8-18b5b9fa3d46" containerID="b83a8b6546b77fe300a9bc8ac8752fe8a69ce78e491c5f4bafeae1dae4b7cf35" exitCode=0 Jan 27 08:57:22 crc kubenswrapper[4985]: I0127 08:57:22.038033 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q7trj" event={"ID":"da9958bf-bf1b-4894-96a8-18b5b9fa3d46","Type":"ContainerDied","Data":"b83a8b6546b77fe300a9bc8ac8752fe8a69ce78e491c5f4bafeae1dae4b7cf35"} Jan 27 08:57:22 crc kubenswrapper[4985]: I0127 08:57:22.065176 4985 scope.go:117] "RemoveContainer" containerID="6c8312ae5ac6c2bca38761a8e6a5b39d6bfbc855d2f4a33419a9d035931795f0" Jan 27 08:57:22 crc kubenswrapper[4985]: I0127 08:57:22.092616 4985 scope.go:117] "RemoveContainer" containerID="3c96c503b38ec6f220ad13ebd9d1408efee2ff72cb91a5e4e87e7afdc994ab1b" Jan 27 08:57:22 crc kubenswrapper[4985]: I0127 08:57:22.098180 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-bh6j9"] Jan 27 08:57:22 crc kubenswrapper[4985]: I0127 08:57:22.102629 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bh6j9"] Jan 27 08:57:22 crc kubenswrapper[4985]: I0127 08:57:22.116431 4985 scope.go:117] "RemoveContainer" containerID="bf9575c47301b9325609a9caaa7a6519be9aa125da48a80c9e7c8dab46fe8b89" Jan 27 08:57:22 crc kubenswrapper[4985]: E0127 08:57:22.117257 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf9575c47301b9325609a9caaa7a6519be9aa125da48a80c9e7c8dab46fe8b89\": container with ID starting with bf9575c47301b9325609a9caaa7a6519be9aa125da48a80c9e7c8dab46fe8b89 not found: ID does not exist" containerID="bf9575c47301b9325609a9caaa7a6519be9aa125da48a80c9e7c8dab46fe8b89" Jan 27 08:57:22 crc kubenswrapper[4985]: I0127 08:57:22.117340 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf9575c47301b9325609a9caaa7a6519be9aa125da48a80c9e7c8dab46fe8b89"} err="failed to get container status \"bf9575c47301b9325609a9caaa7a6519be9aa125da48a80c9e7c8dab46fe8b89\": rpc error: code = NotFound desc = could not find container \"bf9575c47301b9325609a9caaa7a6519be9aa125da48a80c9e7c8dab46fe8b89\": container with ID starting with bf9575c47301b9325609a9caaa7a6519be9aa125da48a80c9e7c8dab46fe8b89 not found: ID does not exist" Jan 27 08:57:22 crc kubenswrapper[4985]: I0127 08:57:22.117386 4985 scope.go:117] "RemoveContainer" containerID="6c8312ae5ac6c2bca38761a8e6a5b39d6bfbc855d2f4a33419a9d035931795f0" Jan 27 08:57:22 crc kubenswrapper[4985]: E0127 08:57:22.117854 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c8312ae5ac6c2bca38761a8e6a5b39d6bfbc855d2f4a33419a9d035931795f0\": container with ID starting with 6c8312ae5ac6c2bca38761a8e6a5b39d6bfbc855d2f4a33419a9d035931795f0 
not found: ID does not exist" containerID="6c8312ae5ac6c2bca38761a8e6a5b39d6bfbc855d2f4a33419a9d035931795f0" Jan 27 08:57:22 crc kubenswrapper[4985]: I0127 08:57:22.117903 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c8312ae5ac6c2bca38761a8e6a5b39d6bfbc855d2f4a33419a9d035931795f0"} err="failed to get container status \"6c8312ae5ac6c2bca38761a8e6a5b39d6bfbc855d2f4a33419a9d035931795f0\": rpc error: code = NotFound desc = could not find container \"6c8312ae5ac6c2bca38761a8e6a5b39d6bfbc855d2f4a33419a9d035931795f0\": container with ID starting with 6c8312ae5ac6c2bca38761a8e6a5b39d6bfbc855d2f4a33419a9d035931795f0 not found: ID does not exist" Jan 27 08:57:22 crc kubenswrapper[4985]: I0127 08:57:22.117940 4985 scope.go:117] "RemoveContainer" containerID="3c96c503b38ec6f220ad13ebd9d1408efee2ff72cb91a5e4e87e7afdc994ab1b" Jan 27 08:57:22 crc kubenswrapper[4985]: E0127 08:57:22.118443 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c96c503b38ec6f220ad13ebd9d1408efee2ff72cb91a5e4e87e7afdc994ab1b\": container with ID starting with 3c96c503b38ec6f220ad13ebd9d1408efee2ff72cb91a5e4e87e7afdc994ab1b not found: ID does not exist" containerID="3c96c503b38ec6f220ad13ebd9d1408efee2ff72cb91a5e4e87e7afdc994ab1b" Jan 27 08:57:22 crc kubenswrapper[4985]: I0127 08:57:22.118472 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c96c503b38ec6f220ad13ebd9d1408efee2ff72cb91a5e4e87e7afdc994ab1b"} err="failed to get container status \"3c96c503b38ec6f220ad13ebd9d1408efee2ff72cb91a5e4e87e7afdc994ab1b\": rpc error: code = NotFound desc = could not find container \"3c96c503b38ec6f220ad13ebd9d1408efee2ff72cb91a5e4e87e7afdc994ab1b\": container with ID starting with 3c96c503b38ec6f220ad13ebd9d1408efee2ff72cb91a5e4e87e7afdc994ab1b not found: ID does not exist" Jan 27 08:57:22 crc kubenswrapper[4985]: I0127 
08:57:22.463787 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa17d66c-2d07-4ce5-bfc8-45bb31adf066" path="/var/lib/kubelet/pods/fa17d66c-2d07-4ce5-bfc8-45bb31adf066/volumes" Jan 27 08:57:23 crc kubenswrapper[4985]: I0127 08:57:23.047425 4985 generic.go:334] "Generic (PLEG): container finished" podID="ed57e787-5d65-4c3c-8a0f-f693481928ae" containerID="00418bed6214ddcef0628809d34ff42d17282cb8d8351b70978a93ebfcde8e6a" exitCode=0 Jan 27 08:57:23 crc kubenswrapper[4985]: I0127 08:57:23.047560 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hwclt" event={"ID":"ed57e787-5d65-4c3c-8a0f-f693481928ae","Type":"ContainerDied","Data":"00418bed6214ddcef0628809d34ff42d17282cb8d8351b70978a93ebfcde8e6a"} Jan 27 08:57:23 crc kubenswrapper[4985]: I0127 08:57:23.052745 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q7trj" event={"ID":"da9958bf-bf1b-4894-96a8-18b5b9fa3d46","Type":"ContainerStarted","Data":"a0295c44c7df6619edc0aad660024b88888aa5d39f319364acb9e684720384f6"} Jan 27 08:57:23 crc kubenswrapper[4985]: I0127 08:57:23.089269 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-q7trj" podStartSLOduration=3.595150234 podStartE2EDuration="1m8.089244994s" podCreationTimestamp="2026-01-27 08:56:15 +0000 UTC" firstStartedPulling="2026-01-27 08:56:17.962992346 +0000 UTC m=+162.254087187" lastFinishedPulling="2026-01-27 08:57:22.457087106 +0000 UTC m=+226.748181947" observedRunningTime="2026-01-27 08:57:23.086189439 +0000 UTC m=+227.377284280" watchObservedRunningTime="2026-01-27 08:57:23.089244994 +0000 UTC m=+227.380339835" Jan 27 08:57:24 crc kubenswrapper[4985]: I0127 08:57:24.061728 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hwclt" 
event={"ID":"ed57e787-5d65-4c3c-8a0f-f693481928ae","Type":"ContainerStarted","Data":"080280e744c6d48ef75d96557069fd28ef9f8ed509a6a612dc18b51ade773982"} Jan 27 08:57:24 crc kubenswrapper[4985]: I0127 08:57:24.084709 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hwclt" podStartSLOduration=2.642016423 podStartE2EDuration="1m6.08468779s" podCreationTimestamp="2026-01-27 08:56:18 +0000 UTC" firstStartedPulling="2026-01-27 08:56:20.188888993 +0000 UTC m=+164.479983834" lastFinishedPulling="2026-01-27 08:57:23.63156036 +0000 UTC m=+227.922655201" observedRunningTime="2026-01-27 08:57:24.081844001 +0000 UTC m=+228.372938862" watchObservedRunningTime="2026-01-27 08:57:24.08468779 +0000 UTC m=+228.375782631" Jan 27 08:57:25 crc kubenswrapper[4985]: I0127 08:57:25.574322 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-76f76c9744-xqhw2" Jan 27 08:57:25 crc kubenswrapper[4985]: I0127 08:57:25.582593 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-76f76c9744-xqhw2" Jan 27 08:57:25 crc kubenswrapper[4985]: I0127 08:57:25.734321 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-q7trj" Jan 27 08:57:25 crc kubenswrapper[4985]: I0127 08:57:25.734417 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-q7trj" Jan 27 08:57:25 crc kubenswrapper[4985]: I0127 08:57:25.790240 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-q7trj" Jan 27 08:57:26 crc kubenswrapper[4985]: I0127 08:57:26.013215 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-f2gdx" Jan 27 08:57:26 crc 
kubenswrapper[4985]: I0127 08:57:26.077773 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-f2gdx" Jan 27 08:57:26 crc kubenswrapper[4985]: I0127 08:57:26.211228 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-q87vc" Jan 27 08:57:26 crc kubenswrapper[4985]: I0127 08:57:26.409640 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-m49st" Jan 27 08:57:26 crc kubenswrapper[4985]: I0127 08:57:26.460645 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-m49st" Jan 27 08:57:27 crc kubenswrapper[4985]: I0127 08:57:27.630605 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-q87vc"] Jan 27 08:57:27 crc kubenswrapper[4985]: I0127 08:57:27.632670 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-q87vc" podUID="bd8d30fe-7369-4ea0-830d-b8fffca6bd10" containerName="registry-server" containerID="cri-o://516c868f674fa45ce610eee09d9649a449ca746280fc0adc3ec307735aca66ac" gracePeriod=2 Jan 27 08:57:28 crc kubenswrapper[4985]: I0127 08:57:28.122106 4985 generic.go:334] "Generic (PLEG): container finished" podID="bd8d30fe-7369-4ea0-830d-b8fffca6bd10" containerID="516c868f674fa45ce610eee09d9649a449ca746280fc0adc3ec307735aca66ac" exitCode=0 Jan 27 08:57:28 crc kubenswrapper[4985]: I0127 08:57:28.122180 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q87vc" event={"ID":"bd8d30fe-7369-4ea0-830d-b8fffca6bd10","Type":"ContainerDied","Data":"516c868f674fa45ce610eee09d9649a449ca746280fc0adc3ec307735aca66ac"} Jan 27 08:57:28 crc kubenswrapper[4985]: I0127 08:57:28.774037 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-q87vc" Jan 27 08:57:28 crc kubenswrapper[4985]: I0127 08:57:28.880151 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhhhg\" (UniqueName: \"kubernetes.io/projected/bd8d30fe-7369-4ea0-830d-b8fffca6bd10-kube-api-access-fhhhg\") pod \"bd8d30fe-7369-4ea0-830d-b8fffca6bd10\" (UID: \"bd8d30fe-7369-4ea0-830d-b8fffca6bd10\") " Jan 27 08:57:28 crc kubenswrapper[4985]: I0127 08:57:28.880236 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd8d30fe-7369-4ea0-830d-b8fffca6bd10-utilities\") pod \"bd8d30fe-7369-4ea0-830d-b8fffca6bd10\" (UID: \"bd8d30fe-7369-4ea0-830d-b8fffca6bd10\") " Jan 27 08:57:28 crc kubenswrapper[4985]: I0127 08:57:28.880264 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd8d30fe-7369-4ea0-830d-b8fffca6bd10-catalog-content\") pod \"bd8d30fe-7369-4ea0-830d-b8fffca6bd10\" (UID: \"bd8d30fe-7369-4ea0-830d-b8fffca6bd10\") " Jan 27 08:57:28 crc kubenswrapper[4985]: I0127 08:57:28.881591 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd8d30fe-7369-4ea0-830d-b8fffca6bd10-utilities" (OuterVolumeSpecName: "utilities") pod "bd8d30fe-7369-4ea0-830d-b8fffca6bd10" (UID: "bd8d30fe-7369-4ea0-830d-b8fffca6bd10"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:57:28 crc kubenswrapper[4985]: I0127 08:57:28.888775 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd8d30fe-7369-4ea0-830d-b8fffca6bd10-kube-api-access-fhhhg" (OuterVolumeSpecName: "kube-api-access-fhhhg") pod "bd8d30fe-7369-4ea0-830d-b8fffca6bd10" (UID: "bd8d30fe-7369-4ea0-830d-b8fffca6bd10"). InnerVolumeSpecName "kube-api-access-fhhhg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:57:28 crc kubenswrapper[4985]: I0127 08:57:28.924899 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd8d30fe-7369-4ea0-830d-b8fffca6bd10-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bd8d30fe-7369-4ea0-830d-b8fffca6bd10" (UID: "bd8d30fe-7369-4ea0-830d-b8fffca6bd10"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:57:28 crc kubenswrapper[4985]: I0127 08:57:28.938666 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hwclt" Jan 27 08:57:28 crc kubenswrapper[4985]: I0127 08:57:28.938747 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hwclt" Jan 27 08:57:28 crc kubenswrapper[4985]: I0127 08:57:28.981463 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhhhg\" (UniqueName: \"kubernetes.io/projected/bd8d30fe-7369-4ea0-830d-b8fffca6bd10-kube-api-access-fhhhg\") on node \"crc\" DevicePath \"\"" Jan 27 08:57:28 crc kubenswrapper[4985]: I0127 08:57:28.981544 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd8d30fe-7369-4ea0-830d-b8fffca6bd10-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 08:57:28 crc kubenswrapper[4985]: I0127 08:57:28.981598 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd8d30fe-7369-4ea0-830d-b8fffca6bd10-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 08:57:29 crc kubenswrapper[4985]: I0127 08:57:29.130435 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q87vc" 
event={"ID":"bd8d30fe-7369-4ea0-830d-b8fffca6bd10","Type":"ContainerDied","Data":"4e80fe23d80db08d2fb6f2031e99a2b71ad2247750ef16ea136e83d65cbb6d64"} Jan 27 08:57:29 crc kubenswrapper[4985]: I0127 08:57:29.130493 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q87vc" Jan 27 08:57:29 crc kubenswrapper[4985]: I0127 08:57:29.130536 4985 scope.go:117] "RemoveContainer" containerID="516c868f674fa45ce610eee09d9649a449ca746280fc0adc3ec307735aca66ac" Jan 27 08:57:29 crc kubenswrapper[4985]: I0127 08:57:29.147690 4985 scope.go:117] "RemoveContainer" containerID="311dddc869065d293fd5de529d0940ca325fd616b8c5447d8b613d6b659a25bf" Jan 27 08:57:29 crc kubenswrapper[4985]: I0127 08:57:29.168133 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-q87vc"] Jan 27 08:57:29 crc kubenswrapper[4985]: I0127 08:57:29.175300 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-q87vc"] Jan 27 08:57:29 crc kubenswrapper[4985]: I0127 08:57:29.190166 4985 scope.go:117] "RemoveContainer" containerID="60dd63881cb2a904c421f2986e908adfc37695e87668b2c5f41226f599ba717c" Jan 27 08:57:29 crc kubenswrapper[4985]: I0127 08:57:29.442493 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7k25x" Jan 27 08:57:29 crc kubenswrapper[4985]: I0127 08:57:29.496195 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7k25x" Jan 27 08:57:29 crc kubenswrapper[4985]: I0127 08:57:29.989153 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hwclt" podUID="ed57e787-5d65-4c3c-8a0f-f693481928ae" containerName="registry-server" probeResult="failure" output=< Jan 27 08:57:29 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 27 08:57:29 crc 
kubenswrapper[4985]: > Jan 27 08:57:30 crc kubenswrapper[4985]: I0127 08:57:30.022889 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-m49st"] Jan 27 08:57:30 crc kubenswrapper[4985]: I0127 08:57:30.023301 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-m49st" podUID="6b2d7f94-92b7-4593-8496-31db09afdf39" containerName="registry-server" containerID="cri-o://481973bc4071e03d0d9cb95c606c90b1cdafea69264ca5a1316b79e92694be17" gracePeriod=2 Jan 27 08:57:30 crc kubenswrapper[4985]: I0127 08:57:30.469776 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd8d30fe-7369-4ea0-830d-b8fffca6bd10" path="/var/lib/kubelet/pods/bd8d30fe-7369-4ea0-830d-b8fffca6bd10/volumes" Jan 27 08:57:30 crc kubenswrapper[4985]: I0127 08:57:30.533141 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m49st" Jan 27 08:57:30 crc kubenswrapper[4985]: I0127 08:57:30.709049 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjmrj\" (UniqueName: \"kubernetes.io/projected/6b2d7f94-92b7-4593-8496-31db09afdf39-kube-api-access-gjmrj\") pod \"6b2d7f94-92b7-4593-8496-31db09afdf39\" (UID: \"6b2d7f94-92b7-4593-8496-31db09afdf39\") " Jan 27 08:57:30 crc kubenswrapper[4985]: I0127 08:57:30.709285 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b2d7f94-92b7-4593-8496-31db09afdf39-utilities\") pod \"6b2d7f94-92b7-4593-8496-31db09afdf39\" (UID: \"6b2d7f94-92b7-4593-8496-31db09afdf39\") " Jan 27 08:57:30 crc kubenswrapper[4985]: I0127 08:57:30.709423 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b2d7f94-92b7-4593-8496-31db09afdf39-catalog-content\") pod 
\"6b2d7f94-92b7-4593-8496-31db09afdf39\" (UID: \"6b2d7f94-92b7-4593-8496-31db09afdf39\") " Jan 27 08:57:30 crc kubenswrapper[4985]: I0127 08:57:30.710143 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b2d7f94-92b7-4593-8496-31db09afdf39-utilities" (OuterVolumeSpecName: "utilities") pod "6b2d7f94-92b7-4593-8496-31db09afdf39" (UID: "6b2d7f94-92b7-4593-8496-31db09afdf39"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:57:30 crc kubenswrapper[4985]: I0127 08:57:30.710762 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b2d7f94-92b7-4593-8496-31db09afdf39-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 08:57:30 crc kubenswrapper[4985]: I0127 08:57:30.713083 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b2d7f94-92b7-4593-8496-31db09afdf39-kube-api-access-gjmrj" (OuterVolumeSpecName: "kube-api-access-gjmrj") pod "6b2d7f94-92b7-4593-8496-31db09afdf39" (UID: "6b2d7f94-92b7-4593-8496-31db09afdf39"). InnerVolumeSpecName "kube-api-access-gjmrj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:57:30 crc kubenswrapper[4985]: I0127 08:57:30.770319 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b2d7f94-92b7-4593-8496-31db09afdf39-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6b2d7f94-92b7-4593-8496-31db09afdf39" (UID: "6b2d7f94-92b7-4593-8496-31db09afdf39"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:57:30 crc kubenswrapper[4985]: I0127 08:57:30.813765 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b2d7f94-92b7-4593-8496-31db09afdf39-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 08:57:30 crc kubenswrapper[4985]: I0127 08:57:30.813911 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjmrj\" (UniqueName: \"kubernetes.io/projected/6b2d7f94-92b7-4593-8496-31db09afdf39-kube-api-access-gjmrj\") on node \"crc\" DevicePath \"\"" Jan 27 08:57:31 crc kubenswrapper[4985]: I0127 08:57:31.148729 4985 generic.go:334] "Generic (PLEG): container finished" podID="6b2d7f94-92b7-4593-8496-31db09afdf39" containerID="481973bc4071e03d0d9cb95c606c90b1cdafea69264ca5a1316b79e92694be17" exitCode=0 Jan 27 08:57:31 crc kubenswrapper[4985]: I0127 08:57:31.148791 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m49st" event={"ID":"6b2d7f94-92b7-4593-8496-31db09afdf39","Type":"ContainerDied","Data":"481973bc4071e03d0d9cb95c606c90b1cdafea69264ca5a1316b79e92694be17"} Jan 27 08:57:31 crc kubenswrapper[4985]: I0127 08:57:31.148884 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m49st" event={"ID":"6b2d7f94-92b7-4593-8496-31db09afdf39","Type":"ContainerDied","Data":"633ceb15f583ecca9f85fb8454223b06ceeef28c3151a2672a60b99a2f9b2219"} Jan 27 08:57:31 crc kubenswrapper[4985]: I0127 08:57:31.148919 4985 scope.go:117] "RemoveContainer" containerID="481973bc4071e03d0d9cb95c606c90b1cdafea69264ca5a1316b79e92694be17" Jan 27 08:57:31 crc kubenswrapper[4985]: I0127 08:57:31.149810 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-m49st" Jan 27 08:57:31 crc kubenswrapper[4985]: I0127 08:57:31.174158 4985 scope.go:117] "RemoveContainer" containerID="c40fb8f56e1f651727497235590ab52179b733caa8e80f33420054ea646288de" Jan 27 08:57:31 crc kubenswrapper[4985]: I0127 08:57:31.187066 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-m49st"] Jan 27 08:57:31 crc kubenswrapper[4985]: I0127 08:57:31.196813 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-m49st"] Jan 27 08:57:31 crc kubenswrapper[4985]: I0127 08:57:31.203167 4985 scope.go:117] "RemoveContainer" containerID="79e7e80f963e8d582b5b4f72ddb03fcbc7d41781d7bd5a31f0c4ea0660074299" Jan 27 08:57:31 crc kubenswrapper[4985]: I0127 08:57:31.225288 4985 scope.go:117] "RemoveContainer" containerID="481973bc4071e03d0d9cb95c606c90b1cdafea69264ca5a1316b79e92694be17" Jan 27 08:57:31 crc kubenswrapper[4985]: E0127 08:57:31.225753 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"481973bc4071e03d0d9cb95c606c90b1cdafea69264ca5a1316b79e92694be17\": container with ID starting with 481973bc4071e03d0d9cb95c606c90b1cdafea69264ca5a1316b79e92694be17 not found: ID does not exist" containerID="481973bc4071e03d0d9cb95c606c90b1cdafea69264ca5a1316b79e92694be17" Jan 27 08:57:31 crc kubenswrapper[4985]: I0127 08:57:31.225796 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"481973bc4071e03d0d9cb95c606c90b1cdafea69264ca5a1316b79e92694be17"} err="failed to get container status \"481973bc4071e03d0d9cb95c606c90b1cdafea69264ca5a1316b79e92694be17\": rpc error: code = NotFound desc = could not find container \"481973bc4071e03d0d9cb95c606c90b1cdafea69264ca5a1316b79e92694be17\": container with ID starting with 481973bc4071e03d0d9cb95c606c90b1cdafea69264ca5a1316b79e92694be17 not 
found: ID does not exist" Jan 27 08:57:31 crc kubenswrapper[4985]: I0127 08:57:31.225829 4985 scope.go:117] "RemoveContainer" containerID="c40fb8f56e1f651727497235590ab52179b733caa8e80f33420054ea646288de" Jan 27 08:57:31 crc kubenswrapper[4985]: E0127 08:57:31.226266 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c40fb8f56e1f651727497235590ab52179b733caa8e80f33420054ea646288de\": container with ID starting with c40fb8f56e1f651727497235590ab52179b733caa8e80f33420054ea646288de not found: ID does not exist" containerID="c40fb8f56e1f651727497235590ab52179b733caa8e80f33420054ea646288de" Jan 27 08:57:31 crc kubenswrapper[4985]: I0127 08:57:31.226308 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c40fb8f56e1f651727497235590ab52179b733caa8e80f33420054ea646288de"} err="failed to get container status \"c40fb8f56e1f651727497235590ab52179b733caa8e80f33420054ea646288de\": rpc error: code = NotFound desc = could not find container \"c40fb8f56e1f651727497235590ab52179b733caa8e80f33420054ea646288de\": container with ID starting with c40fb8f56e1f651727497235590ab52179b733caa8e80f33420054ea646288de not found: ID does not exist" Jan 27 08:57:31 crc kubenswrapper[4985]: I0127 08:57:31.226337 4985 scope.go:117] "RemoveContainer" containerID="79e7e80f963e8d582b5b4f72ddb03fcbc7d41781d7bd5a31f0c4ea0660074299" Jan 27 08:57:31 crc kubenswrapper[4985]: E0127 08:57:31.227611 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79e7e80f963e8d582b5b4f72ddb03fcbc7d41781d7bd5a31f0c4ea0660074299\": container with ID starting with 79e7e80f963e8d582b5b4f72ddb03fcbc7d41781d7bd5a31f0c4ea0660074299 not found: ID does not exist" containerID="79e7e80f963e8d582b5b4f72ddb03fcbc7d41781d7bd5a31f0c4ea0660074299" Jan 27 08:57:31 crc kubenswrapper[4985]: I0127 08:57:31.227643 4985 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79e7e80f963e8d582b5b4f72ddb03fcbc7d41781d7bd5a31f0c4ea0660074299"} err="failed to get container status \"79e7e80f963e8d582b5b4f72ddb03fcbc7d41781d7bd5a31f0c4ea0660074299\": rpc error: code = NotFound desc = could not find container \"79e7e80f963e8d582b5b4f72ddb03fcbc7d41781d7bd5a31f0c4ea0660074299\": container with ID starting with 79e7e80f963e8d582b5b4f72ddb03fcbc7d41781d7bd5a31f0c4ea0660074299 not found: ID does not exist" Jan 27 08:57:32 crc kubenswrapper[4985]: I0127 08:57:32.463420 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b2d7f94-92b7-4593-8496-31db09afdf39" path="/var/lib/kubelet/pods/6b2d7f94-92b7-4593-8496-31db09afdf39/volumes" Jan 27 08:57:32 crc kubenswrapper[4985]: I0127 08:57:32.628217 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-t4tc7" podUID="5e3df4d8-af39-4eb4-b2c7-5127144a44a6" containerName="oauth-openshift" containerID="cri-o://221f468cea907b4f9491eb6549aeecf3630a6dda8f0f36ae5cc0ed343406c1cf" gracePeriod=15 Jan 27 08:57:33 crc kubenswrapper[4985]: I0127 08:57:33.030667 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7k25x"] Jan 27 08:57:33 crc kubenswrapper[4985]: I0127 08:57:33.031064 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7k25x" podUID="7aadedcd-5a47-4d8d-a41d-e33a7a760331" containerName="registry-server" containerID="cri-o://eae09ffc6f882532b32962ae46a36b71b9cf27fc87f998818afda734848d7b41" gracePeriod=2 Jan 27 08:57:33 crc kubenswrapper[4985]: I0127 08:57:33.139879 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-t4tc7" Jan 27 08:57:33 crc kubenswrapper[4985]: I0127 08:57:33.148006 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5e3df4d8-af39-4eb4-b2c7-5127144a44a6-v4-0-config-user-template-error\") pod \"5e3df4d8-af39-4eb4-b2c7-5127144a44a6\" (UID: \"5e3df4d8-af39-4eb4-b2c7-5127144a44a6\") " Jan 27 08:57:33 crc kubenswrapper[4985]: I0127 08:57:33.148066 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5e3df4d8-af39-4eb4-b2c7-5127144a44a6-audit-dir\") pod \"5e3df4d8-af39-4eb4-b2c7-5127144a44a6\" (UID: \"5e3df4d8-af39-4eb4-b2c7-5127144a44a6\") " Jan 27 08:57:33 crc kubenswrapper[4985]: I0127 08:57:33.148093 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5e3df4d8-af39-4eb4-b2c7-5127144a44a6-v4-0-config-user-template-login\") pod \"5e3df4d8-af39-4eb4-b2c7-5127144a44a6\" (UID: \"5e3df4d8-af39-4eb4-b2c7-5127144a44a6\") " Jan 27 08:57:33 crc kubenswrapper[4985]: I0127 08:57:33.148122 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5e3df4d8-af39-4eb4-b2c7-5127144a44a6-v4-0-config-system-cliconfig\") pod \"5e3df4d8-af39-4eb4-b2c7-5127144a44a6\" (UID: \"5e3df4d8-af39-4eb4-b2c7-5127144a44a6\") " Jan 27 08:57:33 crc kubenswrapper[4985]: I0127 08:57:33.148197 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5e3df4d8-af39-4eb4-b2c7-5127144a44a6-v4-0-config-system-service-ca\") pod \"5e3df4d8-af39-4eb4-b2c7-5127144a44a6\" (UID: \"5e3df4d8-af39-4eb4-b2c7-5127144a44a6\") " Jan 27 08:57:33 crc 
kubenswrapper[4985]: I0127 08:57:33.148202 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5e3df4d8-af39-4eb4-b2c7-5127144a44a6-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "5e3df4d8-af39-4eb4-b2c7-5127144a44a6" (UID: "5e3df4d8-af39-4eb4-b2c7-5127144a44a6"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 08:57:33 crc kubenswrapper[4985]: I0127 08:57:33.148274 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtnrw\" (UniqueName: \"kubernetes.io/projected/5e3df4d8-af39-4eb4-b2c7-5127144a44a6-kube-api-access-jtnrw\") pod \"5e3df4d8-af39-4eb4-b2c7-5127144a44a6\" (UID: \"5e3df4d8-af39-4eb4-b2c7-5127144a44a6\") " Jan 27 08:57:33 crc kubenswrapper[4985]: I0127 08:57:33.149035 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e3df4d8-af39-4eb4-b2c7-5127144a44a6-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "5e3df4d8-af39-4eb4-b2c7-5127144a44a6" (UID: "5e3df4d8-af39-4eb4-b2c7-5127144a44a6"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:57:33 crc kubenswrapper[4985]: I0127 08:57:33.149141 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e3df4d8-af39-4eb4-b2c7-5127144a44a6-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "5e3df4d8-af39-4eb4-b2c7-5127144a44a6" (UID: "5e3df4d8-af39-4eb4-b2c7-5127144a44a6"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:57:33 crc kubenswrapper[4985]: I0127 08:57:33.149214 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5e3df4d8-af39-4eb4-b2c7-5127144a44a6-v4-0-config-system-serving-cert\") pod \"5e3df4d8-af39-4eb4-b2c7-5127144a44a6\" (UID: \"5e3df4d8-af39-4eb4-b2c7-5127144a44a6\") " Jan 27 08:57:33 crc kubenswrapper[4985]: I0127 08:57:33.149573 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5e3df4d8-af39-4eb4-b2c7-5127144a44a6-audit-policies\") pod \"5e3df4d8-af39-4eb4-b2c7-5127144a44a6\" (UID: \"5e3df4d8-af39-4eb4-b2c7-5127144a44a6\") " Jan 27 08:57:33 crc kubenswrapper[4985]: I0127 08:57:33.149640 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5e3df4d8-af39-4eb4-b2c7-5127144a44a6-v4-0-config-system-session\") pod \"5e3df4d8-af39-4eb4-b2c7-5127144a44a6\" (UID: \"5e3df4d8-af39-4eb4-b2c7-5127144a44a6\") " Jan 27 08:57:33 crc kubenswrapper[4985]: I0127 08:57:33.149680 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5e3df4d8-af39-4eb4-b2c7-5127144a44a6-v4-0-config-system-router-certs\") pod \"5e3df4d8-af39-4eb4-b2c7-5127144a44a6\" (UID: \"5e3df4d8-af39-4eb4-b2c7-5127144a44a6\") " Jan 27 08:57:33 crc kubenswrapper[4985]: I0127 08:57:33.149707 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5e3df4d8-af39-4eb4-b2c7-5127144a44a6-v4-0-config-user-idp-0-file-data\") pod \"5e3df4d8-af39-4eb4-b2c7-5127144a44a6\" (UID: \"5e3df4d8-af39-4eb4-b2c7-5127144a44a6\") " Jan 27 08:57:33 crc kubenswrapper[4985]: I0127 
08:57:33.149735 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5e3df4d8-af39-4eb4-b2c7-5127144a44a6-v4-0-config-system-trusted-ca-bundle\") pod \"5e3df4d8-af39-4eb4-b2c7-5127144a44a6\" (UID: \"5e3df4d8-af39-4eb4-b2c7-5127144a44a6\") " Jan 27 08:57:33 crc kubenswrapper[4985]: I0127 08:57:33.149951 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5e3df4d8-af39-4eb4-b2c7-5127144a44a6-v4-0-config-system-ocp-branding-template\") pod \"5e3df4d8-af39-4eb4-b2c7-5127144a44a6\" (UID: \"5e3df4d8-af39-4eb4-b2c7-5127144a44a6\") " Jan 27 08:57:33 crc kubenswrapper[4985]: I0127 08:57:33.149970 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5e3df4d8-af39-4eb4-b2c7-5127144a44a6-v4-0-config-user-template-provider-selection\") pod \"5e3df4d8-af39-4eb4-b2c7-5127144a44a6\" (UID: \"5e3df4d8-af39-4eb4-b2c7-5127144a44a6\") " Jan 27 08:57:33 crc kubenswrapper[4985]: I0127 08:57:33.149994 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e3df4d8-af39-4eb4-b2c7-5127144a44a6-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "5e3df4d8-af39-4eb4-b2c7-5127144a44a6" (UID: "5e3df4d8-af39-4eb4-b2c7-5127144a44a6"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:57:33 crc kubenswrapper[4985]: I0127 08:57:33.150454 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e3df4d8-af39-4eb4-b2c7-5127144a44a6-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "5e3df4d8-af39-4eb4-b2c7-5127144a44a6" (UID: "5e3df4d8-af39-4eb4-b2c7-5127144a44a6"). 
InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:57:33 crc kubenswrapper[4985]: I0127 08:57:33.151521 4985 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5e3df4d8-af39-4eb4-b2c7-5127144a44a6-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 27 08:57:33 crc kubenswrapper[4985]: I0127 08:57:33.151537 4985 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5e3df4d8-af39-4eb4-b2c7-5127144a44a6-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 08:57:33 crc kubenswrapper[4985]: I0127 08:57:33.151548 4985 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5e3df4d8-af39-4eb4-b2c7-5127144a44a6-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 27 08:57:33 crc kubenswrapper[4985]: I0127 08:57:33.151562 4985 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5e3df4d8-af39-4eb4-b2c7-5127144a44a6-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 08:57:33 crc kubenswrapper[4985]: I0127 08:57:33.151574 4985 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5e3df4d8-af39-4eb4-b2c7-5127144a44a6-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 27 08:57:33 crc kubenswrapper[4985]: I0127 08:57:33.163333 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e3df4d8-af39-4eb4-b2c7-5127144a44a6-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "5e3df4d8-af39-4eb4-b2c7-5127144a44a6" (UID: "5e3df4d8-af39-4eb4-b2c7-5127144a44a6"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:57:33 crc kubenswrapper[4985]: I0127 08:57:33.163664 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e3df4d8-af39-4eb4-b2c7-5127144a44a6-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "5e3df4d8-af39-4eb4-b2c7-5127144a44a6" (UID: "5e3df4d8-af39-4eb4-b2c7-5127144a44a6"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:57:33 crc kubenswrapper[4985]: I0127 08:57:33.163913 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e3df4d8-af39-4eb4-b2c7-5127144a44a6-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "5e3df4d8-af39-4eb4-b2c7-5127144a44a6" (UID: "5e3df4d8-af39-4eb4-b2c7-5127144a44a6"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:57:33 crc kubenswrapper[4985]: I0127 08:57:33.165450 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e3df4d8-af39-4eb4-b2c7-5127144a44a6-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "5e3df4d8-af39-4eb4-b2c7-5127144a44a6" (UID: "5e3df4d8-af39-4eb4-b2c7-5127144a44a6"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:57:33 crc kubenswrapper[4985]: I0127 08:57:33.166864 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e3df4d8-af39-4eb4-b2c7-5127144a44a6-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "5e3df4d8-af39-4eb4-b2c7-5127144a44a6" (UID: "5e3df4d8-af39-4eb4-b2c7-5127144a44a6"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:57:33 crc kubenswrapper[4985]: I0127 08:57:33.169808 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e3df4d8-af39-4eb4-b2c7-5127144a44a6-kube-api-access-jtnrw" (OuterVolumeSpecName: "kube-api-access-jtnrw") pod "5e3df4d8-af39-4eb4-b2c7-5127144a44a6" (UID: "5e3df4d8-af39-4eb4-b2c7-5127144a44a6"). InnerVolumeSpecName "kube-api-access-jtnrw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:57:33 crc kubenswrapper[4985]: I0127 08:57:33.170961 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e3df4d8-af39-4eb4-b2c7-5127144a44a6-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "5e3df4d8-af39-4eb4-b2c7-5127144a44a6" (UID: "5e3df4d8-af39-4eb4-b2c7-5127144a44a6"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:57:33 crc kubenswrapper[4985]: I0127 08:57:33.171676 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e3df4d8-af39-4eb4-b2c7-5127144a44a6-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "5e3df4d8-af39-4eb4-b2c7-5127144a44a6" (UID: "5e3df4d8-af39-4eb4-b2c7-5127144a44a6"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:57:33 crc kubenswrapper[4985]: I0127 08:57:33.171871 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e3df4d8-af39-4eb4-b2c7-5127144a44a6-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "5e3df4d8-af39-4eb4-b2c7-5127144a44a6" (UID: "5e3df4d8-af39-4eb4-b2c7-5127144a44a6"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:57:33 crc kubenswrapper[4985]: I0127 08:57:33.173814 4985 generic.go:334] "Generic (PLEG): container finished" podID="5e3df4d8-af39-4eb4-b2c7-5127144a44a6" containerID="221f468cea907b4f9491eb6549aeecf3630a6dda8f0f36ae5cc0ed343406c1cf" exitCode=0 Jan 27 08:57:33 crc kubenswrapper[4985]: I0127 08:57:33.173908 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-t4tc7" Jan 27 08:57:33 crc kubenswrapper[4985]: I0127 08:57:33.173948 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-t4tc7" event={"ID":"5e3df4d8-af39-4eb4-b2c7-5127144a44a6","Type":"ContainerDied","Data":"221f468cea907b4f9491eb6549aeecf3630a6dda8f0f36ae5cc0ed343406c1cf"} Jan 27 08:57:33 crc kubenswrapper[4985]: I0127 08:57:33.173988 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-t4tc7" event={"ID":"5e3df4d8-af39-4eb4-b2c7-5127144a44a6","Type":"ContainerDied","Data":"1331787492cf06756a2f2fcfeb697f0501794c1b710d6a7ac14550417b8db490"} Jan 27 08:57:33 crc kubenswrapper[4985]: I0127 08:57:33.174010 4985 scope.go:117] "RemoveContainer" containerID="221f468cea907b4f9491eb6549aeecf3630a6dda8f0f36ae5cc0ed343406c1cf" Jan 27 08:57:33 crc kubenswrapper[4985]: I0127 08:57:33.205364 4985 generic.go:334] "Generic (PLEG): container finished" podID="7aadedcd-5a47-4d8d-a41d-e33a7a760331" containerID="eae09ffc6f882532b32962ae46a36b71b9cf27fc87f998818afda734848d7b41" exitCode=0 Jan 27 08:57:33 crc kubenswrapper[4985]: I0127 08:57:33.205562 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7k25x" event={"ID":"7aadedcd-5a47-4d8d-a41d-e33a7a760331","Type":"ContainerDied","Data":"eae09ffc6f882532b32962ae46a36b71b9cf27fc87f998818afda734848d7b41"} Jan 27 08:57:33 crc kubenswrapper[4985]: I0127 08:57:33.239965 4985 
scope.go:117] "RemoveContainer" containerID="221f468cea907b4f9491eb6549aeecf3630a6dda8f0f36ae5cc0ed343406c1cf" Jan 27 08:57:33 crc kubenswrapper[4985]: E0127 08:57:33.240688 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"221f468cea907b4f9491eb6549aeecf3630a6dda8f0f36ae5cc0ed343406c1cf\": container with ID starting with 221f468cea907b4f9491eb6549aeecf3630a6dda8f0f36ae5cc0ed343406c1cf not found: ID does not exist" containerID="221f468cea907b4f9491eb6549aeecf3630a6dda8f0f36ae5cc0ed343406c1cf" Jan 27 08:57:33 crc kubenswrapper[4985]: I0127 08:57:33.240750 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"221f468cea907b4f9491eb6549aeecf3630a6dda8f0f36ae5cc0ed343406c1cf"} err="failed to get container status \"221f468cea907b4f9491eb6549aeecf3630a6dda8f0f36ae5cc0ed343406c1cf\": rpc error: code = NotFound desc = could not find container \"221f468cea907b4f9491eb6549aeecf3630a6dda8f0f36ae5cc0ed343406c1cf\": container with ID starting with 221f468cea907b4f9491eb6549aeecf3630a6dda8f0f36ae5cc0ed343406c1cf not found: ID does not exist" Jan 27 08:57:33 crc kubenswrapper[4985]: I0127 08:57:33.240870 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-t4tc7"] Jan 27 08:57:33 crc kubenswrapper[4985]: I0127 08:57:33.253504 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jtnrw\" (UniqueName: \"kubernetes.io/projected/5e3df4d8-af39-4eb4-b2c7-5127144a44a6-kube-api-access-jtnrw\") on node \"crc\" DevicePath \"\"" Jan 27 08:57:33 crc kubenswrapper[4985]: I0127 08:57:33.253774 4985 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5e3df4d8-af39-4eb4-b2c7-5127144a44a6-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 08:57:33 crc kubenswrapper[4985]: I0127 08:57:33.253872 4985 
reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5e3df4d8-af39-4eb4-b2c7-5127144a44a6-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 27 08:57:33 crc kubenswrapper[4985]: I0127 08:57:33.253940 4985 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5e3df4d8-af39-4eb4-b2c7-5127144a44a6-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 27 08:57:33 crc kubenswrapper[4985]: I0127 08:57:33.253999 4985 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5e3df4d8-af39-4eb4-b2c7-5127144a44a6-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 27 08:57:33 crc kubenswrapper[4985]: I0127 08:57:33.254057 4985 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5e3df4d8-af39-4eb4-b2c7-5127144a44a6-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 27 08:57:33 crc kubenswrapper[4985]: I0127 08:57:33.254116 4985 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5e3df4d8-af39-4eb4-b2c7-5127144a44a6-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 27 08:57:33 crc kubenswrapper[4985]: I0127 08:57:33.254184 4985 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5e3df4d8-af39-4eb4-b2c7-5127144a44a6-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 27 08:57:33 crc kubenswrapper[4985]: I0127 08:57:33.254246 4985 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/5e3df4d8-af39-4eb4-b2c7-5127144a44a6-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 27 08:57:33 crc kubenswrapper[4985]: I0127 08:57:33.253533 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-t4tc7"] Jan 27 08:57:33 crc kubenswrapper[4985]: I0127 08:57:33.567305 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7k25x" Jan 27 08:57:33 crc kubenswrapper[4985]: I0127 08:57:33.659846 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dkfwz\" (UniqueName: \"kubernetes.io/projected/7aadedcd-5a47-4d8d-a41d-e33a7a760331-kube-api-access-dkfwz\") pod \"7aadedcd-5a47-4d8d-a41d-e33a7a760331\" (UID: \"7aadedcd-5a47-4d8d-a41d-e33a7a760331\") " Jan 27 08:57:33 crc kubenswrapper[4985]: I0127 08:57:33.659981 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7aadedcd-5a47-4d8d-a41d-e33a7a760331-catalog-content\") pod \"7aadedcd-5a47-4d8d-a41d-e33a7a760331\" (UID: \"7aadedcd-5a47-4d8d-a41d-e33a7a760331\") " Jan 27 08:57:33 crc kubenswrapper[4985]: I0127 08:57:33.660100 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7aadedcd-5a47-4d8d-a41d-e33a7a760331-utilities\") pod \"7aadedcd-5a47-4d8d-a41d-e33a7a760331\" (UID: \"7aadedcd-5a47-4d8d-a41d-e33a7a760331\") " Jan 27 08:57:33 crc kubenswrapper[4985]: I0127 08:57:33.660908 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7aadedcd-5a47-4d8d-a41d-e33a7a760331-utilities" (OuterVolumeSpecName: "utilities") pod "7aadedcd-5a47-4d8d-a41d-e33a7a760331" (UID: "7aadedcd-5a47-4d8d-a41d-e33a7a760331"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:57:33 crc kubenswrapper[4985]: I0127 08:57:33.663656 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7aadedcd-5a47-4d8d-a41d-e33a7a760331-kube-api-access-dkfwz" (OuterVolumeSpecName: "kube-api-access-dkfwz") pod "7aadedcd-5a47-4d8d-a41d-e33a7a760331" (UID: "7aadedcd-5a47-4d8d-a41d-e33a7a760331"). InnerVolumeSpecName "kube-api-access-dkfwz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:57:33 crc kubenswrapper[4985]: I0127 08:57:33.761506 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7aadedcd-5a47-4d8d-a41d-e33a7a760331-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 08:57:33 crc kubenswrapper[4985]: I0127 08:57:33.761738 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dkfwz\" (UniqueName: \"kubernetes.io/projected/7aadedcd-5a47-4d8d-a41d-e33a7a760331-kube-api-access-dkfwz\") on node \"crc\" DevicePath \"\"" Jan 27 08:57:33 crc kubenswrapper[4985]: I0127 08:57:33.777810 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7aadedcd-5a47-4d8d-a41d-e33a7a760331-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7aadedcd-5a47-4d8d-a41d-e33a7a760331" (UID: "7aadedcd-5a47-4d8d-a41d-e33a7a760331"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:57:33 crc kubenswrapper[4985]: I0127 08:57:33.864173 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7aadedcd-5a47-4d8d-a41d-e33a7a760331-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 08:57:33 crc kubenswrapper[4985]: I0127 08:57:33.979846 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5cf8b98fd-lt7cv"] Jan 27 08:57:33 crc kubenswrapper[4985]: I0127 08:57:33.980168 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5cf8b98fd-lt7cv" podUID="1b3e850a-df48-4fbe-bad9-087c34dbdc7d" containerName="controller-manager" containerID="cri-o://c6c476ccb1d4130ce4e7be6a37c3215b95d2ca294223b09276e44843a9107f87" gracePeriod=30 Jan 27 08:57:33 crc kubenswrapper[4985]: I0127 08:57:33.991131 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76f76c9744-xqhw2"] Jan 27 08:57:33 crc kubenswrapper[4985]: I0127 08:57:33.991413 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-76f76c9744-xqhw2" podUID="21a770bf-c6eb-4287-a160-2b6ba7ab4b7d" containerName="route-controller-manager" containerID="cri-o://117b622312c160ac253aae7c619274ca6761c69fe79d36f0a82297677d230dd4" gracePeriod=30 Jan 27 08:57:34 crc kubenswrapper[4985]: I0127 08:57:34.214658 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7k25x" event={"ID":"7aadedcd-5a47-4d8d-a41d-e33a7a760331","Type":"ContainerDied","Data":"3bf5dadf4e1046d1524a5c65b810d649333ea554a7dd6b45b898abfae7c95699"} Jan 27 08:57:34 crc kubenswrapper[4985]: I0127 08:57:34.214741 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7k25x" Jan 27 08:57:34 crc kubenswrapper[4985]: I0127 08:57:34.215036 4985 scope.go:117] "RemoveContainer" containerID="eae09ffc6f882532b32962ae46a36b71b9cf27fc87f998818afda734848d7b41" Jan 27 08:57:34 crc kubenswrapper[4985]: I0127 08:57:34.230772 4985 scope.go:117] "RemoveContainer" containerID="95691700c45ca7207fd70f0968c76a3707a5b163932c54d4edb75b443a3229b7" Jan 27 08:57:34 crc kubenswrapper[4985]: I0127 08:57:34.257887 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7k25x"] Jan 27 08:57:34 crc kubenswrapper[4985]: I0127 08:57:34.261831 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7k25x"] Jan 27 08:57:34 crc kubenswrapper[4985]: I0127 08:57:34.263735 4985 scope.go:117] "RemoveContainer" containerID="54a67813dfb128f628786a9e568ef62cad0fde383e778857eed15dcd8a2de1be" Jan 27 08:57:34 crc kubenswrapper[4985]: I0127 08:57:34.460326 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e3df4d8-af39-4eb4-b2c7-5127144a44a6" path="/var/lib/kubelet/pods/5e3df4d8-af39-4eb4-b2c7-5127144a44a6/volumes" Jan 27 08:57:34 crc kubenswrapper[4985]: I0127 08:57:34.461705 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7aadedcd-5a47-4d8d-a41d-e33a7a760331" path="/var/lib/kubelet/pods/7aadedcd-5a47-4d8d-a41d-e33a7a760331/volumes" Jan 27 08:57:34 crc kubenswrapper[4985]: I0127 08:57:34.917609 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76f76c9744-xqhw2" Jan 27 08:57:34 crc kubenswrapper[4985]: I0127 08:57:34.978019 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21a770bf-c6eb-4287-a160-2b6ba7ab4b7d-config\") pod \"21a770bf-c6eb-4287-a160-2b6ba7ab4b7d\" (UID: \"21a770bf-c6eb-4287-a160-2b6ba7ab4b7d\") " Jan 27 08:57:34 crc kubenswrapper[4985]: I0127 08:57:34.978174 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21a770bf-c6eb-4287-a160-2b6ba7ab4b7d-client-ca\") pod \"21a770bf-c6eb-4287-a160-2b6ba7ab4b7d\" (UID: \"21a770bf-c6eb-4287-a160-2b6ba7ab4b7d\") " Jan 27 08:57:34 crc kubenswrapper[4985]: I0127 08:57:34.978230 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5cgsb\" (UniqueName: \"kubernetes.io/projected/21a770bf-c6eb-4287-a160-2b6ba7ab4b7d-kube-api-access-5cgsb\") pod \"21a770bf-c6eb-4287-a160-2b6ba7ab4b7d\" (UID: \"21a770bf-c6eb-4287-a160-2b6ba7ab4b7d\") " Jan 27 08:57:34 crc kubenswrapper[4985]: I0127 08:57:34.978317 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21a770bf-c6eb-4287-a160-2b6ba7ab4b7d-serving-cert\") pod \"21a770bf-c6eb-4287-a160-2b6ba7ab4b7d\" (UID: \"21a770bf-c6eb-4287-a160-2b6ba7ab4b7d\") " Jan 27 08:57:34 crc kubenswrapper[4985]: I0127 08:57:34.979146 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21a770bf-c6eb-4287-a160-2b6ba7ab4b7d-client-ca" (OuterVolumeSpecName: "client-ca") pod "21a770bf-c6eb-4287-a160-2b6ba7ab4b7d" (UID: "21a770bf-c6eb-4287-a160-2b6ba7ab4b7d"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:57:34 crc kubenswrapper[4985]: I0127 08:57:34.979268 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21a770bf-c6eb-4287-a160-2b6ba7ab4b7d-config" (OuterVolumeSpecName: "config") pod "21a770bf-c6eb-4287-a160-2b6ba7ab4b7d" (UID: "21a770bf-c6eb-4287-a160-2b6ba7ab4b7d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:57:34 crc kubenswrapper[4985]: I0127 08:57:34.983199 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21a770bf-c6eb-4287-a160-2b6ba7ab4b7d-kube-api-access-5cgsb" (OuterVolumeSpecName: "kube-api-access-5cgsb") pod "21a770bf-c6eb-4287-a160-2b6ba7ab4b7d" (UID: "21a770bf-c6eb-4287-a160-2b6ba7ab4b7d"). InnerVolumeSpecName "kube-api-access-5cgsb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:57:34 crc kubenswrapper[4985]: I0127 08:57:34.983632 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21a770bf-c6eb-4287-a160-2b6ba7ab4b7d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "21a770bf-c6eb-4287-a160-2b6ba7ab4b7d" (UID: "21a770bf-c6eb-4287-a160-2b6ba7ab4b7d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:57:35 crc kubenswrapper[4985]: I0127 08:57:35.080406 4985 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21a770bf-c6eb-4287-a160-2b6ba7ab4b7d-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 08:57:35 crc kubenswrapper[4985]: I0127 08:57:35.080451 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5cgsb\" (UniqueName: \"kubernetes.io/projected/21a770bf-c6eb-4287-a160-2b6ba7ab4b7d-kube-api-access-5cgsb\") on node \"crc\" DevicePath \"\"" Jan 27 08:57:35 crc kubenswrapper[4985]: I0127 08:57:35.080466 4985 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21a770bf-c6eb-4287-a160-2b6ba7ab4b7d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 08:57:35 crc kubenswrapper[4985]: I0127 08:57:35.080476 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21a770bf-c6eb-4287-a160-2b6ba7ab4b7d-config\") on node \"crc\" DevicePath \"\"" Jan 27 08:57:35 crc kubenswrapper[4985]: I0127 08:57:35.190148 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5ff76f9d89-sjczf"] Jan 27 08:57:35 crc kubenswrapper[4985]: E0127 08:57:35.190385 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e3df4d8-af39-4eb4-b2c7-5127144a44a6" containerName="oauth-openshift" Jan 27 08:57:35 crc kubenswrapper[4985]: I0127 08:57:35.190399 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e3df4d8-af39-4eb4-b2c7-5127144a44a6" containerName="oauth-openshift" Jan 27 08:57:35 crc kubenswrapper[4985]: E0127 08:57:35.190408 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21a770bf-c6eb-4287-a160-2b6ba7ab4b7d" containerName="route-controller-manager" Jan 27 08:57:35 crc kubenswrapper[4985]: I0127 08:57:35.190413 4985 
state_mem.go:107] "Deleted CPUSet assignment" podUID="21a770bf-c6eb-4287-a160-2b6ba7ab4b7d" containerName="route-controller-manager" Jan 27 08:57:35 crc kubenswrapper[4985]: E0127 08:57:35.190425 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b2d7f94-92b7-4593-8496-31db09afdf39" containerName="extract-utilities" Jan 27 08:57:35 crc kubenswrapper[4985]: I0127 08:57:35.190431 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b2d7f94-92b7-4593-8496-31db09afdf39" containerName="extract-utilities" Jan 27 08:57:35 crc kubenswrapper[4985]: E0127 08:57:35.190439 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa17d66c-2d07-4ce5-bfc8-45bb31adf066" containerName="registry-server" Jan 27 08:57:35 crc kubenswrapper[4985]: I0127 08:57:35.190448 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa17d66c-2d07-4ce5-bfc8-45bb31adf066" containerName="registry-server" Jan 27 08:57:35 crc kubenswrapper[4985]: E0127 08:57:35.190456 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7aadedcd-5a47-4d8d-a41d-e33a7a760331" containerName="extract-utilities" Jan 27 08:57:35 crc kubenswrapper[4985]: I0127 08:57:35.190463 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="7aadedcd-5a47-4d8d-a41d-e33a7a760331" containerName="extract-utilities" Jan 27 08:57:35 crc kubenswrapper[4985]: E0127 08:57:35.190471 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa17d66c-2d07-4ce5-bfc8-45bb31adf066" containerName="extract-content" Jan 27 08:57:35 crc kubenswrapper[4985]: I0127 08:57:35.190478 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa17d66c-2d07-4ce5-bfc8-45bb31adf066" containerName="extract-content" Jan 27 08:57:35 crc kubenswrapper[4985]: E0127 08:57:35.190489 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd8d30fe-7369-4ea0-830d-b8fffca6bd10" containerName="registry-server" Jan 27 08:57:35 crc kubenswrapper[4985]: I0127 08:57:35.190496 
4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd8d30fe-7369-4ea0-830d-b8fffca6bd10" containerName="registry-server" Jan 27 08:57:35 crc kubenswrapper[4985]: E0127 08:57:35.190521 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd8d30fe-7369-4ea0-830d-b8fffca6bd10" containerName="extract-content" Jan 27 08:57:35 crc kubenswrapper[4985]: I0127 08:57:35.190528 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd8d30fe-7369-4ea0-830d-b8fffca6bd10" containerName="extract-content" Jan 27 08:57:35 crc kubenswrapper[4985]: E0127 08:57:35.190538 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa17d66c-2d07-4ce5-bfc8-45bb31adf066" containerName="extract-utilities" Jan 27 08:57:35 crc kubenswrapper[4985]: I0127 08:57:35.190544 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa17d66c-2d07-4ce5-bfc8-45bb31adf066" containerName="extract-utilities" Jan 27 08:57:35 crc kubenswrapper[4985]: E0127 08:57:35.190550 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7aadedcd-5a47-4d8d-a41d-e33a7a760331" containerName="extract-content" Jan 27 08:57:35 crc kubenswrapper[4985]: I0127 08:57:35.190556 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="7aadedcd-5a47-4d8d-a41d-e33a7a760331" containerName="extract-content" Jan 27 08:57:35 crc kubenswrapper[4985]: E0127 08:57:35.190564 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd8d30fe-7369-4ea0-830d-b8fffca6bd10" containerName="extract-utilities" Jan 27 08:57:35 crc kubenswrapper[4985]: I0127 08:57:35.190569 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd8d30fe-7369-4ea0-830d-b8fffca6bd10" containerName="extract-utilities" Jan 27 08:57:35 crc kubenswrapper[4985]: E0127 08:57:35.190576 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b2d7f94-92b7-4593-8496-31db09afdf39" containerName="registry-server" Jan 27 08:57:35 crc kubenswrapper[4985]: I0127 08:57:35.190583 4985 
state_mem.go:107] "Deleted CPUSet assignment" podUID="6b2d7f94-92b7-4593-8496-31db09afdf39" containerName="registry-server" Jan 27 08:57:35 crc kubenswrapper[4985]: E0127 08:57:35.190591 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7aadedcd-5a47-4d8d-a41d-e33a7a760331" containerName="registry-server" Jan 27 08:57:35 crc kubenswrapper[4985]: I0127 08:57:35.190597 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="7aadedcd-5a47-4d8d-a41d-e33a7a760331" containerName="registry-server" Jan 27 08:57:35 crc kubenswrapper[4985]: E0127 08:57:35.190604 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b2d7f94-92b7-4593-8496-31db09afdf39" containerName="extract-content" Jan 27 08:57:35 crc kubenswrapper[4985]: I0127 08:57:35.190610 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b2d7f94-92b7-4593-8496-31db09afdf39" containerName="extract-content" Jan 27 08:57:35 crc kubenswrapper[4985]: I0127 08:57:35.190743 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="21a770bf-c6eb-4287-a160-2b6ba7ab4b7d" containerName="route-controller-manager" Jan 27 08:57:35 crc kubenswrapper[4985]: I0127 08:57:35.190759 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b2d7f94-92b7-4593-8496-31db09afdf39" containerName="registry-server" Jan 27 08:57:35 crc kubenswrapper[4985]: I0127 08:57:35.190772 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd8d30fe-7369-4ea0-830d-b8fffca6bd10" containerName="registry-server" Jan 27 08:57:35 crc kubenswrapper[4985]: I0127 08:57:35.190784 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa17d66c-2d07-4ce5-bfc8-45bb31adf066" containerName="registry-server" Jan 27 08:57:35 crc kubenswrapper[4985]: I0127 08:57:35.190794 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="7aadedcd-5a47-4d8d-a41d-e33a7a760331" containerName="registry-server" Jan 27 08:57:35 crc kubenswrapper[4985]: I0127 
08:57:35.190805 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e3df4d8-af39-4eb4-b2c7-5127144a44a6" containerName="oauth-openshift" Jan 27 08:57:35 crc kubenswrapper[4985]: I0127 08:57:35.191243 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5ff76f9d89-sjczf" Jan 27 08:57:35 crc kubenswrapper[4985]: I0127 08:57:35.206069 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5ff76f9d89-sjczf"] Jan 27 08:57:35 crc kubenswrapper[4985]: I0127 08:57:35.224307 4985 generic.go:334] "Generic (PLEG): container finished" podID="21a770bf-c6eb-4287-a160-2b6ba7ab4b7d" containerID="117b622312c160ac253aae7c619274ca6761c69fe79d36f0a82297677d230dd4" exitCode=0 Jan 27 08:57:35 crc kubenswrapper[4985]: I0127 08:57:35.224353 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76f76c9744-xqhw2" event={"ID":"21a770bf-c6eb-4287-a160-2b6ba7ab4b7d","Type":"ContainerDied","Data":"117b622312c160ac253aae7c619274ca6761c69fe79d36f0a82297677d230dd4"} Jan 27 08:57:35 crc kubenswrapper[4985]: I0127 08:57:35.224390 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76f76c9744-xqhw2" Jan 27 08:57:35 crc kubenswrapper[4985]: I0127 08:57:35.224406 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76f76c9744-xqhw2" event={"ID":"21a770bf-c6eb-4287-a160-2b6ba7ab4b7d","Type":"ContainerDied","Data":"d0048480232552d26228bc5e9bef56e698989bd09116b79cbf5f784d49dde865"} Jan 27 08:57:35 crc kubenswrapper[4985]: I0127 08:57:35.224430 4985 scope.go:117] "RemoveContainer" containerID="117b622312c160ac253aae7c619274ca6761c69fe79d36f0a82297677d230dd4" Jan 27 08:57:35 crc kubenswrapper[4985]: I0127 08:57:35.226413 4985 generic.go:334] "Generic (PLEG): container finished" podID="1b3e850a-df48-4fbe-bad9-087c34dbdc7d" containerID="c6c476ccb1d4130ce4e7be6a37c3215b95d2ca294223b09276e44843a9107f87" exitCode=0 Jan 27 08:57:35 crc kubenswrapper[4985]: I0127 08:57:35.226441 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5cf8b98fd-lt7cv" event={"ID":"1b3e850a-df48-4fbe-bad9-087c34dbdc7d","Type":"ContainerDied","Data":"c6c476ccb1d4130ce4e7be6a37c3215b95d2ca294223b09276e44843a9107f87"} Jan 27 08:57:35 crc kubenswrapper[4985]: I0127 08:57:35.246848 4985 scope.go:117] "RemoveContainer" containerID="117b622312c160ac253aae7c619274ca6761c69fe79d36f0a82297677d230dd4" Jan 27 08:57:35 crc kubenswrapper[4985]: E0127 08:57:35.249492 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"117b622312c160ac253aae7c619274ca6761c69fe79d36f0a82297677d230dd4\": container with ID starting with 117b622312c160ac253aae7c619274ca6761c69fe79d36f0a82297677d230dd4 not found: ID does not exist" containerID="117b622312c160ac253aae7c619274ca6761c69fe79d36f0a82297677d230dd4" Jan 27 08:57:35 crc kubenswrapper[4985]: I0127 08:57:35.249574 4985 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"117b622312c160ac253aae7c619274ca6761c69fe79d36f0a82297677d230dd4"} err="failed to get container status \"117b622312c160ac253aae7c619274ca6761c69fe79d36f0a82297677d230dd4\": rpc error: code = NotFound desc = could not find container \"117b622312c160ac253aae7c619274ca6761c69fe79d36f0a82297677d230dd4\": container with ID starting with 117b622312c160ac253aae7c619274ca6761c69fe79d36f0a82297677d230dd4 not found: ID does not exist" Jan 27 08:57:35 crc kubenswrapper[4985]: I0127 08:57:35.259211 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76f76c9744-xqhw2"] Jan 27 08:57:35 crc kubenswrapper[4985]: I0127 08:57:35.261949 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76f76c9744-xqhw2"] Jan 27 08:57:35 crc kubenswrapper[4985]: I0127 08:57:35.283567 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43aa4c41-eb24-4d27-bbc1-d104efef8110-serving-cert\") pod \"route-controller-manager-5ff76f9d89-sjczf\" (UID: \"43aa4c41-eb24-4d27-bbc1-d104efef8110\") " pod="openshift-route-controller-manager/route-controller-manager-5ff76f9d89-sjczf" Jan 27 08:57:35 crc kubenswrapper[4985]: I0127 08:57:35.283656 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgtrp\" (UniqueName: \"kubernetes.io/projected/43aa4c41-eb24-4d27-bbc1-d104efef8110-kube-api-access-cgtrp\") pod \"route-controller-manager-5ff76f9d89-sjczf\" (UID: \"43aa4c41-eb24-4d27-bbc1-d104efef8110\") " pod="openshift-route-controller-manager/route-controller-manager-5ff76f9d89-sjczf" Jan 27 08:57:35 crc kubenswrapper[4985]: I0127 08:57:35.283798 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/43aa4c41-eb24-4d27-bbc1-d104efef8110-config\") pod \"route-controller-manager-5ff76f9d89-sjczf\" (UID: \"43aa4c41-eb24-4d27-bbc1-d104efef8110\") " pod="openshift-route-controller-manager/route-controller-manager-5ff76f9d89-sjczf" Jan 27 08:57:35 crc kubenswrapper[4985]: I0127 08:57:35.283950 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/43aa4c41-eb24-4d27-bbc1-d104efef8110-client-ca\") pod \"route-controller-manager-5ff76f9d89-sjczf\" (UID: \"43aa4c41-eb24-4d27-bbc1-d104efef8110\") " pod="openshift-route-controller-manager/route-controller-manager-5ff76f9d89-sjczf" Jan 27 08:57:35 crc kubenswrapper[4985]: I0127 08:57:35.385179 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgtrp\" (UniqueName: \"kubernetes.io/projected/43aa4c41-eb24-4d27-bbc1-d104efef8110-kube-api-access-cgtrp\") pod \"route-controller-manager-5ff76f9d89-sjczf\" (UID: \"43aa4c41-eb24-4d27-bbc1-d104efef8110\") " pod="openshift-route-controller-manager/route-controller-manager-5ff76f9d89-sjczf" Jan 27 08:57:35 crc kubenswrapper[4985]: I0127 08:57:35.385297 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43aa4c41-eb24-4d27-bbc1-d104efef8110-config\") pod \"route-controller-manager-5ff76f9d89-sjczf\" (UID: \"43aa4c41-eb24-4d27-bbc1-d104efef8110\") " pod="openshift-route-controller-manager/route-controller-manager-5ff76f9d89-sjczf" Jan 27 08:57:35 crc kubenswrapper[4985]: I0127 08:57:35.386581 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/43aa4c41-eb24-4d27-bbc1-d104efef8110-client-ca\") pod \"route-controller-manager-5ff76f9d89-sjczf\" (UID: \"43aa4c41-eb24-4d27-bbc1-d104efef8110\") " 
pod="openshift-route-controller-manager/route-controller-manager-5ff76f9d89-sjczf" Jan 27 08:57:35 crc kubenswrapper[4985]: I0127 08:57:35.386817 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43aa4c41-eb24-4d27-bbc1-d104efef8110-config\") pod \"route-controller-manager-5ff76f9d89-sjczf\" (UID: \"43aa4c41-eb24-4d27-bbc1-d104efef8110\") " pod="openshift-route-controller-manager/route-controller-manager-5ff76f9d89-sjczf" Jan 27 08:57:35 crc kubenswrapper[4985]: I0127 08:57:35.386902 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/43aa4c41-eb24-4d27-bbc1-d104efef8110-client-ca\") pod \"route-controller-manager-5ff76f9d89-sjczf\" (UID: \"43aa4c41-eb24-4d27-bbc1-d104efef8110\") " pod="openshift-route-controller-manager/route-controller-manager-5ff76f9d89-sjczf" Jan 27 08:57:35 crc kubenswrapper[4985]: I0127 08:57:35.387259 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43aa4c41-eb24-4d27-bbc1-d104efef8110-serving-cert\") pod \"route-controller-manager-5ff76f9d89-sjczf\" (UID: \"43aa4c41-eb24-4d27-bbc1-d104efef8110\") " pod="openshift-route-controller-manager/route-controller-manager-5ff76f9d89-sjczf" Jan 27 08:57:35 crc kubenswrapper[4985]: I0127 08:57:35.391215 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43aa4c41-eb24-4d27-bbc1-d104efef8110-serving-cert\") pod \"route-controller-manager-5ff76f9d89-sjczf\" (UID: \"43aa4c41-eb24-4d27-bbc1-d104efef8110\") " pod="openshift-route-controller-manager/route-controller-manager-5ff76f9d89-sjczf" Jan 27 08:57:35 crc kubenswrapper[4985]: I0127 08:57:35.403442 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgtrp\" (UniqueName: 
\"kubernetes.io/projected/43aa4c41-eb24-4d27-bbc1-d104efef8110-kube-api-access-cgtrp\") pod \"route-controller-manager-5ff76f9d89-sjczf\" (UID: \"43aa4c41-eb24-4d27-bbc1-d104efef8110\") " pod="openshift-route-controller-manager/route-controller-manager-5ff76f9d89-sjczf" Jan 27 08:57:35 crc kubenswrapper[4985]: I0127 08:57:35.505689 4985 patch_prober.go:28] interesting pod/controller-manager-5cf8b98fd-lt7cv container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.58:8443/healthz\": dial tcp 10.217.0.58:8443: connect: connection refused" start-of-body= Jan 27 08:57:35 crc kubenswrapper[4985]: I0127 08:57:35.505761 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-5cf8b98fd-lt7cv" podUID="1b3e850a-df48-4fbe-bad9-087c34dbdc7d" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.58:8443/healthz\": dial tcp 10.217.0.58:8443: connect: connection refused" Jan 27 08:57:35 crc kubenswrapper[4985]: I0127 08:57:35.508426 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5ff76f9d89-sjczf" Jan 27 08:57:35 crc kubenswrapper[4985]: I0127 08:57:35.779782 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-q7trj" Jan 27 08:57:35 crc kubenswrapper[4985]: I0127 08:57:35.929212 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5ff76f9d89-sjczf"] Jan 27 08:57:35 crc kubenswrapper[4985]: W0127 08:57:35.940894 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod43aa4c41_eb24_4d27_bbc1_d104efef8110.slice/crio-345c376cc9ae4109995018f21d80de5632ada27baa953b84582adb78f94970fb WatchSource:0}: Error finding container 345c376cc9ae4109995018f21d80de5632ada27baa953b84582adb78f94970fb: Status 404 returned error can't find the container with id 345c376cc9ae4109995018f21d80de5632ada27baa953b84582adb78f94970fb Jan 27 08:57:36 crc kubenswrapper[4985]: I0127 08:57:36.236526 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5ff76f9d89-sjczf" event={"ID":"43aa4c41-eb24-4d27-bbc1-d104efef8110","Type":"ContainerStarted","Data":"345c376cc9ae4109995018f21d80de5632ada27baa953b84582adb78f94970fb"} Jan 27 08:57:36 crc kubenswrapper[4985]: I0127 08:57:36.462782 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21a770bf-c6eb-4287-a160-2b6ba7ab4b7d" path="/var/lib/kubelet/pods/21a770bf-c6eb-4287-a160-2b6ba7ab4b7d/volumes" Jan 27 08:57:38 crc kubenswrapper[4985]: I0127 08:57:38.988414 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hwclt" Jan 27 08:57:39 crc kubenswrapper[4985]: I0127 08:57:39.044651 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-operators-hwclt" Jan 27 08:57:40 crc kubenswrapper[4985]: I0127 08:57:40.194498 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-9645b9d-n4lwx"] Jan 27 08:57:40 crc kubenswrapper[4985]: I0127 08:57:40.195501 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-9645b9d-n4lwx" Jan 27 08:57:40 crc kubenswrapper[4985]: I0127 08:57:40.198596 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 27 08:57:40 crc kubenswrapper[4985]: I0127 08:57:40.199026 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 27 08:57:40 crc kubenswrapper[4985]: I0127 08:57:40.199050 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 27 08:57:40 crc kubenswrapper[4985]: I0127 08:57:40.202366 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 27 08:57:40 crc kubenswrapper[4985]: I0127 08:57:40.203016 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 27 08:57:40 crc kubenswrapper[4985]: I0127 08:57:40.203565 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 27 08:57:40 crc kubenswrapper[4985]: I0127 08:57:40.204258 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 27 08:57:40 crc kubenswrapper[4985]: I0127 08:57:40.204273 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 27 08:57:40 crc kubenswrapper[4985]: I0127 
08:57:40.204308 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 27 08:57:40 crc kubenswrapper[4985]: I0127 08:57:40.204434 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 27 08:57:40 crc kubenswrapper[4985]: I0127 08:57:40.206158 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 27 08:57:40 crc kubenswrapper[4985]: I0127 08:57:40.209003 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 27 08:57:40 crc kubenswrapper[4985]: I0127 08:57:40.217317 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-9645b9d-n4lwx"] Jan 27 08:57:40 crc kubenswrapper[4985]: I0127 08:57:40.217633 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 27 08:57:40 crc kubenswrapper[4985]: I0127 08:57:40.222156 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 27 08:57:40 crc kubenswrapper[4985]: I0127 08:57:40.222170 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 27 08:57:40 crc kubenswrapper[4985]: I0127 08:57:40.256262 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7a4eaec2-ccbf-455c-b143-fe3353d83859-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-9645b9d-n4lwx\" (UID: \"7a4eaec2-ccbf-455c-b143-fe3353d83859\") " pod="openshift-authentication/oauth-openshift-9645b9d-n4lwx" Jan 27 08:57:40 crc kubenswrapper[4985]: I0127 
08:57:40.256323 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7a4eaec2-ccbf-455c-b143-fe3353d83859-v4-0-config-system-router-certs\") pod \"oauth-openshift-9645b9d-n4lwx\" (UID: \"7a4eaec2-ccbf-455c-b143-fe3353d83859\") " pod="openshift-authentication/oauth-openshift-9645b9d-n4lwx" Jan 27 08:57:40 crc kubenswrapper[4985]: I0127 08:57:40.256352 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7a4eaec2-ccbf-455c-b143-fe3353d83859-audit-dir\") pod \"oauth-openshift-9645b9d-n4lwx\" (UID: \"7a4eaec2-ccbf-455c-b143-fe3353d83859\") " pod="openshift-authentication/oauth-openshift-9645b9d-n4lwx" Jan 27 08:57:40 crc kubenswrapper[4985]: I0127 08:57:40.256385 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k44mh\" (UniqueName: \"kubernetes.io/projected/7a4eaec2-ccbf-455c-b143-fe3353d83859-kube-api-access-k44mh\") pod \"oauth-openshift-9645b9d-n4lwx\" (UID: \"7a4eaec2-ccbf-455c-b143-fe3353d83859\") " pod="openshift-authentication/oauth-openshift-9645b9d-n4lwx" Jan 27 08:57:40 crc kubenswrapper[4985]: I0127 08:57:40.256430 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7a4eaec2-ccbf-455c-b143-fe3353d83859-v4-0-config-system-serving-cert\") pod \"oauth-openshift-9645b9d-n4lwx\" (UID: \"7a4eaec2-ccbf-455c-b143-fe3353d83859\") " pod="openshift-authentication/oauth-openshift-9645b9d-n4lwx" Jan 27 08:57:40 crc kubenswrapper[4985]: I0127 08:57:40.256462 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/7a4eaec2-ccbf-455c-b143-fe3353d83859-v4-0-config-system-cliconfig\") pod \"oauth-openshift-9645b9d-n4lwx\" (UID: \"7a4eaec2-ccbf-455c-b143-fe3353d83859\") " pod="openshift-authentication/oauth-openshift-9645b9d-n4lwx" Jan 27 08:57:40 crc kubenswrapper[4985]: I0127 08:57:40.256531 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7a4eaec2-ccbf-455c-b143-fe3353d83859-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-9645b9d-n4lwx\" (UID: \"7a4eaec2-ccbf-455c-b143-fe3353d83859\") " pod="openshift-authentication/oauth-openshift-9645b9d-n4lwx" Jan 27 08:57:40 crc kubenswrapper[4985]: I0127 08:57:40.256574 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7a4eaec2-ccbf-455c-b143-fe3353d83859-audit-policies\") pod \"oauth-openshift-9645b9d-n4lwx\" (UID: \"7a4eaec2-ccbf-455c-b143-fe3353d83859\") " pod="openshift-authentication/oauth-openshift-9645b9d-n4lwx" Jan 27 08:57:40 crc kubenswrapper[4985]: I0127 08:57:40.256598 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7a4eaec2-ccbf-455c-b143-fe3353d83859-v4-0-config-system-session\") pod \"oauth-openshift-9645b9d-n4lwx\" (UID: \"7a4eaec2-ccbf-455c-b143-fe3353d83859\") " pod="openshift-authentication/oauth-openshift-9645b9d-n4lwx" Jan 27 08:57:40 crc kubenswrapper[4985]: I0127 08:57:40.256620 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7a4eaec2-ccbf-455c-b143-fe3353d83859-v4-0-config-user-template-error\") pod \"oauth-openshift-9645b9d-n4lwx\" (UID: \"7a4eaec2-ccbf-455c-b143-fe3353d83859\") " 
pod="openshift-authentication/oauth-openshift-9645b9d-n4lwx" Jan 27 08:57:40 crc kubenswrapper[4985]: I0127 08:57:40.256648 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7a4eaec2-ccbf-455c-b143-fe3353d83859-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-9645b9d-n4lwx\" (UID: \"7a4eaec2-ccbf-455c-b143-fe3353d83859\") " pod="openshift-authentication/oauth-openshift-9645b9d-n4lwx" Jan 27 08:57:40 crc kubenswrapper[4985]: I0127 08:57:40.256672 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a4eaec2-ccbf-455c-b143-fe3353d83859-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-9645b9d-n4lwx\" (UID: \"7a4eaec2-ccbf-455c-b143-fe3353d83859\") " pod="openshift-authentication/oauth-openshift-9645b9d-n4lwx" Jan 27 08:57:40 crc kubenswrapper[4985]: I0127 08:57:40.256713 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7a4eaec2-ccbf-455c-b143-fe3353d83859-v4-0-config-system-service-ca\") pod \"oauth-openshift-9645b9d-n4lwx\" (UID: \"7a4eaec2-ccbf-455c-b143-fe3353d83859\") " pod="openshift-authentication/oauth-openshift-9645b9d-n4lwx" Jan 27 08:57:40 crc kubenswrapper[4985]: I0127 08:57:40.256752 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7a4eaec2-ccbf-455c-b143-fe3353d83859-v4-0-config-user-template-login\") pod \"oauth-openshift-9645b9d-n4lwx\" (UID: \"7a4eaec2-ccbf-455c-b143-fe3353d83859\") " pod="openshift-authentication/oauth-openshift-9645b9d-n4lwx" Jan 27 08:57:40 crc kubenswrapper[4985]: I0127 08:57:40.358667 4985 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7a4eaec2-ccbf-455c-b143-fe3353d83859-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-9645b9d-n4lwx\" (UID: \"7a4eaec2-ccbf-455c-b143-fe3353d83859\") " pod="openshift-authentication/oauth-openshift-9645b9d-n4lwx" Jan 27 08:57:40 crc kubenswrapper[4985]: I0127 08:57:40.358800 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7a4eaec2-ccbf-455c-b143-fe3353d83859-audit-policies\") pod \"oauth-openshift-9645b9d-n4lwx\" (UID: \"7a4eaec2-ccbf-455c-b143-fe3353d83859\") " pod="openshift-authentication/oauth-openshift-9645b9d-n4lwx" Jan 27 08:57:40 crc kubenswrapper[4985]: I0127 08:57:40.358857 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7a4eaec2-ccbf-455c-b143-fe3353d83859-v4-0-config-system-session\") pod \"oauth-openshift-9645b9d-n4lwx\" (UID: \"7a4eaec2-ccbf-455c-b143-fe3353d83859\") " pod="openshift-authentication/oauth-openshift-9645b9d-n4lwx" Jan 27 08:57:40 crc kubenswrapper[4985]: I0127 08:57:40.358900 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7a4eaec2-ccbf-455c-b143-fe3353d83859-v4-0-config-user-template-error\") pod \"oauth-openshift-9645b9d-n4lwx\" (UID: \"7a4eaec2-ccbf-455c-b143-fe3353d83859\") " pod="openshift-authentication/oauth-openshift-9645b9d-n4lwx" Jan 27 08:57:40 crc kubenswrapper[4985]: I0127 08:57:40.358959 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7a4eaec2-ccbf-455c-b143-fe3353d83859-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-9645b9d-n4lwx\" (UID: 
\"7a4eaec2-ccbf-455c-b143-fe3353d83859\") " pod="openshift-authentication/oauth-openshift-9645b9d-n4lwx" Jan 27 08:57:40 crc kubenswrapper[4985]: I0127 08:57:40.359007 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a4eaec2-ccbf-455c-b143-fe3353d83859-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-9645b9d-n4lwx\" (UID: \"7a4eaec2-ccbf-455c-b143-fe3353d83859\") " pod="openshift-authentication/oauth-openshift-9645b9d-n4lwx" Jan 27 08:57:40 crc kubenswrapper[4985]: I0127 08:57:40.359069 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7a4eaec2-ccbf-455c-b143-fe3353d83859-v4-0-config-system-service-ca\") pod \"oauth-openshift-9645b9d-n4lwx\" (UID: \"7a4eaec2-ccbf-455c-b143-fe3353d83859\") " pod="openshift-authentication/oauth-openshift-9645b9d-n4lwx" Jan 27 08:57:40 crc kubenswrapper[4985]: I0127 08:57:40.359133 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7a4eaec2-ccbf-455c-b143-fe3353d83859-v4-0-config-user-template-login\") pod \"oauth-openshift-9645b9d-n4lwx\" (UID: \"7a4eaec2-ccbf-455c-b143-fe3353d83859\") " pod="openshift-authentication/oauth-openshift-9645b9d-n4lwx" Jan 27 08:57:40 crc kubenswrapper[4985]: I0127 08:57:40.359186 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7a4eaec2-ccbf-455c-b143-fe3353d83859-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-9645b9d-n4lwx\" (UID: \"7a4eaec2-ccbf-455c-b143-fe3353d83859\") " pod="openshift-authentication/oauth-openshift-9645b9d-n4lwx" Jan 27 08:57:40 crc kubenswrapper[4985]: I0127 08:57:40.359426 4985 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7a4eaec2-ccbf-455c-b143-fe3353d83859-audit-dir\") pod \"oauth-openshift-9645b9d-n4lwx\" (UID: \"7a4eaec2-ccbf-455c-b143-fe3353d83859\") " pod="openshift-authentication/oauth-openshift-9645b9d-n4lwx" Jan 27 08:57:40 crc kubenswrapper[4985]: I0127 08:57:40.359604 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7a4eaec2-ccbf-455c-b143-fe3353d83859-v4-0-config-system-router-certs\") pod \"oauth-openshift-9645b9d-n4lwx\" (UID: \"7a4eaec2-ccbf-455c-b143-fe3353d83859\") " pod="openshift-authentication/oauth-openshift-9645b9d-n4lwx" Jan 27 08:57:40 crc kubenswrapper[4985]: I0127 08:57:40.359663 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k44mh\" (UniqueName: \"kubernetes.io/projected/7a4eaec2-ccbf-455c-b143-fe3353d83859-kube-api-access-k44mh\") pod \"oauth-openshift-9645b9d-n4lwx\" (UID: \"7a4eaec2-ccbf-455c-b143-fe3353d83859\") " pod="openshift-authentication/oauth-openshift-9645b9d-n4lwx" Jan 27 08:57:40 crc kubenswrapper[4985]: I0127 08:57:40.359730 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7a4eaec2-ccbf-455c-b143-fe3353d83859-v4-0-config-system-serving-cert\") pod \"oauth-openshift-9645b9d-n4lwx\" (UID: \"7a4eaec2-ccbf-455c-b143-fe3353d83859\") " pod="openshift-authentication/oauth-openshift-9645b9d-n4lwx" Jan 27 08:57:40 crc kubenswrapper[4985]: I0127 08:57:40.359848 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7a4eaec2-ccbf-455c-b143-fe3353d83859-v4-0-config-system-cliconfig\") pod \"oauth-openshift-9645b9d-n4lwx\" (UID: \"7a4eaec2-ccbf-455c-b143-fe3353d83859\") " 
pod="openshift-authentication/oauth-openshift-9645b9d-n4lwx" Jan 27 08:57:40 crc kubenswrapper[4985]: I0127 08:57:40.360035 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7a4eaec2-ccbf-455c-b143-fe3353d83859-audit-dir\") pod \"oauth-openshift-9645b9d-n4lwx\" (UID: \"7a4eaec2-ccbf-455c-b143-fe3353d83859\") " pod="openshift-authentication/oauth-openshift-9645b9d-n4lwx" Jan 27 08:57:40 crc kubenswrapper[4985]: I0127 08:57:40.360132 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7a4eaec2-ccbf-455c-b143-fe3353d83859-audit-policies\") pod \"oauth-openshift-9645b9d-n4lwx\" (UID: \"7a4eaec2-ccbf-455c-b143-fe3353d83859\") " pod="openshift-authentication/oauth-openshift-9645b9d-n4lwx" Jan 27 08:57:40 crc kubenswrapper[4985]: I0127 08:57:40.360725 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a4eaec2-ccbf-455c-b143-fe3353d83859-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-9645b9d-n4lwx\" (UID: \"7a4eaec2-ccbf-455c-b143-fe3353d83859\") " pod="openshift-authentication/oauth-openshift-9645b9d-n4lwx" Jan 27 08:57:40 crc kubenswrapper[4985]: I0127 08:57:40.361661 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7a4eaec2-ccbf-455c-b143-fe3353d83859-v4-0-config-system-service-ca\") pod \"oauth-openshift-9645b9d-n4lwx\" (UID: \"7a4eaec2-ccbf-455c-b143-fe3353d83859\") " pod="openshift-authentication/oauth-openshift-9645b9d-n4lwx" Jan 27 08:57:40 crc kubenswrapper[4985]: I0127 08:57:40.362296 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7a4eaec2-ccbf-455c-b143-fe3353d83859-v4-0-config-system-cliconfig\") pod 
\"oauth-openshift-9645b9d-n4lwx\" (UID: \"7a4eaec2-ccbf-455c-b143-fe3353d83859\") " pod="openshift-authentication/oauth-openshift-9645b9d-n4lwx" Jan 27 08:57:40 crc kubenswrapper[4985]: I0127 08:57:40.366917 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7a4eaec2-ccbf-455c-b143-fe3353d83859-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-9645b9d-n4lwx\" (UID: \"7a4eaec2-ccbf-455c-b143-fe3353d83859\") " pod="openshift-authentication/oauth-openshift-9645b9d-n4lwx" Jan 27 08:57:40 crc kubenswrapper[4985]: I0127 08:57:40.366911 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7a4eaec2-ccbf-455c-b143-fe3353d83859-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-9645b9d-n4lwx\" (UID: \"7a4eaec2-ccbf-455c-b143-fe3353d83859\") " pod="openshift-authentication/oauth-openshift-9645b9d-n4lwx" Jan 27 08:57:40 crc kubenswrapper[4985]: I0127 08:57:40.366932 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7a4eaec2-ccbf-455c-b143-fe3353d83859-v4-0-config-system-session\") pod \"oauth-openshift-9645b9d-n4lwx\" (UID: \"7a4eaec2-ccbf-455c-b143-fe3353d83859\") " pod="openshift-authentication/oauth-openshift-9645b9d-n4lwx" Jan 27 08:57:40 crc kubenswrapper[4985]: I0127 08:57:40.367581 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7a4eaec2-ccbf-455c-b143-fe3353d83859-v4-0-config-system-serving-cert\") pod \"oauth-openshift-9645b9d-n4lwx\" (UID: \"7a4eaec2-ccbf-455c-b143-fe3353d83859\") " pod="openshift-authentication/oauth-openshift-9645b9d-n4lwx" Jan 27 08:57:40 crc kubenswrapper[4985]: I0127 08:57:40.368682 4985 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7a4eaec2-ccbf-455c-b143-fe3353d83859-v4-0-config-user-template-error\") pod \"oauth-openshift-9645b9d-n4lwx\" (UID: \"7a4eaec2-ccbf-455c-b143-fe3353d83859\") " pod="openshift-authentication/oauth-openshift-9645b9d-n4lwx" Jan 27 08:57:40 crc kubenswrapper[4985]: I0127 08:57:40.370256 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7a4eaec2-ccbf-455c-b143-fe3353d83859-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-9645b9d-n4lwx\" (UID: \"7a4eaec2-ccbf-455c-b143-fe3353d83859\") " pod="openshift-authentication/oauth-openshift-9645b9d-n4lwx" Jan 27 08:57:40 crc kubenswrapper[4985]: I0127 08:57:40.371692 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7a4eaec2-ccbf-455c-b143-fe3353d83859-v4-0-config-user-template-login\") pod \"oauth-openshift-9645b9d-n4lwx\" (UID: \"7a4eaec2-ccbf-455c-b143-fe3353d83859\") " pod="openshift-authentication/oauth-openshift-9645b9d-n4lwx" Jan 27 08:57:40 crc kubenswrapper[4985]: I0127 08:57:40.374631 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7a4eaec2-ccbf-455c-b143-fe3353d83859-v4-0-config-system-router-certs\") pod \"oauth-openshift-9645b9d-n4lwx\" (UID: \"7a4eaec2-ccbf-455c-b143-fe3353d83859\") " pod="openshift-authentication/oauth-openshift-9645b9d-n4lwx" Jan 27 08:57:40 crc kubenswrapper[4985]: I0127 08:57:40.392442 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k44mh\" (UniqueName: \"kubernetes.io/projected/7a4eaec2-ccbf-455c-b143-fe3353d83859-kube-api-access-k44mh\") pod \"oauth-openshift-9645b9d-n4lwx\" (UID: \"7a4eaec2-ccbf-455c-b143-fe3353d83859\") " 
pod="openshift-authentication/oauth-openshift-9645b9d-n4lwx" Jan 27 08:57:40 crc kubenswrapper[4985]: I0127 08:57:40.571649 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-9645b9d-n4lwx" Jan 27 08:57:41 crc kubenswrapper[4985]: I0127 08:57:41.006161 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-9645b9d-n4lwx"] Jan 27 08:57:41 crc kubenswrapper[4985]: I0127 08:57:41.127624 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5cf8b98fd-lt7cv" Jan 27 08:57:41 crc kubenswrapper[4985]: I0127 08:57:41.160014 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7cbcf9f64f-2hrwl"] Jan 27 08:57:41 crc kubenswrapper[4985]: E0127 08:57:41.160291 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b3e850a-df48-4fbe-bad9-087c34dbdc7d" containerName="controller-manager" Jan 27 08:57:41 crc kubenswrapper[4985]: I0127 08:57:41.160307 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b3e850a-df48-4fbe-bad9-087c34dbdc7d" containerName="controller-manager" Jan 27 08:57:41 crc kubenswrapper[4985]: I0127 08:57:41.160422 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b3e850a-df48-4fbe-bad9-087c34dbdc7d" containerName="controller-manager" Jan 27 08:57:41 crc kubenswrapper[4985]: I0127 08:57:41.160906 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7cbcf9f64f-2hrwl" Jan 27 08:57:41 crc kubenswrapper[4985]: I0127 08:57:41.169077 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb14b72f-3069-48d8-8dad-25583d19350d-serving-cert\") pod \"controller-manager-7cbcf9f64f-2hrwl\" (UID: \"cb14b72f-3069-48d8-8dad-25583d19350d\") " pod="openshift-controller-manager/controller-manager-7cbcf9f64f-2hrwl" Jan 27 08:57:41 crc kubenswrapper[4985]: I0127 08:57:41.169166 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5twd\" (UniqueName: \"kubernetes.io/projected/cb14b72f-3069-48d8-8dad-25583d19350d-kube-api-access-j5twd\") pod \"controller-manager-7cbcf9f64f-2hrwl\" (UID: \"cb14b72f-3069-48d8-8dad-25583d19350d\") " pod="openshift-controller-manager/controller-manager-7cbcf9f64f-2hrwl" Jan 27 08:57:41 crc kubenswrapper[4985]: I0127 08:57:41.169188 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb14b72f-3069-48d8-8dad-25583d19350d-config\") pod \"controller-manager-7cbcf9f64f-2hrwl\" (UID: \"cb14b72f-3069-48d8-8dad-25583d19350d\") " pod="openshift-controller-manager/controller-manager-7cbcf9f64f-2hrwl" Jan 27 08:57:41 crc kubenswrapper[4985]: I0127 08:57:41.169215 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cb14b72f-3069-48d8-8dad-25583d19350d-proxy-ca-bundles\") pod \"controller-manager-7cbcf9f64f-2hrwl\" (UID: \"cb14b72f-3069-48d8-8dad-25583d19350d\") " pod="openshift-controller-manager/controller-manager-7cbcf9f64f-2hrwl" Jan 27 08:57:41 crc kubenswrapper[4985]: I0127 08:57:41.169243 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cb14b72f-3069-48d8-8dad-25583d19350d-client-ca\") pod \"controller-manager-7cbcf9f64f-2hrwl\" (UID: \"cb14b72f-3069-48d8-8dad-25583d19350d\") " pod="openshift-controller-manager/controller-manager-7cbcf9f64f-2hrwl" Jan 27 08:57:41 crc kubenswrapper[4985]: I0127 08:57:41.211378 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7cbcf9f64f-2hrwl"] Jan 27 08:57:41 crc kubenswrapper[4985]: I0127 08:57:41.270195 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b3e850a-df48-4fbe-bad9-087c34dbdc7d-config\") pod \"1b3e850a-df48-4fbe-bad9-087c34dbdc7d\" (UID: \"1b3e850a-df48-4fbe-bad9-087c34dbdc7d\") " Jan 27 08:57:41 crc kubenswrapper[4985]: I0127 08:57:41.270636 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b3e850a-df48-4fbe-bad9-087c34dbdc7d-serving-cert\") pod \"1b3e850a-df48-4fbe-bad9-087c34dbdc7d\" (UID: \"1b3e850a-df48-4fbe-bad9-087c34dbdc7d\") " Jan 27 08:57:41 crc kubenswrapper[4985]: I0127 08:57:41.270699 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxklf\" (UniqueName: \"kubernetes.io/projected/1b3e850a-df48-4fbe-bad9-087c34dbdc7d-kube-api-access-dxklf\") pod \"1b3e850a-df48-4fbe-bad9-087c34dbdc7d\" (UID: \"1b3e850a-df48-4fbe-bad9-087c34dbdc7d\") " Jan 27 08:57:41 crc kubenswrapper[4985]: I0127 08:57:41.270785 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1b3e850a-df48-4fbe-bad9-087c34dbdc7d-client-ca\") pod \"1b3e850a-df48-4fbe-bad9-087c34dbdc7d\" (UID: \"1b3e850a-df48-4fbe-bad9-087c34dbdc7d\") " Jan 27 08:57:41 crc kubenswrapper[4985]: I0127 08:57:41.270824 4985 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1b3e850a-df48-4fbe-bad9-087c34dbdc7d-proxy-ca-bundles\") pod \"1b3e850a-df48-4fbe-bad9-087c34dbdc7d\" (UID: \"1b3e850a-df48-4fbe-bad9-087c34dbdc7d\") " Jan 27 08:57:41 crc kubenswrapper[4985]: I0127 08:57:41.270924 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb14b72f-3069-48d8-8dad-25583d19350d-serving-cert\") pod \"controller-manager-7cbcf9f64f-2hrwl\" (UID: \"cb14b72f-3069-48d8-8dad-25583d19350d\") " pod="openshift-controller-manager/controller-manager-7cbcf9f64f-2hrwl" Jan 27 08:57:41 crc kubenswrapper[4985]: I0127 08:57:41.270972 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5twd\" (UniqueName: \"kubernetes.io/projected/cb14b72f-3069-48d8-8dad-25583d19350d-kube-api-access-j5twd\") pod \"controller-manager-7cbcf9f64f-2hrwl\" (UID: \"cb14b72f-3069-48d8-8dad-25583d19350d\") " pod="openshift-controller-manager/controller-manager-7cbcf9f64f-2hrwl" Jan 27 08:57:41 crc kubenswrapper[4985]: I0127 08:57:41.270991 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb14b72f-3069-48d8-8dad-25583d19350d-config\") pod \"controller-manager-7cbcf9f64f-2hrwl\" (UID: \"cb14b72f-3069-48d8-8dad-25583d19350d\") " pod="openshift-controller-manager/controller-manager-7cbcf9f64f-2hrwl" Jan 27 08:57:41 crc kubenswrapper[4985]: I0127 08:57:41.271016 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cb14b72f-3069-48d8-8dad-25583d19350d-proxy-ca-bundles\") pod \"controller-manager-7cbcf9f64f-2hrwl\" (UID: \"cb14b72f-3069-48d8-8dad-25583d19350d\") " pod="openshift-controller-manager/controller-manager-7cbcf9f64f-2hrwl" Jan 27 08:57:41 crc kubenswrapper[4985]: I0127 
08:57:41.271043 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cb14b72f-3069-48d8-8dad-25583d19350d-client-ca\") pod \"controller-manager-7cbcf9f64f-2hrwl\" (UID: \"cb14b72f-3069-48d8-8dad-25583d19350d\") " pod="openshift-controller-manager/controller-manager-7cbcf9f64f-2hrwl" Jan 27 08:57:41 crc kubenswrapper[4985]: I0127 08:57:41.272804 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cb14b72f-3069-48d8-8dad-25583d19350d-client-ca\") pod \"controller-manager-7cbcf9f64f-2hrwl\" (UID: \"cb14b72f-3069-48d8-8dad-25583d19350d\") " pod="openshift-controller-manager/controller-manager-7cbcf9f64f-2hrwl" Jan 27 08:57:41 crc kubenswrapper[4985]: I0127 08:57:41.273183 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cb14b72f-3069-48d8-8dad-25583d19350d-proxy-ca-bundles\") pod \"controller-manager-7cbcf9f64f-2hrwl\" (UID: \"cb14b72f-3069-48d8-8dad-25583d19350d\") " pod="openshift-controller-manager/controller-manager-7cbcf9f64f-2hrwl" Jan 27 08:57:41 crc kubenswrapper[4985]: I0127 08:57:41.275718 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb14b72f-3069-48d8-8dad-25583d19350d-config\") pod \"controller-manager-7cbcf9f64f-2hrwl\" (UID: \"cb14b72f-3069-48d8-8dad-25583d19350d\") " pod="openshift-controller-manager/controller-manager-7cbcf9f64f-2hrwl" Jan 27 08:57:41 crc kubenswrapper[4985]: I0127 08:57:41.275815 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b3e850a-df48-4fbe-bad9-087c34dbdc7d-config" (OuterVolumeSpecName: "config") pod "1b3e850a-df48-4fbe-bad9-087c34dbdc7d" (UID: "1b3e850a-df48-4fbe-bad9-087c34dbdc7d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:57:41 crc kubenswrapper[4985]: I0127 08:57:41.276150 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b3e850a-df48-4fbe-bad9-087c34dbdc7d-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "1b3e850a-df48-4fbe-bad9-087c34dbdc7d" (UID: "1b3e850a-df48-4fbe-bad9-087c34dbdc7d"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:57:41 crc kubenswrapper[4985]: I0127 08:57:41.276791 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b3e850a-df48-4fbe-bad9-087c34dbdc7d-client-ca" (OuterVolumeSpecName: "client-ca") pod "1b3e850a-df48-4fbe-bad9-087c34dbdc7d" (UID: "1b3e850a-df48-4fbe-bad9-087c34dbdc7d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:57:41 crc kubenswrapper[4985]: I0127 08:57:41.278042 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b3e850a-df48-4fbe-bad9-087c34dbdc7d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1b3e850a-df48-4fbe-bad9-087c34dbdc7d" (UID: "1b3e850a-df48-4fbe-bad9-087c34dbdc7d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:57:41 crc kubenswrapper[4985]: I0127 08:57:41.280209 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5ff76f9d89-sjczf" event={"ID":"43aa4c41-eb24-4d27-bbc1-d104efef8110","Type":"ContainerStarted","Data":"0471714166f84421f567c6e8e8b76f4cac84a005de1384486f9622769eaaca58"} Jan 27 08:57:41 crc kubenswrapper[4985]: I0127 08:57:41.280207 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b3e850a-df48-4fbe-bad9-087c34dbdc7d-kube-api-access-dxklf" (OuterVolumeSpecName: "kube-api-access-dxklf") pod "1b3e850a-df48-4fbe-bad9-087c34dbdc7d" (UID: "1b3e850a-df48-4fbe-bad9-087c34dbdc7d"). InnerVolumeSpecName "kube-api-access-dxklf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:57:41 crc kubenswrapper[4985]: I0127 08:57:41.280430 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5ff76f9d89-sjczf" Jan 27 08:57:41 crc kubenswrapper[4985]: I0127 08:57:41.281949 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-9645b9d-n4lwx" event={"ID":"7a4eaec2-ccbf-455c-b143-fe3353d83859","Type":"ContainerStarted","Data":"e1750c2faa09d6e6cafc518bab41292900745cdbf0a31d2cc5146c61170d6573"} Jan 27 08:57:41 crc kubenswrapper[4985]: I0127 08:57:41.283207 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb14b72f-3069-48d8-8dad-25583d19350d-serving-cert\") pod \"controller-manager-7cbcf9f64f-2hrwl\" (UID: \"cb14b72f-3069-48d8-8dad-25583d19350d\") " pod="openshift-controller-manager/controller-manager-7cbcf9f64f-2hrwl" Jan 27 08:57:41 crc kubenswrapper[4985]: I0127 08:57:41.284343 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-5cf8b98fd-lt7cv" event={"ID":"1b3e850a-df48-4fbe-bad9-087c34dbdc7d","Type":"ContainerDied","Data":"672e84929a4c3dd937033895f4dc34674d6fa40c32d61d0f1b483db0d88425f7"} Jan 27 08:57:41 crc kubenswrapper[4985]: I0127 08:57:41.284385 4985 scope.go:117] "RemoveContainer" containerID="c6c476ccb1d4130ce4e7be6a37c3215b95d2ca294223b09276e44843a9107f87" Jan 27 08:57:41 crc kubenswrapper[4985]: I0127 08:57:41.284746 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5cf8b98fd-lt7cv" Jan 27 08:57:41 crc kubenswrapper[4985]: I0127 08:57:41.290211 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5twd\" (UniqueName: \"kubernetes.io/projected/cb14b72f-3069-48d8-8dad-25583d19350d-kube-api-access-j5twd\") pod \"controller-manager-7cbcf9f64f-2hrwl\" (UID: \"cb14b72f-3069-48d8-8dad-25583d19350d\") " pod="openshift-controller-manager/controller-manager-7cbcf9f64f-2hrwl" Jan 27 08:57:41 crc kubenswrapper[4985]: I0127 08:57:41.316747 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5ff76f9d89-sjczf" podStartSLOduration=7.316712498 podStartE2EDuration="7.316712498s" podCreationTimestamp="2026-01-27 08:57:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:57:41.31534662 +0000 UTC m=+245.606441471" watchObservedRunningTime="2026-01-27 08:57:41.316712498 +0000 UTC m=+245.607807329" Jan 27 08:57:41 crc kubenswrapper[4985]: I0127 08:57:41.338051 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5cf8b98fd-lt7cv"] Jan 27 08:57:41 crc kubenswrapper[4985]: I0127 08:57:41.342679 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-controller-manager/controller-manager-5cf8b98fd-lt7cv"] Jan 27 08:57:41 crc kubenswrapper[4985]: I0127 08:57:41.383401 4985 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1b3e850a-df48-4fbe-bad9-087c34dbdc7d-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 08:57:41 crc kubenswrapper[4985]: I0127 08:57:41.383459 4985 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1b3e850a-df48-4fbe-bad9-087c34dbdc7d-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 08:57:41 crc kubenswrapper[4985]: I0127 08:57:41.383504 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b3e850a-df48-4fbe-bad9-087c34dbdc7d-config\") on node \"crc\" DevicePath \"\"" Jan 27 08:57:41 crc kubenswrapper[4985]: I0127 08:57:41.383589 4985 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b3e850a-df48-4fbe-bad9-087c34dbdc7d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 08:57:41 crc kubenswrapper[4985]: I0127 08:57:41.383636 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxklf\" (UniqueName: \"kubernetes.io/projected/1b3e850a-df48-4fbe-bad9-087c34dbdc7d-kube-api-access-dxklf\") on node \"crc\" DevicePath \"\"" Jan 27 08:57:41 crc kubenswrapper[4985]: I0127 08:57:41.483411 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7cbcf9f64f-2hrwl" Jan 27 08:57:41 crc kubenswrapper[4985]: I0127 08:57:41.549125 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5ff76f9d89-sjczf" Jan 27 08:57:42 crc kubenswrapper[4985]: I0127 08:57:42.069506 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7cbcf9f64f-2hrwl"] Jan 27 08:57:42 crc kubenswrapper[4985]: I0127 08:57:42.294883 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7cbcf9f64f-2hrwl" event={"ID":"cb14b72f-3069-48d8-8dad-25583d19350d","Type":"ContainerStarted","Data":"e9fe2a0912ff8e91f76c3bc1c83608adca217cf3b2cbf772cec77e34655f29e1"} Jan 27 08:57:42 crc kubenswrapper[4985]: I0127 08:57:42.296703 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-9645b9d-n4lwx" event={"ID":"7a4eaec2-ccbf-455c-b143-fe3353d83859","Type":"ContainerStarted","Data":"f9471cfe03d649b975e674c48170ace44dbb9777d1afd002529fb9330786ce80"} Jan 27 08:57:42 crc kubenswrapper[4985]: I0127 08:57:42.322050 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-9645b9d-n4lwx" podStartSLOduration=35.322016118 podStartE2EDuration="35.322016118s" podCreationTimestamp="2026-01-27 08:57:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:57:42.318358816 +0000 UTC m=+246.609453667" watchObservedRunningTime="2026-01-27 08:57:42.322016118 +0000 UTC m=+246.613110959" Jan 27 08:57:42 crc kubenswrapper[4985]: I0127 08:57:42.461098 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b3e850a-df48-4fbe-bad9-087c34dbdc7d" 
path="/var/lib/kubelet/pods/1b3e850a-df48-4fbe-bad9-087c34dbdc7d/volumes" Jan 27 08:57:43 crc kubenswrapper[4985]: I0127 08:57:43.304092 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7cbcf9f64f-2hrwl" event={"ID":"cb14b72f-3069-48d8-8dad-25583d19350d","Type":"ContainerStarted","Data":"91e8038f64163c543b59ed39dcfe8963779a24e863f652a7bb770b01cb0da59e"} Jan 27 08:57:43 crc kubenswrapper[4985]: I0127 08:57:43.304799 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-9645b9d-n4lwx" Jan 27 08:57:43 crc kubenswrapper[4985]: I0127 08:57:43.310352 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-9645b9d-n4lwx" Jan 27 08:57:43 crc kubenswrapper[4985]: I0127 08:57:43.326746 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7cbcf9f64f-2hrwl" podStartSLOduration=9.326712971 podStartE2EDuration="9.326712971s" podCreationTimestamp="2026-01-27 08:57:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:57:43.323311527 +0000 UTC m=+247.614406378" watchObservedRunningTime="2026-01-27 08:57:43.326712971 +0000 UTC m=+247.617807812" Jan 27 08:57:43 crc kubenswrapper[4985]: I0127 08:57:43.801315 4985 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 27 08:57:43 crc kubenswrapper[4985]: I0127 08:57:43.802167 4985 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 08:57:43 crc kubenswrapper[4985]: I0127 08:57:43.802289 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 08:57:43 crc kubenswrapper[4985]: I0127 08:57:43.802457 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://8f776820a00f7ca6b6867a188ee633debccb2bc22aaff8c659e452c2667933e6" gracePeriod=15 Jan 27 08:57:43 crc kubenswrapper[4985]: I0127 08:57:43.802579 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://b09da7bae5a0f15ab2c5463373193603d1412170828b99490007cee7b067df63" gracePeriod=15 Jan 27 08:57:43 crc kubenswrapper[4985]: I0127 08:57:43.802634 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://0dd58dd369dc5654aa8ab2d34c77a69607b51e97ae36947b88caf6a2de7c5fbe" gracePeriod=15 Jan 27 08:57:43 crc kubenswrapper[4985]: I0127 08:57:43.802635 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://ba179c4dde0de76f4a8dbfb8ceb98d2e0dbead5e955bd71171914fd913894412" gracePeriod=15 Jan 27 08:57:43 crc kubenswrapper[4985]: I0127 08:57:43.802683 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://dd67389a909eaadfb147d17b98a2147bb96aea606ba994568389cb5be9ee6f93" gracePeriod=15 Jan 27 08:57:43 crc 
kubenswrapper[4985]: I0127 08:57:43.807066 4985 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 08:57:43 crc kubenswrapper[4985]: E0127 08:57:43.807423 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 27 08:57:43 crc kubenswrapper[4985]: I0127 08:57:43.807456 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 27 08:57:43 crc kubenswrapper[4985]: E0127 08:57:43.807469 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 08:57:43 crc kubenswrapper[4985]: I0127 08:57:43.807481 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 08:57:43 crc kubenswrapper[4985]: E0127 08:57:43.807492 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 27 08:57:43 crc kubenswrapper[4985]: I0127 08:57:43.807534 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 27 08:57:43 crc kubenswrapper[4985]: E0127 08:57:43.807554 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 27 08:57:43 crc kubenswrapper[4985]: I0127 08:57:43.807564 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 27 08:57:43 crc kubenswrapper[4985]: E0127 08:57:43.807578 4985 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 27 08:57:43 crc kubenswrapper[4985]: I0127 08:57:43.807587 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 27 08:57:43 crc kubenswrapper[4985]: E0127 08:57:43.807598 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 27 08:57:43 crc kubenswrapper[4985]: I0127 08:57:43.807605 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 27 08:57:43 crc kubenswrapper[4985]: E0127 08:57:43.807637 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 08:57:43 crc kubenswrapper[4985]: I0127 08:57:43.807645 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 08:57:43 crc kubenswrapper[4985]: I0127 08:57:43.807801 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 08:57:43 crc kubenswrapper[4985]: I0127 08:57:43.807825 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 27 08:57:43 crc kubenswrapper[4985]: I0127 08:57:43.807836 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 27 08:57:43 crc kubenswrapper[4985]: I0127 08:57:43.807847 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 27 08:57:43 crc kubenswrapper[4985]: I0127 08:57:43.807860 4985 
memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 27 08:57:43 crc kubenswrapper[4985]: I0127 08:57:43.808143 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 08:57:43 crc kubenswrapper[4985]: I0127 08:57:43.845847 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 08:57:43 crc kubenswrapper[4985]: I0127 08:57:43.845905 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 08:57:43 crc kubenswrapper[4985]: I0127 08:57:43.845948 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 08:57:43 crc kubenswrapper[4985]: I0127 08:57:43.845975 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 08:57:43 crc kubenswrapper[4985]: 
I0127 08:57:43.846002 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 27 08:57:43 crc kubenswrapper[4985]: I0127 08:57:43.846042 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 27 08:57:43 crc kubenswrapper[4985]: I0127 08:57:43.846058 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 08:57:43 crc kubenswrapper[4985]: I0127 08:57:43.846238 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 27 08:57:43 crc kubenswrapper[4985]: I0127 08:57:43.947198 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 08:57:43 crc kubenswrapper[4985]: I0127 08:57:43.947244 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 08:57:43 crc kubenswrapper[4985]: I0127 08:57:43.947282 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 27 08:57:43 crc kubenswrapper[4985]: I0127 08:57:43.947300 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 27 08:57:43 crc kubenswrapper[4985]: I0127 08:57:43.947323 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 27 08:57:43 crc kubenswrapper[4985]: I0127 08:57:43.947363 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 27 08:57:43 crc kubenswrapper[4985]: I0127 08:57:43.947380 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 08:57:43 crc kubenswrapper[4985]: I0127 08:57:43.947367 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 08:57:43 crc kubenswrapper[4985]: I0127 08:57:43.947450 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 08:57:43 crc kubenswrapper[4985]: I0127 08:57:43.947458 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 27 08:57:43 crc kubenswrapper[4985]: I0127 08:57:43.947397 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 27 08:57:43 crc kubenswrapper[4985]: I0127 08:57:43.947495 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 27 08:57:43 crc kubenswrapper[4985]: I0127 08:57:43.947624 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 27 08:57:43 crc kubenswrapper[4985]: I0127 08:57:43.947688 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 27 08:57:43 crc kubenswrapper[4985]: I0127 08:57:43.947697 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 08:57:43 crc kubenswrapper[4985]: I0127 08:57:43.947725 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 27 08:57:44 crc kubenswrapper[4985]: I0127 08:57:44.313854 4985 generic.go:334] "Generic (PLEG): container finished" podID="70566b0c-fabd-4c21-bf39-f772dee30b6a" containerID="3c16c5009496f35daacaf7d1d82d1e906f0f258812ca83eff9aefeba7868b129" exitCode=0
Jan 27 08:57:44 crc kubenswrapper[4985]: I0127 08:57:44.314309 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"70566b0c-fabd-4c21-bf39-f772dee30b6a","Type":"ContainerDied","Data":"3c16c5009496f35daacaf7d1d82d1e906f0f258812ca83eff9aefeba7868b129"}
Jan 27 08:57:44 crc kubenswrapper[4985]: I0127 08:57:44.315053 4985 status_manager.go:851] "Failed to get status for pod" podUID="70566b0c-fabd-4c21-bf39-f772dee30b6a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.192:6443: connect: connection refused"
Jan 27 08:57:44 crc kubenswrapper[4985]: I0127 08:57:44.315211 4985 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.192:6443: connect: connection refused"
Jan 27 08:57:44 crc kubenswrapper[4985]: I0127 08:57:44.317148 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Jan 27 08:57:44 crc kubenswrapper[4985]: I0127 08:57:44.318392 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Jan 27 08:57:44 crc kubenswrapper[4985]: I0127 08:57:44.319005 4985 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="dd67389a909eaadfb147d17b98a2147bb96aea606ba994568389cb5be9ee6f93" exitCode=0
Jan 27 08:57:44 crc kubenswrapper[4985]: I0127 08:57:44.319028 4985 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b09da7bae5a0f15ab2c5463373193603d1412170828b99490007cee7b067df63" exitCode=0
Jan 27 08:57:44 crc kubenswrapper[4985]: I0127 08:57:44.319038 4985 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ba179c4dde0de76f4a8dbfb8ceb98d2e0dbead5e955bd71171914fd913894412" exitCode=0
Jan 27 08:57:44 crc kubenswrapper[4985]: I0127 08:57:44.319047 4985 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="0dd58dd369dc5654aa8ab2d34c77a69607b51e97ae36947b88caf6a2de7c5fbe" exitCode=2
Jan 27 08:57:44 crc kubenswrapper[4985]: I0127 08:57:44.319066 4985 scope.go:117] "RemoveContainer" containerID="b491f71d942363b5d2bab13a5219b0eb9b7f7617c0978bfe012946aa241a13c1"
Jan 27 08:57:44 crc kubenswrapper[4985]: I0127 08:57:44.319656 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7cbcf9f64f-2hrwl"
Jan 27 08:57:44 crc kubenswrapper[4985]: I0127 08:57:44.324626 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7cbcf9f64f-2hrwl"
Jan 27 08:57:44 crc kubenswrapper[4985]: I0127 08:57:44.324973 4985 status_manager.go:851] "Failed to get status for pod" podUID="70566b0c-fabd-4c21-bf39-f772dee30b6a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.192:6443: connect: connection refused"
Jan 27 08:57:44 crc kubenswrapper[4985]: I0127 08:57:44.325126 4985 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.192:6443: connect: connection refused"
Jan 27 08:57:44 crc kubenswrapper[4985]: I0127 08:57:44.325280 4985 status_manager.go:851] "Failed to get status for pod" podUID="cb14b72f-3069-48d8-8dad-25583d19350d" pod="openshift-controller-manager/controller-manager-7cbcf9f64f-2hrwl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-7cbcf9f64f-2hrwl\": dial tcp 38.102.83.192:6443: connect: connection refused"
Jan 27 08:57:44 crc kubenswrapper[4985]: E0127 08:57:44.533347 4985 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.192:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-jn7wk" volumeName="registry-storage"
Jan 27 08:57:45 crc kubenswrapper[4985]: I0127 08:57:45.331212 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Jan 27 08:57:45 crc kubenswrapper[4985]: I0127 08:57:45.627865 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Jan 27 08:57:45 crc kubenswrapper[4985]: I0127 08:57:45.628755 4985 status_manager.go:851] "Failed to get status for pod" podUID="cb14b72f-3069-48d8-8dad-25583d19350d" pod="openshift-controller-manager/controller-manager-7cbcf9f64f-2hrwl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-7cbcf9f64f-2hrwl\": dial tcp 38.102.83.192:6443: connect: connection refused"
Jan 27 08:57:45 crc kubenswrapper[4985]: I0127 08:57:45.629267 4985 status_manager.go:851] "Failed to get status for pod" podUID="70566b0c-fabd-4c21-bf39-f772dee30b6a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.192:6443: connect: connection refused"
Jan 27 08:57:45 crc kubenswrapper[4985]: I0127 08:57:45.674175 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/70566b0c-fabd-4c21-bf39-f772dee30b6a-var-lock\") pod \"70566b0c-fabd-4c21-bf39-f772dee30b6a\" (UID: \"70566b0c-fabd-4c21-bf39-f772dee30b6a\") "
Jan 27 08:57:45 crc kubenswrapper[4985]: I0127 08:57:45.674366 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70566b0c-fabd-4c21-bf39-f772dee30b6a-var-lock" (OuterVolumeSpecName: "var-lock") pod "70566b0c-fabd-4c21-bf39-f772dee30b6a" (UID: "70566b0c-fabd-4c21-bf39-f772dee30b6a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 27 08:57:45 crc kubenswrapper[4985]: I0127 08:57:45.674470 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/70566b0c-fabd-4c21-bf39-f772dee30b6a-kubelet-dir\") pod \"70566b0c-fabd-4c21-bf39-f772dee30b6a\" (UID: \"70566b0c-fabd-4c21-bf39-f772dee30b6a\") "
Jan 27 08:57:45 crc kubenswrapper[4985]: I0127 08:57:45.674619 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70566b0c-fabd-4c21-bf39-f772dee30b6a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "70566b0c-fabd-4c21-bf39-f772dee30b6a" (UID: "70566b0c-fabd-4c21-bf39-f772dee30b6a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 27 08:57:45 crc kubenswrapper[4985]: I0127 08:57:45.674587 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/70566b0c-fabd-4c21-bf39-f772dee30b6a-kube-api-access\") pod \"70566b0c-fabd-4c21-bf39-f772dee30b6a\" (UID: \"70566b0c-fabd-4c21-bf39-f772dee30b6a\") "
Jan 27 08:57:45 crc kubenswrapper[4985]: I0127 08:57:45.676279 4985 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/70566b0c-fabd-4c21-bf39-f772dee30b6a-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 27 08:57:45 crc kubenswrapper[4985]: I0127 08:57:45.676332 4985 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/70566b0c-fabd-4c21-bf39-f772dee30b6a-var-lock\") on node \"crc\" DevicePath \"\""
Jan 27 08:57:45 crc kubenswrapper[4985]: I0127 08:57:45.685189 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70566b0c-fabd-4c21-bf39-f772dee30b6a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "70566b0c-fabd-4c21-bf39-f772dee30b6a" (UID: "70566b0c-fabd-4c21-bf39-f772dee30b6a"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 08:57:45 crc kubenswrapper[4985]: I0127 08:57:45.777651 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/70566b0c-fabd-4c21-bf39-f772dee30b6a-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 27 08:57:46 crc kubenswrapper[4985]: E0127 08:57:46.193791 4985 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.192:6443: connect: connection refused"
Jan 27 08:57:46 crc kubenswrapper[4985]: E0127 08:57:46.194676 4985 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.192:6443: connect: connection refused"
Jan 27 08:57:46 crc kubenswrapper[4985]: E0127 08:57:46.195309 4985 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.192:6443: connect: connection refused"
Jan 27 08:57:46 crc kubenswrapper[4985]: E0127 08:57:46.195649 4985 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.192:6443: connect: connection refused"
Jan 27 08:57:46 crc kubenswrapper[4985]: E0127 08:57:46.195975 4985 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.192:6443: connect: connection refused"
Jan 27 08:57:46 crc kubenswrapper[4985]: I0127 08:57:46.196008 4985 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Jan 27 08:57:46 crc kubenswrapper[4985]: E0127 08:57:46.197979 4985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.192:6443: connect: connection refused" interval="200ms"
Jan 27 08:57:46 crc kubenswrapper[4985]: I0127 08:57:46.198018 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Jan 27 08:57:46 crc kubenswrapper[4985]: I0127 08:57:46.198963 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 08:57:46 crc kubenswrapper[4985]: I0127 08:57:46.199382 4985 status_manager.go:851] "Failed to get status for pod" podUID="70566b0c-fabd-4c21-bf39-f772dee30b6a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.192:6443: connect: connection refused"
Jan 27 08:57:46 crc kubenswrapper[4985]: I0127 08:57:46.199628 4985 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.192:6443: connect: connection refused"
Jan 27 08:57:46 crc kubenswrapper[4985]: I0127 08:57:46.199919 4985 status_manager.go:851] "Failed to get status for pod" podUID="cb14b72f-3069-48d8-8dad-25583d19350d" pod="openshift-controller-manager/controller-manager-7cbcf9f64f-2hrwl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-7cbcf9f64f-2hrwl\": dial tcp 38.102.83.192:6443: connect: connection refused"
Jan 27 08:57:46 crc kubenswrapper[4985]: I0127 08:57:46.286688 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Jan 27 08:57:46 crc kubenswrapper[4985]: I0127 08:57:46.286759 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Jan 27 08:57:46 crc kubenswrapper[4985]: I0127 08:57:46.286820 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Jan 27 08:57:46 crc kubenswrapper[4985]: I0127 08:57:46.286898 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 27 08:57:46 crc kubenswrapper[4985]: I0127 08:57:46.286863 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 27 08:57:46 crc kubenswrapper[4985]: I0127 08:57:46.287010 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 27 08:57:46 crc kubenswrapper[4985]: I0127 08:57:46.287109 4985 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\""
Jan 27 08:57:46 crc kubenswrapper[4985]: I0127 08:57:46.287125 4985 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 27 08:57:46 crc kubenswrapper[4985]: I0127 08:57:46.287135 4985 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\""
Jan 27 08:57:46 crc kubenswrapper[4985]: I0127 08:57:46.341915 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Jan 27 08:57:46 crc kubenswrapper[4985]: I0127 08:57:46.343097 4985 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="8f776820a00f7ca6b6867a188ee633debccb2bc22aaff8c659e452c2667933e6" exitCode=0
Jan 27 08:57:46 crc kubenswrapper[4985]: I0127 08:57:46.343189 4985 scope.go:117] "RemoveContainer" containerID="dd67389a909eaadfb147d17b98a2147bb96aea606ba994568389cb5be9ee6f93"
Jan 27 08:57:46 crc kubenswrapper[4985]: I0127 08:57:46.343242 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 08:57:46 crc kubenswrapper[4985]: I0127 08:57:46.345631 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Jan 27 08:57:46 crc kubenswrapper[4985]: I0127 08:57:46.345614 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"70566b0c-fabd-4c21-bf39-f772dee30b6a","Type":"ContainerDied","Data":"74da2d8709f236a9637904f5bc65c5bc5f09279e5afdb0636cb5ff3b5766014a"}
Jan 27 08:57:46 crc kubenswrapper[4985]: I0127 08:57:46.345863 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74da2d8709f236a9637904f5bc65c5bc5f09279e5afdb0636cb5ff3b5766014a"
Jan 27 08:57:46 crc kubenswrapper[4985]: I0127 08:57:46.361317 4985 status_manager.go:851] "Failed to get status for pod" podUID="cb14b72f-3069-48d8-8dad-25583d19350d" pod="openshift-controller-manager/controller-manager-7cbcf9f64f-2hrwl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-7cbcf9f64f-2hrwl\": dial tcp 38.102.83.192:6443: connect: connection refused"
Jan 27 08:57:46 crc kubenswrapper[4985]: I0127 08:57:46.361495 4985 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.192:6443: connect: connection refused"
Jan 27 08:57:46 crc kubenswrapper[4985]: I0127 08:57:46.362014 4985 status_manager.go:851] "Failed to get status for pod" podUID="70566b0c-fabd-4c21-bf39-f772dee30b6a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.192:6443: connect: connection refused"
Jan 27 08:57:46 crc kubenswrapper[4985]: I0127 08:57:46.363144 4985 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.192:6443: connect: connection refused"
Jan 27 08:57:46 crc kubenswrapper[4985]: I0127 08:57:46.363315 4985 status_manager.go:851] "Failed to get status for pod" podUID="70566b0c-fabd-4c21-bf39-f772dee30b6a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.192:6443: connect: connection refused"
Jan 27 08:57:46 crc kubenswrapper[4985]: I0127 08:57:46.363654 4985 status_manager.go:851] "Failed to get status for pod" podUID="cb14b72f-3069-48d8-8dad-25583d19350d" pod="openshift-controller-manager/controller-manager-7cbcf9f64f-2hrwl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-7cbcf9f64f-2hrwl\": dial tcp 38.102.83.192:6443: connect: connection refused"
Jan 27 08:57:46 crc kubenswrapper[4985]: I0127 08:57:46.368144 4985 scope.go:117] "RemoveContainer" containerID="b09da7bae5a0f15ab2c5463373193603d1412170828b99490007cee7b067df63"
Jan 27 08:57:46 crc kubenswrapper[4985]: I0127 08:57:46.382237 4985 scope.go:117] "RemoveContainer" containerID="ba179c4dde0de76f4a8dbfb8ceb98d2e0dbead5e955bd71171914fd913894412"
Jan 27 08:57:46 crc kubenswrapper[4985]: E0127 08:57:46.399031 4985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.192:6443: connect: connection refused" interval="400ms"
Jan 27 08:57:46 crc kubenswrapper[4985]: I0127 08:57:46.399369 4985 scope.go:117] "RemoveContainer" containerID="0dd58dd369dc5654aa8ab2d34c77a69607b51e97ae36947b88caf6a2de7c5fbe"
Jan 27 08:57:46 crc kubenswrapper[4985]: I0127 08:57:46.413649 4985 scope.go:117] "RemoveContainer" containerID="8f776820a00f7ca6b6867a188ee633debccb2bc22aaff8c659e452c2667933e6"
Jan 27 08:57:46 crc kubenswrapper[4985]: I0127 08:57:46.431124 4985 scope.go:117] "RemoveContainer" containerID="22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739"
Jan 27 08:57:46 crc kubenswrapper[4985]: I0127 08:57:46.454935 4985 status_manager.go:851] "Failed to get status for pod" podUID="70566b0c-fabd-4c21-bf39-f772dee30b6a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.192:6443: connect: connection refused"
Jan 27 08:57:46 crc kubenswrapper[4985]: I0127 08:57:46.455296 4985 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.192:6443: connect: connection refused"
Jan 27 08:57:46 crc kubenswrapper[4985]: I0127 08:57:46.455460 4985 scope.go:117] "RemoveContainer" containerID="dd67389a909eaadfb147d17b98a2147bb96aea606ba994568389cb5be9ee6f93"
Jan 27 08:57:46 crc kubenswrapper[4985]: I0127 08:57:46.455569 4985 status_manager.go:851] "Failed to get status for pod" podUID="cb14b72f-3069-48d8-8dad-25583d19350d" pod="openshift-controller-manager/controller-manager-7cbcf9f64f-2hrwl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-7cbcf9f64f-2hrwl\": dial tcp 38.102.83.192:6443: connect: connection refused"
Jan 27 08:57:46 crc kubenswrapper[4985]: E0127 08:57:46.455889 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd67389a909eaadfb147d17b98a2147bb96aea606ba994568389cb5be9ee6f93\": container with ID starting with dd67389a909eaadfb147d17b98a2147bb96aea606ba994568389cb5be9ee6f93 not found: ID does not exist" containerID="dd67389a909eaadfb147d17b98a2147bb96aea606ba994568389cb5be9ee6f93"
Jan 27 08:57:46 crc kubenswrapper[4985]: I0127 08:57:46.455925 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd67389a909eaadfb147d17b98a2147bb96aea606ba994568389cb5be9ee6f93"} err="failed to get container status \"dd67389a909eaadfb147d17b98a2147bb96aea606ba994568389cb5be9ee6f93\": rpc error: code = NotFound desc = could not find container \"dd67389a909eaadfb147d17b98a2147bb96aea606ba994568389cb5be9ee6f93\": container with ID starting with dd67389a909eaadfb147d17b98a2147bb96aea606ba994568389cb5be9ee6f93 not found: ID does not exist"
Jan 27 08:57:46 crc kubenswrapper[4985]: I0127 08:57:46.455948 4985 scope.go:117] "RemoveContainer" containerID="b09da7bae5a0f15ab2c5463373193603d1412170828b99490007cee7b067df63"
Jan 27 08:57:46 crc kubenswrapper[4985]: E0127 08:57:46.456341 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b09da7bae5a0f15ab2c5463373193603d1412170828b99490007cee7b067df63\": container with ID starting with b09da7bae5a0f15ab2c5463373193603d1412170828b99490007cee7b067df63 not found: ID does not exist" containerID="b09da7bae5a0f15ab2c5463373193603d1412170828b99490007cee7b067df63"
Jan 27 08:57:46 crc kubenswrapper[4985]: I0127 08:57:46.456468 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b09da7bae5a0f15ab2c5463373193603d1412170828b99490007cee7b067df63"} err="failed to get container status \"b09da7bae5a0f15ab2c5463373193603d1412170828b99490007cee7b067df63\": rpc error: code = NotFound desc = could not find container \"b09da7bae5a0f15ab2c5463373193603d1412170828b99490007cee7b067df63\": container with ID starting with b09da7bae5a0f15ab2c5463373193603d1412170828b99490007cee7b067df63 not found: ID does not exist"
Jan 27 08:57:46 crc kubenswrapper[4985]: I0127 08:57:46.456615 4985 scope.go:117] "RemoveContainer" containerID="ba179c4dde0de76f4a8dbfb8ceb98d2e0dbead5e955bd71171914fd913894412"
Jan 27 08:57:46 crc kubenswrapper[4985]: E0127 08:57:46.457250 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba179c4dde0de76f4a8dbfb8ceb98d2e0dbead5e955bd71171914fd913894412\": container with ID starting with ba179c4dde0de76f4a8dbfb8ceb98d2e0dbead5e955bd71171914fd913894412 not found: ID does not exist" containerID="ba179c4dde0de76f4a8dbfb8ceb98d2e0dbead5e955bd71171914fd913894412"
Jan 27 08:57:46 crc kubenswrapper[4985]: I0127 08:57:46.457280 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba179c4dde0de76f4a8dbfb8ceb98d2e0dbead5e955bd71171914fd913894412"} err="failed to get container status \"ba179c4dde0de76f4a8dbfb8ceb98d2e0dbead5e955bd71171914fd913894412\": rpc error: code = NotFound desc = could not find container \"ba179c4dde0de76f4a8dbfb8ceb98d2e0dbead5e955bd71171914fd913894412\": container with ID starting with ba179c4dde0de76f4a8dbfb8ceb98d2e0dbead5e955bd71171914fd913894412 not found: ID does not exist"
Jan 27 08:57:46 crc kubenswrapper[4985]: I0127 08:57:46.457298 4985 scope.go:117] "RemoveContainer" containerID="0dd58dd369dc5654aa8ab2d34c77a69607b51e97ae36947b88caf6a2de7c5fbe"
Jan 27 08:57:46 crc kubenswrapper[4985]: E0127 08:57:46.457939 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0dd58dd369dc5654aa8ab2d34c77a69607b51e97ae36947b88caf6a2de7c5fbe\": container with ID starting with 0dd58dd369dc5654aa8ab2d34c77a69607b51e97ae36947b88caf6a2de7c5fbe not found: ID does not exist" containerID="0dd58dd369dc5654aa8ab2d34c77a69607b51e97ae36947b88caf6a2de7c5fbe"
Jan 27 08:57:46 crc kubenswrapper[4985]: I0127 08:57:46.457960 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0dd58dd369dc5654aa8ab2d34c77a69607b51e97ae36947b88caf6a2de7c5fbe"} err="failed to get container status \"0dd58dd369dc5654aa8ab2d34c77a69607b51e97ae36947b88caf6a2de7c5fbe\": rpc error: code = NotFound desc = could not find container \"0dd58dd369dc5654aa8ab2d34c77a69607b51e97ae36947b88caf6a2de7c5fbe\": container with ID starting with 0dd58dd369dc5654aa8ab2d34c77a69607b51e97ae36947b88caf6a2de7c5fbe not found: ID does not exist"
Jan 27 08:57:46 crc kubenswrapper[4985]: I0127 08:57:46.457973 4985 scope.go:117] "RemoveContainer" containerID="8f776820a00f7ca6b6867a188ee633debccb2bc22aaff8c659e452c2667933e6"
Jan 27 08:57:46 crc kubenswrapper[4985]: E0127 08:57:46.459375 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f776820a00f7ca6b6867a188ee633debccb2bc22aaff8c659e452c2667933e6\": container with ID starting with 8f776820a00f7ca6b6867a188ee633debccb2bc22aaff8c659e452c2667933e6 not found: ID does not exist" containerID="8f776820a00f7ca6b6867a188ee633debccb2bc22aaff8c659e452c2667933e6"
Jan 27 08:57:46 crc kubenswrapper[4985]: I0127 08:57:46.459402 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f776820a00f7ca6b6867a188ee633debccb2bc22aaff8c659e452c2667933e6"} err="failed to get container status \"8f776820a00f7ca6b6867a188ee633debccb2bc22aaff8c659e452c2667933e6\": rpc error: code = NotFound desc = could not find container \"8f776820a00f7ca6b6867a188ee633debccb2bc22aaff8c659e452c2667933e6\": container with ID starting with 8f776820a00f7ca6b6867a188ee633debccb2bc22aaff8c659e452c2667933e6 not found: ID does not exist"
Jan 27 08:57:46 crc kubenswrapper[4985]: I0127 08:57:46.459415 4985 scope.go:117] "RemoveContainer" containerID="22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739"
Jan 27 08:57:46 crc kubenswrapper[4985]: E0127 08:57:46.460725 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739\": container with ID starting with 22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739 not found: ID does not exist" containerID="22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739"
Jan 27 08:57:46 crc kubenswrapper[4985]: I0127 08:57:46.460758 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739"} err="failed to get container status \"22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739\": rpc error: code = NotFound desc = could not find container \"22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739\": container with ID starting with 22633088f0a44b4fb15ff2e7181263bd4217aef7cc7a75072abfdba3ae883739 not found: ID does not exist"
Jan 27 08:57:46 crc kubenswrapper[4985]: I0127 08:57:46.461001 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes"
Jan 27 08:57:46 crc kubenswrapper[4985]: E0127 08:57:46.799538 4985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.192:6443: connect: connection refused" interval="800ms"
Jan 27 08:57:47 crc kubenswrapper[4985]: E0127 08:57:47.600301 4985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.192:6443: connect: connection refused" interval="1.6s"
Jan 27 08:57:48 crc kubenswrapper[4985]: E0127 08:57:48.838111 4985 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.192:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 27 08:57:48 crc kubenswrapper[4985]: I0127 08:57:48.839499 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 27 08:57:48 crc kubenswrapper[4985]: E0127 08:57:48.898173 4985 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.192:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188e8ac1e0258aab openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-27 08:57:48.897061547 +0000 UTC m=+253.188156438,LastTimestamp:2026-01-27 08:57:48.897061547 +0000 UTC m=+253.188156438,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 27 08:57:49 crc kubenswrapper[4985]: E0127 08:57:49.201896 4985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.192:6443: connect: connection refused" interval="3.2s" Jan 27 08:57:49 crc kubenswrapper[4985]: I0127 08:57:49.363100 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"78918556856bd7943e4af1cdbb666d28b3bf63e3385b8f4c291897f9d50a4f50"} Jan 27 08:57:49 crc kubenswrapper[4985]: I0127 08:57:49.363174 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"a8d3d87d5df9ac5bce370c9478e2c0e5f06d78357ca8866d25e186a19779248d"} Jan 27 08:57:49 crc kubenswrapper[4985]: I0127 08:57:49.364087 4985 status_manager.go:851] "Failed to get status for pod" podUID="70566b0c-fabd-4c21-bf39-f772dee30b6a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.192:6443: connect: connection refused" Jan 27 08:57:49 crc kubenswrapper[4985]: E0127 08:57:49.364147 4985 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.192:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 08:57:49 crc kubenswrapper[4985]: I0127 08:57:49.364503 4985 status_manager.go:851] "Failed to get status for pod" podUID="cb14b72f-3069-48d8-8dad-25583d19350d" pod="openshift-controller-manager/controller-manager-7cbcf9f64f-2hrwl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-7cbcf9f64f-2hrwl\": dial tcp 38.102.83.192:6443: connect: 
connection refused" Jan 27 08:57:52 crc kubenswrapper[4985]: E0127 08:57:52.403608 4985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.192:6443: connect: connection refused" interval="6.4s" Jan 27 08:57:56 crc kubenswrapper[4985]: E0127 08:57:56.275166 4985 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.192:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188e8ac1e0258aab openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-27 08:57:48.897061547 +0000 UTC m=+253.188156438,LastTimestamp:2026-01-27 08:57:48.897061547 +0000 UTC m=+253.188156438,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 27 08:57:56 crc kubenswrapper[4985]: I0127 08:57:56.455570 4985 status_manager.go:851] "Failed to get status for pod" podUID="70566b0c-fabd-4c21-bf39-f772dee30b6a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.192:6443: connect: connection refused" Jan 27 08:57:56 crc kubenswrapper[4985]: I0127 
08:57:56.456142 4985 status_manager.go:851] "Failed to get status for pod" podUID="cb14b72f-3069-48d8-8dad-25583d19350d" pod="openshift-controller-manager/controller-manager-7cbcf9f64f-2hrwl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-7cbcf9f64f-2hrwl\": dial tcp 38.102.83.192:6443: connect: connection refused" Jan 27 08:57:57 crc kubenswrapper[4985]: I0127 08:57:57.419463 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 27 08:57:57 crc kubenswrapper[4985]: I0127 08:57:57.419599 4985 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="c207bc405194647411332c492e34b0c95d3321cf626a57a4a750fbcea34f54ec" exitCode=1 Jan 27 08:57:57 crc kubenswrapper[4985]: I0127 08:57:57.419655 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"c207bc405194647411332c492e34b0c95d3321cf626a57a4a750fbcea34f54ec"} Jan 27 08:57:57 crc kubenswrapper[4985]: I0127 08:57:57.420344 4985 scope.go:117] "RemoveContainer" containerID="c207bc405194647411332c492e34b0c95d3321cf626a57a4a750fbcea34f54ec" Jan 27 08:57:57 crc kubenswrapper[4985]: I0127 08:57:57.420892 4985 status_manager.go:851] "Failed to get status for pod" podUID="70566b0c-fabd-4c21-bf39-f772dee30b6a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.192:6443: connect: connection refused" Jan 27 08:57:57 crc kubenswrapper[4985]: I0127 08:57:57.421583 4985 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.192:6443: connect: connection refused" Jan 27 08:57:57 crc kubenswrapper[4985]: I0127 08:57:57.422293 4985 status_manager.go:851] "Failed to get status for pod" podUID="cb14b72f-3069-48d8-8dad-25583d19350d" pod="openshift-controller-manager/controller-manager-7cbcf9f64f-2hrwl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-7cbcf9f64f-2hrwl\": dial tcp 38.102.83.192:6443: connect: connection refused" Jan 27 08:57:58 crc kubenswrapper[4985]: I0127 08:57:58.429382 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 27 08:57:58 crc kubenswrapper[4985]: I0127 08:57:58.429981 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"7f62734f0905d920ddfdccd8fa4ee852f88ff5335289742106b89cac26f20522"} Jan 27 08:57:58 crc kubenswrapper[4985]: I0127 08:57:58.430918 4985 status_manager.go:851] "Failed to get status for pod" podUID="70566b0c-fabd-4c21-bf39-f772dee30b6a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.192:6443: connect: connection refused" Jan 27 08:57:58 crc kubenswrapper[4985]: I0127 08:57:58.431455 4985 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.192:6443: connect: connection refused" Jan 27 08:57:58 crc kubenswrapper[4985]: I0127 08:57:58.431972 4985 status_manager.go:851] "Failed to get status for pod" podUID="cb14b72f-3069-48d8-8dad-25583d19350d" pod="openshift-controller-manager/controller-manager-7cbcf9f64f-2hrwl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-7cbcf9f64f-2hrwl\": dial tcp 38.102.83.192:6443: connect: connection refused" Jan 27 08:57:58 crc kubenswrapper[4985]: I0127 08:57:58.451450 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 08:57:58 crc kubenswrapper[4985]: I0127 08:57:58.452337 4985 status_manager.go:851] "Failed to get status for pod" podUID="70566b0c-fabd-4c21-bf39-f772dee30b6a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.192:6443: connect: connection refused" Jan 27 08:57:58 crc kubenswrapper[4985]: I0127 08:57:58.452957 4985 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.192:6443: connect: connection refused" Jan 27 08:57:58 crc kubenswrapper[4985]: I0127 08:57:58.453484 4985 status_manager.go:851] "Failed to get status for pod" podUID="cb14b72f-3069-48d8-8dad-25583d19350d" pod="openshift-controller-manager/controller-manager-7cbcf9f64f-2hrwl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-7cbcf9f64f-2hrwl\": dial tcp 
38.102.83.192:6443: connect: connection refused" Jan 27 08:57:58 crc kubenswrapper[4985]: I0127 08:57:58.465921 4985 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f9155a9e-2bdf-4f6f-baa7-0d7c6baac37d" Jan 27 08:57:58 crc kubenswrapper[4985]: I0127 08:57:58.465952 4985 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f9155a9e-2bdf-4f6f-baa7-0d7c6baac37d" Jan 27 08:57:58 crc kubenswrapper[4985]: E0127 08:57:58.466375 4985 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.192:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 08:57:58 crc kubenswrapper[4985]: I0127 08:57:58.466925 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 08:57:58 crc kubenswrapper[4985]: W0127 08:57:58.490744 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-a343a9cfaf10ee6f8313feabe55cd028fe0cc417b51218337724e77d8d1e666d WatchSource:0}: Error finding container a343a9cfaf10ee6f8313feabe55cd028fe0cc417b51218337724e77d8d1e666d: Status 404 returned error can't find the container with id a343a9cfaf10ee6f8313feabe55cd028fe0cc417b51218337724e77d8d1e666d Jan 27 08:57:58 crc kubenswrapper[4985]: E0127 08:57:58.806130 4985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.192:6443: connect: connection refused" interval="7s" Jan 27 08:57:58 crc kubenswrapper[4985]: E0127 08:57:58.818099 4985 cadvisor_stats_provider.go:516] "Partial failure issuing 
cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-conmon-74ed397024e08e90fc7e44590926306ebd14856623872cc31bb7e062af075965.scope\": RecentStats: unable to find data in memory cache]" Jan 27 08:57:59 crc kubenswrapper[4985]: I0127 08:57:59.438296 4985 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="74ed397024e08e90fc7e44590926306ebd14856623872cc31bb7e062af075965" exitCode=0 Jan 27 08:57:59 crc kubenswrapper[4985]: I0127 08:57:59.438362 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"74ed397024e08e90fc7e44590926306ebd14856623872cc31bb7e062af075965"} Jan 27 08:57:59 crc kubenswrapper[4985]: I0127 08:57:59.438395 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a343a9cfaf10ee6f8313feabe55cd028fe0cc417b51218337724e77d8d1e666d"} Jan 27 08:57:59 crc kubenswrapper[4985]: I0127 08:57:59.438726 4985 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f9155a9e-2bdf-4f6f-baa7-0d7c6baac37d" Jan 27 08:57:59 crc kubenswrapper[4985]: I0127 08:57:59.438740 4985 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f9155a9e-2bdf-4f6f-baa7-0d7c6baac37d" Jan 27 08:57:59 crc kubenswrapper[4985]: E0127 08:57:59.439392 4985 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.192:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 08:57:59 crc kubenswrapper[4985]: I0127 08:57:59.439560 4985 
status_manager.go:851] "Failed to get status for pod" podUID="cb14b72f-3069-48d8-8dad-25583d19350d" pod="openshift-controller-manager/controller-manager-7cbcf9f64f-2hrwl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-7cbcf9f64f-2hrwl\": dial tcp 38.102.83.192:6443: connect: connection refused" Jan 27 08:57:59 crc kubenswrapper[4985]: I0127 08:57:59.439955 4985 status_manager.go:851] "Failed to get status for pod" podUID="70566b0c-fabd-4c21-bf39-f772dee30b6a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.192:6443: connect: connection refused" Jan 27 08:57:59 crc kubenswrapper[4985]: I0127 08:57:59.440723 4985 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.192:6443: connect: connection refused" Jan 27 08:58:00 crc kubenswrapper[4985]: I0127 08:58:00.449450 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d03977506819b58dfc1d7632801c755bc418a89fe8530853ad7bfb3e10d89bec"} Jan 27 08:58:00 crc kubenswrapper[4985]: I0127 08:58:00.449508 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"6c196b9cf1ecc9bdfae727f717abba9d76b70c936194fd1ee48d918f1f2c1282"} Jan 27 08:58:00 crc kubenswrapper[4985]: I0127 08:58:00.449556 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"7308e96c4cdf1af05fe0a1ef59016dc0e931441fd60dfefcf3e0d73188d1c092"} Jan 27 08:58:01 crc kubenswrapper[4985]: I0127 08:58:01.464638 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b5772f499c66ebc1866674ef77da0a3407b93f74eb8628d508b63aa150354de8"} Jan 27 08:58:01 crc kubenswrapper[4985]: I0127 08:58:01.466720 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 08:58:01 crc kubenswrapper[4985]: I0127 08:58:01.466753 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f85773ca525dbe6f49025f6099689b2f748a236d66bf37ec681faa3c06e8cd6e"} Jan 27 08:58:01 crc kubenswrapper[4985]: I0127 08:58:01.465680 4985 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f9155a9e-2bdf-4f6f-baa7-0d7c6baac37d" Jan 27 08:58:01 crc kubenswrapper[4985]: I0127 08:58:01.466826 4985 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f9155a9e-2bdf-4f6f-baa7-0d7c6baac37d" Jan 27 08:58:02 crc kubenswrapper[4985]: I0127 08:58:02.196745 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 08:58:02 crc kubenswrapper[4985]: I0127 08:58:02.202311 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 08:58:02 crc kubenswrapper[4985]: I0127 08:58:02.469384 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 
08:58:03 crc kubenswrapper[4985]: I0127 08:58:03.467901 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 08:58:03 crc kubenswrapper[4985]: I0127 08:58:03.467998 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 08:58:03 crc kubenswrapper[4985]: I0127 08:58:03.477123 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 08:58:06 crc kubenswrapper[4985]: I0127 08:58:06.627814 4985 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 08:58:06 crc kubenswrapper[4985]: I0127 08:58:06.796942 4985 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="e8689cab-acf7-4639-bba6-b6cbe119af12" Jan 27 08:58:07 crc kubenswrapper[4985]: I0127 08:58:07.501440 4985 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f9155a9e-2bdf-4f6f-baa7-0d7c6baac37d" Jan 27 08:58:07 crc kubenswrapper[4985]: I0127 08:58:07.501486 4985 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f9155a9e-2bdf-4f6f-baa7-0d7c6baac37d" Jan 27 08:58:07 crc kubenswrapper[4985]: I0127 08:58:07.506657 4985 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="e8689cab-acf7-4639-bba6-b6cbe119af12" Jan 27 08:58:07 crc kubenswrapper[4985]: I0127 08:58:07.510120 4985 status_manager.go:308] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" 
containerID="cri-o://7308e96c4cdf1af05fe0a1ef59016dc0e931441fd60dfefcf3e0d73188d1c092" Jan 27 08:58:07 crc kubenswrapper[4985]: I0127 08:58:07.510152 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 08:58:08 crc kubenswrapper[4985]: I0127 08:58:08.508142 4985 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f9155a9e-2bdf-4f6f-baa7-0d7c6baac37d" Jan 27 08:58:08 crc kubenswrapper[4985]: I0127 08:58:08.508221 4985 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f9155a9e-2bdf-4f6f-baa7-0d7c6baac37d" Jan 27 08:58:08 crc kubenswrapper[4985]: I0127 08:58:08.511227 4985 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="e8689cab-acf7-4639-bba6-b6cbe119af12" Jan 27 08:58:14 crc kubenswrapper[4985]: I0127 08:58:14.427857 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 08:58:16 crc kubenswrapper[4985]: I0127 08:58:16.283106 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 27 08:58:16 crc kubenswrapper[4985]: I0127 08:58:16.956561 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 27 08:58:17 crc kubenswrapper[4985]: I0127 08:58:17.010794 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 27 08:58:17 crc kubenswrapper[4985]: I0127 08:58:17.065768 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 27 08:58:17 crc kubenswrapper[4985]: I0127 08:58:17.802216 4985 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 27 08:58:17 crc kubenswrapper[4985]: I0127 08:58:17.870082 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 27 08:58:17 crc kubenswrapper[4985]: I0127 08:58:17.905926 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 27 08:58:18 crc kubenswrapper[4985]: I0127 08:58:18.065115 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 27 08:58:18 crc kubenswrapper[4985]: I0127 08:58:18.106894 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 27 08:58:18 crc kubenswrapper[4985]: I0127 08:58:18.196680 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 27 08:58:18 crc kubenswrapper[4985]: I0127 08:58:18.324633 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 27 08:58:18 crc kubenswrapper[4985]: I0127 08:58:18.403349 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 27 08:58:18 crc kubenswrapper[4985]: I0127 08:58:18.453226 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 27 08:58:18 crc kubenswrapper[4985]: I0127 08:58:18.522312 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 27 08:58:18 crc kubenswrapper[4985]: I0127 08:58:18.728554 4985 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 27 08:58:18 crc kubenswrapper[4985]: I0127 08:58:18.905870 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 27 08:58:19 crc kubenswrapper[4985]: I0127 08:58:19.077613 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 27 08:58:19 crc kubenswrapper[4985]: I0127 08:58:19.393477 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 27 08:58:19 crc kubenswrapper[4985]: I0127 08:58:19.513154 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 27 08:58:19 crc kubenswrapper[4985]: I0127 08:58:19.578191 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 27 08:58:19 crc kubenswrapper[4985]: I0127 08:58:19.603464 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 27 08:58:19 crc kubenswrapper[4985]: I0127 08:58:19.658189 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 27 08:58:19 crc kubenswrapper[4985]: I0127 08:58:19.752590 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 27 08:58:19 crc kubenswrapper[4985]: I0127 08:58:19.793214 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 27 08:58:19 crc kubenswrapper[4985]: I0127 08:58:19.796996 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 27 08:58:19 crc kubenswrapper[4985]: I0127 08:58:19.806560 4985 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 27 08:58:19 crc kubenswrapper[4985]: I0127 08:58:19.905797 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 27 08:58:19 crc kubenswrapper[4985]: I0127 08:58:19.938464 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 27 08:58:19 crc kubenswrapper[4985]: I0127 08:58:19.964300 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 27 08:58:20 crc kubenswrapper[4985]: I0127 08:58:20.149579 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 27 08:58:20 crc kubenswrapper[4985]: I0127 08:58:20.248108 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 27 08:58:20 crc kubenswrapper[4985]: I0127 08:58:20.316220 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 27 08:58:20 crc kubenswrapper[4985]: I0127 08:58:20.356032 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 27 08:58:20 crc kubenswrapper[4985]: I0127 08:58:20.380789 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 27 08:58:20 crc kubenswrapper[4985]: I0127 08:58:20.408891 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 27 08:58:20 crc kubenswrapper[4985]: I0127 08:58:20.517249 4985 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 27 08:58:20 crc kubenswrapper[4985]: I0127 08:58:20.556289 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 27 08:58:20 crc kubenswrapper[4985]: I0127 08:58:20.583471 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 27 08:58:20 crc kubenswrapper[4985]: I0127 08:58:20.821926 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 27 08:58:20 crc kubenswrapper[4985]: I0127 08:58:20.853422 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 27 08:58:20 crc kubenswrapper[4985]: I0127 08:58:20.864845 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 27 08:58:20 crc kubenswrapper[4985]: I0127 08:58:20.864909 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 27 08:58:20 crc kubenswrapper[4985]: I0127 08:58:20.890027 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 27 08:58:20 crc kubenswrapper[4985]: I0127 08:58:20.929817 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 27 08:58:21 crc kubenswrapper[4985]: I0127 08:58:21.005443 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 27 08:58:21 crc kubenswrapper[4985]: I0127 08:58:21.023920 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 27 08:58:21 crc kubenswrapper[4985]: 
I0127 08:58:21.093448 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 27 08:58:21 crc kubenswrapper[4985]: I0127 08:58:21.165639 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 27 08:58:21 crc kubenswrapper[4985]: I0127 08:58:21.217301 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 27 08:58:21 crc kubenswrapper[4985]: I0127 08:58:21.235256 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 27 08:58:21 crc kubenswrapper[4985]: I0127 08:58:21.358346 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 27 08:58:21 crc kubenswrapper[4985]: I0127 08:58:21.364450 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 27 08:58:21 crc kubenswrapper[4985]: I0127 08:58:21.393797 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 27 08:58:21 crc kubenswrapper[4985]: I0127 08:58:21.448902 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 27 08:58:21 crc kubenswrapper[4985]: I0127 08:58:21.510794 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 27 08:58:21 crc kubenswrapper[4985]: I0127 08:58:21.544913 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 27 08:58:21 crc kubenswrapper[4985]: I0127 08:58:21.564943 4985 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager"/"serving-cert" Jan 27 08:58:21 crc kubenswrapper[4985]: I0127 08:58:21.609906 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 27 08:58:21 crc kubenswrapper[4985]: I0127 08:58:21.688101 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 27 08:58:21 crc kubenswrapper[4985]: I0127 08:58:21.739128 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 27 08:58:21 crc kubenswrapper[4985]: I0127 08:58:21.755452 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 27 08:58:21 crc kubenswrapper[4985]: I0127 08:58:21.762508 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 27 08:58:21 crc kubenswrapper[4985]: I0127 08:58:21.778795 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 27 08:58:21 crc kubenswrapper[4985]: I0127 08:58:21.792190 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 27 08:58:21 crc kubenswrapper[4985]: I0127 08:58:21.834293 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 27 08:58:21 crc kubenswrapper[4985]: I0127 08:58:21.941844 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 27 08:58:21 crc kubenswrapper[4985]: I0127 08:58:21.990968 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 27 08:58:22 crc kubenswrapper[4985]: I0127 08:58:22.008892 4985 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 27 08:58:22 crc kubenswrapper[4985]: I0127 08:58:22.084054 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 27 08:58:22 crc kubenswrapper[4985]: I0127 08:58:22.143751 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 27 08:58:22 crc kubenswrapper[4985]: I0127 08:58:22.192595 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 27 08:58:22 crc kubenswrapper[4985]: I0127 08:58:22.233945 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 27 08:58:22 crc kubenswrapper[4985]: I0127 08:58:22.247896 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 27 08:58:22 crc kubenswrapper[4985]: I0127 08:58:22.303020 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 27 08:58:22 crc kubenswrapper[4985]: I0127 08:58:22.389712 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 27 08:58:22 crc kubenswrapper[4985]: I0127 08:58:22.409998 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 27 08:58:22 crc kubenswrapper[4985]: I0127 08:58:22.426567 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 27 08:58:22 crc kubenswrapper[4985]: I0127 08:58:22.447756 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 27 
08:58:22 crc kubenswrapper[4985]: I0127 08:58:22.498248 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 27 08:58:22 crc kubenswrapper[4985]: I0127 08:58:22.537352 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 27 08:58:22 crc kubenswrapper[4985]: I0127 08:58:22.668472 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 27 08:58:22 crc kubenswrapper[4985]: I0127 08:58:22.752858 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 27 08:58:22 crc kubenswrapper[4985]: I0127 08:58:22.790230 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 27 08:58:22 crc kubenswrapper[4985]: I0127 08:58:22.790571 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 27 08:58:22 crc kubenswrapper[4985]: I0127 08:58:22.820384 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 27 08:58:22 crc kubenswrapper[4985]: I0127 08:58:22.825296 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 27 08:58:22 crc kubenswrapper[4985]: I0127 08:58:22.841756 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 27 08:58:22 crc kubenswrapper[4985]: I0127 08:58:22.900645 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 27 08:58:22 crc kubenswrapper[4985]: I0127 08:58:22.993267 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 27 
08:58:23 crc kubenswrapper[4985]: I0127 08:58:23.020565 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 27 08:58:23 crc kubenswrapper[4985]: I0127 08:58:23.022546 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 27 08:58:23 crc kubenswrapper[4985]: I0127 08:58:23.025956 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 27 08:58:23 crc kubenswrapper[4985]: I0127 08:58:23.078861 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 27 08:58:23 crc kubenswrapper[4985]: I0127 08:58:23.245402 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 27 08:58:23 crc kubenswrapper[4985]: I0127 08:58:23.284897 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 27 08:58:23 crc kubenswrapper[4985]: I0127 08:58:23.328292 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 27 08:58:23 crc kubenswrapper[4985]: I0127 08:58:23.366451 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 27 08:58:23 crc kubenswrapper[4985]: I0127 08:58:23.373762 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 27 08:58:23 crc kubenswrapper[4985]: I0127 08:58:23.625394 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 27 08:58:23 crc kubenswrapper[4985]: I0127 08:58:23.663054 4985 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 27 08:58:23 crc kubenswrapper[4985]: I0127 08:58:23.671188 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 27 08:58:23 crc kubenswrapper[4985]: I0127 08:58:23.675454 4985 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 27 08:58:23 crc kubenswrapper[4985]: I0127 08:58:23.684309 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 27 08:58:23 crc kubenswrapper[4985]: I0127 08:58:23.737248 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 27 08:58:23 crc kubenswrapper[4985]: I0127 08:58:23.787234 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 27 08:58:23 crc kubenswrapper[4985]: I0127 08:58:23.870773 4985 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 27 08:58:23 crc kubenswrapper[4985]: I0127 08:58:23.938463 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 27 08:58:24 crc kubenswrapper[4985]: I0127 08:58:24.009914 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 27 08:58:24 crc kubenswrapper[4985]: I0127 08:58:24.013596 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 27 08:58:24 crc kubenswrapper[4985]: I0127 08:58:24.058744 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 27 08:58:24 crc kubenswrapper[4985]: I0127 08:58:24.296080 4985 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 27 08:58:24 crc kubenswrapper[4985]: I0127 08:58:24.330766 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 27 08:58:24 crc kubenswrapper[4985]: I0127 08:58:24.393441 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 27 08:58:24 crc kubenswrapper[4985]: I0127 08:58:24.511848 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 27 08:58:24 crc kubenswrapper[4985]: I0127 08:58:24.566784 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 27 08:58:24 crc kubenswrapper[4985]: I0127 08:58:24.587459 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 27 08:58:24 crc kubenswrapper[4985]: I0127 08:58:24.665149 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 27 08:58:24 crc kubenswrapper[4985]: I0127 08:58:24.718611 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 27 08:58:24 crc kubenswrapper[4985]: I0127 08:58:24.802944 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 27 08:58:24 crc kubenswrapper[4985]: I0127 08:58:24.816898 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 27 08:58:24 crc kubenswrapper[4985]: I0127 08:58:24.940221 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 27 08:58:24 crc kubenswrapper[4985]: I0127 08:58:24.971649 
4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 27 08:58:24 crc kubenswrapper[4985]: I0127 08:58:24.972995 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 27 08:58:24 crc kubenswrapper[4985]: I0127 08:58:24.986100 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 27 08:58:25 crc kubenswrapper[4985]: I0127 08:58:25.068830 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 27 08:58:25 crc kubenswrapper[4985]: I0127 08:58:25.133874 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 27 08:58:25 crc kubenswrapper[4985]: I0127 08:58:25.222650 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 27 08:58:25 crc kubenswrapper[4985]: I0127 08:58:25.269334 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 27 08:58:25 crc kubenswrapper[4985]: I0127 08:58:25.290992 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 27 08:58:25 crc kubenswrapper[4985]: I0127 08:58:25.307550 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 27 08:58:25 crc kubenswrapper[4985]: I0127 08:58:25.377070 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 27 08:58:25 crc kubenswrapper[4985]: I0127 08:58:25.415763 4985 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 27 08:58:25 crc kubenswrapper[4985]: I0127 08:58:25.420784 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 27 08:58:25 crc kubenswrapper[4985]: I0127 08:58:25.424264 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 27 08:58:25 crc kubenswrapper[4985]: I0127 08:58:25.517968 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 27 08:58:25 crc kubenswrapper[4985]: I0127 08:58:25.527281 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 27 08:58:25 crc kubenswrapper[4985]: I0127 08:58:25.563411 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 27 08:58:25 crc kubenswrapper[4985]: I0127 08:58:25.597478 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 27 08:58:25 crc kubenswrapper[4985]: I0127 08:58:25.612370 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 27 08:58:25 crc kubenswrapper[4985]: I0127 08:58:25.647360 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 27 08:58:25 crc kubenswrapper[4985]: I0127 08:58:25.648042 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 27 08:58:25 crc kubenswrapper[4985]: I0127 08:58:25.649366 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 27 08:58:25 crc kubenswrapper[4985]: I0127 
08:58:25.652830 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 27 08:58:25 crc kubenswrapper[4985]: I0127 08:58:25.655654 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 27 08:58:25 crc kubenswrapper[4985]: I0127 08:58:25.682762 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 27 08:58:25 crc kubenswrapper[4985]: I0127 08:58:25.698909 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 27 08:58:25 crc kubenswrapper[4985]: I0127 08:58:25.740903 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 27 08:58:25 crc kubenswrapper[4985]: I0127 08:58:25.815783 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 27 08:58:25 crc kubenswrapper[4985]: I0127 08:58:25.974821 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 27 08:58:26 crc kubenswrapper[4985]: I0127 08:58:26.041433 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 27 08:58:26 crc kubenswrapper[4985]: I0127 08:58:26.080780 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 27 08:58:26 crc kubenswrapper[4985]: I0127 08:58:26.181727 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 27 08:58:26 crc kubenswrapper[4985]: I0127 08:58:26.257872 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 27 
08:58:26 crc kubenswrapper[4985]: I0127 08:58:26.399454 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 27 08:58:26 crc kubenswrapper[4985]: I0127 08:58:26.428277 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 27 08:58:26 crc kubenswrapper[4985]: I0127 08:58:26.460345 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 27 08:58:26 crc kubenswrapper[4985]: I0127 08:58:26.527858 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 27 08:58:26 crc kubenswrapper[4985]: I0127 08:58:26.702888 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 27 08:58:26 crc kubenswrapper[4985]: I0127 08:58:26.747303 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 27 08:58:26 crc kubenswrapper[4985]: I0127 08:58:26.768249 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 27 08:58:26 crc kubenswrapper[4985]: I0127 08:58:26.945023 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 27 08:58:27 crc kubenswrapper[4985]: I0127 08:58:27.063526 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 27 08:58:27 crc kubenswrapper[4985]: I0127 08:58:27.108715 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 27 08:58:27 crc kubenswrapper[4985]: I0127 08:58:27.252637 4985 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 27 08:58:27 crc kubenswrapper[4985]: I0127 08:58:27.282292 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 27 08:58:27 crc kubenswrapper[4985]: I0127 08:58:27.430766 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 27 08:58:27 crc kubenswrapper[4985]: I0127 08:58:27.432383 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 27 08:58:27 crc kubenswrapper[4985]: I0127 08:58:27.621171 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 27 08:58:27 crc kubenswrapper[4985]: I0127 08:58:27.696682 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 27 08:58:27 crc kubenswrapper[4985]: I0127 08:58:27.697422 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 27 08:58:27 crc kubenswrapper[4985]: I0127 08:58:27.825439 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 27 08:58:27 crc kubenswrapper[4985]: I0127 08:58:27.826602 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 27 08:58:27 crc kubenswrapper[4985]: I0127 08:58:27.872227 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 27 08:58:28 crc kubenswrapper[4985]: I0127 08:58:28.065310 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 27 08:58:28 crc kubenswrapper[4985]: I0127 
08:58:28.176812 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 27 08:58:28 crc kubenswrapper[4985]: I0127 08:58:28.197929 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 27 08:58:28 crc kubenswrapper[4985]: I0127 08:58:28.203317 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 27 08:58:28 crc kubenswrapper[4985]: I0127 08:58:28.306345 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 27 08:58:28 crc kubenswrapper[4985]: I0127 08:58:28.401127 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 27 08:58:28 crc kubenswrapper[4985]: I0127 08:58:28.416309 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 27 08:58:28 crc kubenswrapper[4985]: I0127 08:58:28.450863 4985 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 27 08:58:28 crc kubenswrapper[4985]: I0127 08:58:28.459614 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 27 08:58:28 crc kubenswrapper[4985]: I0127 08:58:28.461118 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 08:58:28 crc kubenswrapper[4985]: I0127 08:58:28.461161 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 08:58:28 crc kubenswrapper[4985]: I0127 08:58:28.472097 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 08:58:28 crc kubenswrapper[4985]: I0127 08:58:28.486023 4985 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=22.486000884 podStartE2EDuration="22.486000884s" podCreationTimestamp="2026-01-27 08:58:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:58:28.483597308 +0000 UTC m=+292.774692169" watchObservedRunningTime="2026-01-27 08:58:28.486000884 +0000 UTC m=+292.777095725" Jan 27 08:58:28 crc kubenswrapper[4985]: I0127 08:58:28.524681 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 27 08:58:28 crc kubenswrapper[4985]: I0127 08:58:28.526398 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 27 08:58:28 crc kubenswrapper[4985]: I0127 08:58:28.581443 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 27 08:58:28 crc kubenswrapper[4985]: I0127 08:58:28.596797 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 27 08:58:28 crc kubenswrapper[4985]: I0127 08:58:28.649367 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 27 08:58:28 crc kubenswrapper[4985]: I0127 08:58:28.671344 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 27 08:58:28 crc kubenswrapper[4985]: I0127 08:58:28.717439 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 27 08:58:28 crc kubenswrapper[4985]: I0127 08:58:28.732241 4985 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 27 08:58:28 crc kubenswrapper[4985]: I0127 08:58:28.803922 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 27 08:58:28 crc kubenswrapper[4985]: I0127 08:58:28.809619 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 27 08:58:28 crc kubenswrapper[4985]: I0127 08:58:28.814670 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 27 08:58:28 crc kubenswrapper[4985]: I0127 08:58:28.839071 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 27 08:58:28 crc kubenswrapper[4985]: I0127 08:58:28.845310 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 27 08:58:28 crc kubenswrapper[4985]: I0127 08:58:28.855110 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 27 08:58:28 crc kubenswrapper[4985]: I0127 08:58:28.904544 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 27 08:58:29 crc kubenswrapper[4985]: I0127 08:58:29.017051 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 27 08:58:29 crc kubenswrapper[4985]: I0127 08:58:29.018009 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 27 08:58:29 crc kubenswrapper[4985]: I0127 08:58:29.033456 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 27 08:58:29 crc kubenswrapper[4985]: I0127 08:58:29.042977 4985 
reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 27 08:58:29 crc kubenswrapper[4985]: I0127 08:58:29.062988 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 27 08:58:29 crc kubenswrapper[4985]: I0127 08:58:29.079419 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 27 08:58:29 crc kubenswrapper[4985]: I0127 08:58:29.162861 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 27 08:58:29 crc kubenswrapper[4985]: I0127 08:58:29.177868 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 27 08:58:29 crc kubenswrapper[4985]: I0127 08:58:29.213041 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 27 08:58:29 crc kubenswrapper[4985]: I0127 08:58:29.229214 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 27 08:58:29 crc kubenswrapper[4985]: I0127 08:58:29.361213 4985 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 27 08:58:29 crc kubenswrapper[4985]: I0127 08:58:29.361308 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 27 08:58:29 crc kubenswrapper[4985]: I0127 08:58:29.361493 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://78918556856bd7943e4af1cdbb666d28b3bf63e3385b8f4c291897f9d50a4f50" gracePeriod=5 Jan 27 08:58:29 crc kubenswrapper[4985]: 
I0127 08:58:29.384795 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 27 08:58:29 crc kubenswrapper[4985]: I0127 08:58:29.415046 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 27 08:58:29 crc kubenswrapper[4985]: I0127 08:58:29.622358 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 27 08:58:29 crc kubenswrapper[4985]: I0127 08:58:29.636262 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 27 08:58:29 crc kubenswrapper[4985]: I0127 08:58:29.692891 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 27 08:58:29 crc kubenswrapper[4985]: I0127 08:58:29.710958 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 27 08:58:29 crc kubenswrapper[4985]: I0127 08:58:29.725837 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 27 08:58:29 crc kubenswrapper[4985]: I0127 08:58:29.733887 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 27 08:58:29 crc kubenswrapper[4985]: I0127 08:58:29.854741 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 27 08:58:29 crc kubenswrapper[4985]: I0127 08:58:29.997021 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 27 08:58:30 crc kubenswrapper[4985]: I0127 08:58:30.190903 4985 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 27 08:58:30 crc kubenswrapper[4985]: I0127 08:58:30.236311 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 27 08:58:30 crc kubenswrapper[4985]: I0127 08:58:30.267424 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 27 08:58:30 crc kubenswrapper[4985]: I0127 08:58:30.291241 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 27 08:58:30 crc kubenswrapper[4985]: I0127 08:58:30.292414 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 27 08:58:30 crc kubenswrapper[4985]: I0127 08:58:30.309021 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 27 08:58:30 crc kubenswrapper[4985]: I0127 08:58:30.401887 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 27 08:58:30 crc kubenswrapper[4985]: I0127 08:58:30.566418 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 27 08:58:30 crc kubenswrapper[4985]: I0127 08:58:30.814382 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 27 08:58:30 crc kubenswrapper[4985]: I0127 08:58:30.866170 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 27 08:58:30 crc kubenswrapper[4985]: I0127 08:58:30.909061 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 27 08:58:31 crc kubenswrapper[4985]: I0127 
08:58:31.008259 4985 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 27 08:58:31 crc kubenswrapper[4985]: I0127 08:58:31.278834 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 27 08:58:31 crc kubenswrapper[4985]: I0127 08:58:31.607976 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 27 08:58:31 crc kubenswrapper[4985]: I0127 08:58:31.667280 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 27 08:58:31 crc kubenswrapper[4985]: I0127 08:58:31.681595 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 27 08:58:31 crc kubenswrapper[4985]: I0127 08:58:31.753169 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 27 08:58:31 crc kubenswrapper[4985]: I0127 08:58:31.763455 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 27 08:58:31 crc kubenswrapper[4985]: I0127 08:58:31.838956 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 27 08:58:31 crc kubenswrapper[4985]: I0127 08:58:31.913392 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 27 08:58:31 crc kubenswrapper[4985]: I0127 08:58:31.914103 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 27 08:58:31 crc kubenswrapper[4985]: I0127 08:58:31.974488 4985 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-node-identity"/"env-overrides" Jan 27 08:58:32 crc kubenswrapper[4985]: I0127 08:58:32.174057 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 27 08:58:32 crc kubenswrapper[4985]: I0127 08:58:32.206800 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 27 08:58:32 crc kubenswrapper[4985]: I0127 08:58:32.282156 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 27 08:58:32 crc kubenswrapper[4985]: I0127 08:58:32.657955 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 27 08:58:32 crc kubenswrapper[4985]: I0127 08:58:32.676653 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 27 08:58:32 crc kubenswrapper[4985]: I0127 08:58:32.725014 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 27 08:58:32 crc kubenswrapper[4985]: I0127 08:58:32.794008 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 27 08:58:34 crc kubenswrapper[4985]: I0127 08:58:34.679639 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 27 08:58:34 crc kubenswrapper[4985]: I0127 08:58:34.679696 4985 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="78918556856bd7943e4af1cdbb666d28b3bf63e3385b8f4c291897f9d50a4f50" exitCode=137 Jan 27 08:58:34 crc kubenswrapper[4985]: I0127 08:58:34.961323 4985 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 27 08:58:34 crc kubenswrapper[4985]: I0127 08:58:34.961438 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 08:58:35 crc kubenswrapper[4985]: I0127 08:58:35.003124 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 08:58:35 crc kubenswrapper[4985]: I0127 08:58:35.003212 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 08:58:35 crc kubenswrapper[4985]: I0127 08:58:35.003261 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 08:58:35 crc kubenswrapper[4985]: I0127 08:58:35.003289 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 08:58:35 crc kubenswrapper[4985]: I0127 08:58:35.003348 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod 
"f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 08:58:35 crc kubenswrapper[4985]: I0127 08:58:35.003382 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 08:58:35 crc kubenswrapper[4985]: I0127 08:58:35.003441 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 08:58:35 crc kubenswrapper[4985]: I0127 08:58:35.003579 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 08:58:35 crc kubenswrapper[4985]: I0127 08:58:35.003465 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 08:58:35 crc kubenswrapper[4985]: I0127 08:58:35.004219 4985 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 27 08:58:35 crc kubenswrapper[4985]: I0127 08:58:35.004266 4985 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 27 08:58:35 crc kubenswrapper[4985]: I0127 08:58:35.004290 4985 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 27 08:58:35 crc kubenswrapper[4985]: I0127 08:58:35.004304 4985 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 27 08:58:35 crc kubenswrapper[4985]: I0127 08:58:35.015709 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 08:58:35 crc kubenswrapper[4985]: I0127 08:58:35.105679 4985 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 27 08:58:35 crc kubenswrapper[4985]: I0127 08:58:35.687752 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 27 08:58:35 crc kubenswrapper[4985]: I0127 08:58:35.687868 4985 scope.go:117] "RemoveContainer" containerID="78918556856bd7943e4af1cdbb666d28b3bf63e3385b8f4c291897f9d50a4f50" Jan 27 08:58:35 crc kubenswrapper[4985]: I0127 08:58:35.687991 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 08:58:36 crc kubenswrapper[4985]: I0127 08:58:36.228694 4985 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 27 08:58:36 crc kubenswrapper[4985]: I0127 08:58:36.460644 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 27 08:58:46 crc kubenswrapper[4985]: I0127 08:58:46.772492 4985 generic.go:334] "Generic (PLEG): container finished" podID="aac8abbf-f011-4386-89ed-afc8d4879670" containerID="400f21cbbef961dd3dccbbb569297622b9284d0f12b21b74315e0e966bfdf9f9" exitCode=0 Jan 27 08:58:46 crc kubenswrapper[4985]: I0127 08:58:46.772577 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-cfmwq" event={"ID":"aac8abbf-f011-4386-89ed-afc8d4879670","Type":"ContainerDied","Data":"400f21cbbef961dd3dccbbb569297622b9284d0f12b21b74315e0e966bfdf9f9"} Jan 27 08:58:46 crc 
kubenswrapper[4985]: I0127 08:58:46.774430 4985 scope.go:117] "RemoveContainer" containerID="400f21cbbef961dd3dccbbb569297622b9284d0f12b21b74315e0e966bfdf9f9" Jan 27 08:58:47 crc kubenswrapper[4985]: I0127 08:58:47.783196 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-cfmwq" event={"ID":"aac8abbf-f011-4386-89ed-afc8d4879670","Type":"ContainerStarted","Data":"4990f1e5bc8d5fe738c26a0caa957e800c174d05a69585b3f7d47f8ec2cc4cd8"} Jan 27 08:58:47 crc kubenswrapper[4985]: I0127 08:58:47.784068 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-cfmwq" Jan 27 08:58:47 crc kubenswrapper[4985]: I0127 08:58:47.786103 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-cfmwq" Jan 27 08:58:50 crc kubenswrapper[4985]: I0127 08:58:50.149737 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 27 08:58:53 crc kubenswrapper[4985]: I0127 08:58:53.811503 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 27 08:58:59 crc kubenswrapper[4985]: I0127 08:58:59.006890 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-9m566"] Jan 27 08:58:59 crc kubenswrapper[4985]: E0127 08:58:59.007727 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 27 08:58:59 crc kubenswrapper[4985]: I0127 08:58:59.007749 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 27 08:58:59 crc kubenswrapper[4985]: E0127 08:58:59.007773 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70566b0c-fabd-4c21-bf39-f772dee30b6a" 
containerName="installer" Jan 27 08:58:59 crc kubenswrapper[4985]: I0127 08:58:59.007783 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="70566b0c-fabd-4c21-bf39-f772dee30b6a" containerName="installer" Jan 27 08:58:59 crc kubenswrapper[4985]: I0127 08:58:59.007900 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="70566b0c-fabd-4c21-bf39-f772dee30b6a" containerName="installer" Jan 27 08:58:59 crc kubenswrapper[4985]: I0127 08:58:59.007926 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 27 08:58:59 crc kubenswrapper[4985]: I0127 08:58:59.008583 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-9m566" Jan 27 08:58:59 crc kubenswrapper[4985]: I0127 08:58:59.016164 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-9m566"] Jan 27 08:58:59 crc kubenswrapper[4985]: I0127 08:58:59.150071 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/be448c6e-302e-446c-9eb3-9c8e59078f14-bound-sa-token\") pod \"image-registry-66df7c8f76-9m566\" (UID: \"be448c6e-302e-446c-9eb3-9c8e59078f14\") " pod="openshift-image-registry/image-registry-66df7c8f76-9m566" Jan 27 08:58:59 crc kubenswrapper[4985]: I0127 08:58:59.150170 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-9m566\" (UID: \"be448c6e-302e-446c-9eb3-9c8e59078f14\") " pod="openshift-image-registry/image-registry-66df7c8f76-9m566" Jan 27 08:58:59 crc kubenswrapper[4985]: I0127 08:58:59.150207 4985 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bj6h5\" (UniqueName: \"kubernetes.io/projected/be448c6e-302e-446c-9eb3-9c8e59078f14-kube-api-access-bj6h5\") pod \"image-registry-66df7c8f76-9m566\" (UID: \"be448c6e-302e-446c-9eb3-9c8e59078f14\") " pod="openshift-image-registry/image-registry-66df7c8f76-9m566" Jan 27 08:58:59 crc kubenswrapper[4985]: I0127 08:58:59.150246 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/be448c6e-302e-446c-9eb3-9c8e59078f14-registry-certificates\") pod \"image-registry-66df7c8f76-9m566\" (UID: \"be448c6e-302e-446c-9eb3-9c8e59078f14\") " pod="openshift-image-registry/image-registry-66df7c8f76-9m566" Jan 27 08:58:59 crc kubenswrapper[4985]: I0127 08:58:59.150265 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/be448c6e-302e-446c-9eb3-9c8e59078f14-registry-tls\") pod \"image-registry-66df7c8f76-9m566\" (UID: \"be448c6e-302e-446c-9eb3-9c8e59078f14\") " pod="openshift-image-registry/image-registry-66df7c8f76-9m566" Jan 27 08:58:59 crc kubenswrapper[4985]: I0127 08:58:59.150328 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/be448c6e-302e-446c-9eb3-9c8e59078f14-installation-pull-secrets\") pod \"image-registry-66df7c8f76-9m566\" (UID: \"be448c6e-302e-446c-9eb3-9c8e59078f14\") " pod="openshift-image-registry/image-registry-66df7c8f76-9m566" Jan 27 08:58:59 crc kubenswrapper[4985]: I0127 08:58:59.150494 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/be448c6e-302e-446c-9eb3-9c8e59078f14-trusted-ca\") pod \"image-registry-66df7c8f76-9m566\" (UID: 
\"be448c6e-302e-446c-9eb3-9c8e59078f14\") " pod="openshift-image-registry/image-registry-66df7c8f76-9m566" Jan 27 08:58:59 crc kubenswrapper[4985]: I0127 08:58:59.150532 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/be448c6e-302e-446c-9eb3-9c8e59078f14-ca-trust-extracted\") pod \"image-registry-66df7c8f76-9m566\" (UID: \"be448c6e-302e-446c-9eb3-9c8e59078f14\") " pod="openshift-image-registry/image-registry-66df7c8f76-9m566" Jan 27 08:58:59 crc kubenswrapper[4985]: I0127 08:58:59.165062 4985 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 27 08:58:59 crc kubenswrapper[4985]: I0127 08:58:59.180066 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-9m566\" (UID: \"be448c6e-302e-446c-9eb3-9c8e59078f14\") " pod="openshift-image-registry/image-registry-66df7c8f76-9m566" Jan 27 08:58:59 crc kubenswrapper[4985]: I0127 08:58:59.251766 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/be448c6e-302e-446c-9eb3-9c8e59078f14-bound-sa-token\") pod \"image-registry-66df7c8f76-9m566\" (UID: \"be448c6e-302e-446c-9eb3-9c8e59078f14\") " pod="openshift-image-registry/image-registry-66df7c8f76-9m566" Jan 27 08:58:59 crc kubenswrapper[4985]: I0127 08:58:59.251875 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bj6h5\" (UniqueName: \"kubernetes.io/projected/be448c6e-302e-446c-9eb3-9c8e59078f14-kube-api-access-bj6h5\") pod \"image-registry-66df7c8f76-9m566\" (UID: \"be448c6e-302e-446c-9eb3-9c8e59078f14\") " pod="openshift-image-registry/image-registry-66df7c8f76-9m566" Jan 27 
08:58:59 crc kubenswrapper[4985]: I0127 08:58:59.251926 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/be448c6e-302e-446c-9eb3-9c8e59078f14-registry-certificates\") pod \"image-registry-66df7c8f76-9m566\" (UID: \"be448c6e-302e-446c-9eb3-9c8e59078f14\") " pod="openshift-image-registry/image-registry-66df7c8f76-9m566" Jan 27 08:58:59 crc kubenswrapper[4985]: I0127 08:58:59.251947 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/be448c6e-302e-446c-9eb3-9c8e59078f14-registry-tls\") pod \"image-registry-66df7c8f76-9m566\" (UID: \"be448c6e-302e-446c-9eb3-9c8e59078f14\") " pod="openshift-image-registry/image-registry-66df7c8f76-9m566" Jan 27 08:58:59 crc kubenswrapper[4985]: I0127 08:58:59.251972 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/be448c6e-302e-446c-9eb3-9c8e59078f14-installation-pull-secrets\") pod \"image-registry-66df7c8f76-9m566\" (UID: \"be448c6e-302e-446c-9eb3-9c8e59078f14\") " pod="openshift-image-registry/image-registry-66df7c8f76-9m566" Jan 27 08:58:59 crc kubenswrapper[4985]: I0127 08:58:59.252001 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/be448c6e-302e-446c-9eb3-9c8e59078f14-trusted-ca\") pod \"image-registry-66df7c8f76-9m566\" (UID: \"be448c6e-302e-446c-9eb3-9c8e59078f14\") " pod="openshift-image-registry/image-registry-66df7c8f76-9m566" Jan 27 08:58:59 crc kubenswrapper[4985]: I0127 08:58:59.252021 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/be448c6e-302e-446c-9eb3-9c8e59078f14-ca-trust-extracted\") pod \"image-registry-66df7c8f76-9m566\" (UID: 
\"be448c6e-302e-446c-9eb3-9c8e59078f14\") " pod="openshift-image-registry/image-registry-66df7c8f76-9m566" Jan 27 08:58:59 crc kubenswrapper[4985]: I0127 08:58:59.252749 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/be448c6e-302e-446c-9eb3-9c8e59078f14-ca-trust-extracted\") pod \"image-registry-66df7c8f76-9m566\" (UID: \"be448c6e-302e-446c-9eb3-9c8e59078f14\") " pod="openshift-image-registry/image-registry-66df7c8f76-9m566" Jan 27 08:58:59 crc kubenswrapper[4985]: I0127 08:58:59.253857 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/be448c6e-302e-446c-9eb3-9c8e59078f14-trusted-ca\") pod \"image-registry-66df7c8f76-9m566\" (UID: \"be448c6e-302e-446c-9eb3-9c8e59078f14\") " pod="openshift-image-registry/image-registry-66df7c8f76-9m566" Jan 27 08:58:59 crc kubenswrapper[4985]: I0127 08:58:59.253932 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/be448c6e-302e-446c-9eb3-9c8e59078f14-registry-certificates\") pod \"image-registry-66df7c8f76-9m566\" (UID: \"be448c6e-302e-446c-9eb3-9c8e59078f14\") " pod="openshift-image-registry/image-registry-66df7c8f76-9m566" Jan 27 08:58:59 crc kubenswrapper[4985]: I0127 08:58:59.260626 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/be448c6e-302e-446c-9eb3-9c8e59078f14-installation-pull-secrets\") pod \"image-registry-66df7c8f76-9m566\" (UID: \"be448c6e-302e-446c-9eb3-9c8e59078f14\") " pod="openshift-image-registry/image-registry-66df7c8f76-9m566" Jan 27 08:58:59 crc kubenswrapper[4985]: I0127 08:58:59.264252 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/be448c6e-302e-446c-9eb3-9c8e59078f14-registry-tls\") pod 
\"image-registry-66df7c8f76-9m566\" (UID: \"be448c6e-302e-446c-9eb3-9c8e59078f14\") " pod="openshift-image-registry/image-registry-66df7c8f76-9m566" Jan 27 08:58:59 crc kubenswrapper[4985]: I0127 08:58:59.268274 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bj6h5\" (UniqueName: \"kubernetes.io/projected/be448c6e-302e-446c-9eb3-9c8e59078f14-kube-api-access-bj6h5\") pod \"image-registry-66df7c8f76-9m566\" (UID: \"be448c6e-302e-446c-9eb3-9c8e59078f14\") " pod="openshift-image-registry/image-registry-66df7c8f76-9m566" Jan 27 08:58:59 crc kubenswrapper[4985]: I0127 08:58:59.269691 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/be448c6e-302e-446c-9eb3-9c8e59078f14-bound-sa-token\") pod \"image-registry-66df7c8f76-9m566\" (UID: \"be448c6e-302e-446c-9eb3-9c8e59078f14\") " pod="openshift-image-registry/image-registry-66df7c8f76-9m566" Jan 27 08:58:59 crc kubenswrapper[4985]: I0127 08:58:59.361062 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-9m566" Jan 27 08:58:59 crc kubenswrapper[4985]: I0127 08:58:59.810766 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-9m566"] Jan 27 08:58:59 crc kubenswrapper[4985]: I0127 08:58:59.859348 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-9m566" event={"ID":"be448c6e-302e-446c-9eb3-9c8e59078f14","Type":"ContainerStarted","Data":"3bb2ab16c11b2dbaeb6018a4d7c58fdb2dc319d6a46c1d51df2a4a3b5fd3e24b"} Jan 27 08:59:00 crc kubenswrapper[4985]: I0127 08:59:00.138838 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 27 08:59:00 crc kubenswrapper[4985]: I0127 08:59:00.866593 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-9m566" event={"ID":"be448c6e-302e-446c-9eb3-9c8e59078f14","Type":"ContainerStarted","Data":"0304b0c2bdfd416c21c59f6204c8da311ea491a94314adb7c306e9cf938533a5"} Jan 27 08:59:00 crc kubenswrapper[4985]: I0127 08:59:00.867295 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-9m566" Jan 27 08:59:00 crc kubenswrapper[4985]: I0127 08:59:00.887777 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-9m566" podStartSLOduration=2.887748944 podStartE2EDuration="2.887748944s" podCreationTimestamp="2026-01-27 08:58:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:59:00.885137701 +0000 UTC m=+325.176232562" watchObservedRunningTime="2026-01-27 08:59:00.887748944 +0000 UTC m=+325.178843785" Jan 27 08:59:09 crc kubenswrapper[4985]: I0127 08:59:09.477168 4985 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 27 08:59:19 crc kubenswrapper[4985]: I0127 08:59:19.371240 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-9m566" Jan 27 08:59:19 crc kubenswrapper[4985]: I0127 08:59:19.433935 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-jn7wk"] Jan 27 08:59:25 crc kubenswrapper[4985]: I0127 08:59:25.156617 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-q7trj"] Jan 27 08:59:25 crc kubenswrapper[4985]: I0127 08:59:25.158204 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-q7trj" podUID="da9958bf-bf1b-4894-96a8-18b5b9fa3d46" containerName="registry-server" containerID="cri-o://a0295c44c7df6619edc0aad660024b88888aa5d39f319364acb9e684720384f6" gracePeriod=30 Jan 27 08:59:25 crc kubenswrapper[4985]: I0127 08:59:25.161877 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-f2gdx"] Jan 27 08:59:25 crc kubenswrapper[4985]: I0127 08:59:25.162276 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-f2gdx" podUID="e143ff56-0606-4500-bac1-21d0d3f607ee" containerName="registry-server" containerID="cri-o://92948f1acfb3b6a1d256305249a74289e639ad6119d214c615864cef0f9ef3c1" gracePeriod=30 Jan 27 08:59:25 crc kubenswrapper[4985]: I0127 08:59:25.178146 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cfmwq"] Jan 27 08:59:25 crc kubenswrapper[4985]: I0127 08:59:25.178437 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-cfmwq" podUID="aac8abbf-f011-4386-89ed-afc8d4879670" 
containerName="marketplace-operator" containerID="cri-o://4990f1e5bc8d5fe738c26a0caa957e800c174d05a69585b3f7d47f8ec2cc4cd8" gracePeriod=30 Jan 27 08:59:25 crc kubenswrapper[4985]: I0127 08:59:25.196252 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lwlwv"] Jan 27 08:59:25 crc kubenswrapper[4985]: I0127 08:59:25.196608 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lwlwv" podUID="c4ea35ca-a06c-40d2-86c2-d2c0a99da089" containerName="registry-server" containerID="cri-o://91ecd2a2c2c600d35cf35173a4ed34cbc61fd738dec6ba7c7dcddb2fcda93bec" gracePeriod=30 Jan 27 08:59:25 crc kubenswrapper[4985]: I0127 08:59:25.234038 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zm8w2"] Jan 27 08:59:25 crc kubenswrapper[4985]: I0127 08:59:25.235638 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-zm8w2" Jan 27 08:59:25 crc kubenswrapper[4985]: I0127 08:59:25.251037 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hwclt"] Jan 27 08:59:25 crc kubenswrapper[4985]: I0127 08:59:25.251461 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hwclt" podUID="ed57e787-5d65-4c3c-8a0f-f693481928ae" containerName="registry-server" containerID="cri-o://080280e744c6d48ef75d96557069fd28ef9f8ed509a6a612dc18b51ade773982" gracePeriod=30 Jan 27 08:59:25 crc kubenswrapper[4985]: I0127 08:59:25.261255 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zm8w2"] Jan 27 08:59:25 crc kubenswrapper[4985]: I0127 08:59:25.339288 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/795be290-3151-45f3-bdba-4a054aec68d9-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-zm8w2\" (UID: \"795be290-3151-45f3-bdba-4a054aec68d9\") " pod="openshift-marketplace/marketplace-operator-79b997595-zm8w2" Jan 27 08:59:25 crc kubenswrapper[4985]: I0127 08:59:25.339346 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kr4rc\" (UniqueName: \"kubernetes.io/projected/795be290-3151-45f3-bdba-4a054aec68d9-kube-api-access-kr4rc\") pod \"marketplace-operator-79b997595-zm8w2\" (UID: \"795be290-3151-45f3-bdba-4a054aec68d9\") " pod="openshift-marketplace/marketplace-operator-79b997595-zm8w2" Jan 27 08:59:25 crc kubenswrapper[4985]: I0127 08:59:25.339400 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/795be290-3151-45f3-bdba-4a054aec68d9-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-zm8w2\" (UID: \"795be290-3151-45f3-bdba-4a054aec68d9\") " pod="openshift-marketplace/marketplace-operator-79b997595-zm8w2" Jan 27 08:59:25 crc kubenswrapper[4985]: I0127 08:59:25.440345 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/795be290-3151-45f3-bdba-4a054aec68d9-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-zm8w2\" (UID: \"795be290-3151-45f3-bdba-4a054aec68d9\") " pod="openshift-marketplace/marketplace-operator-79b997595-zm8w2" Jan 27 08:59:25 crc kubenswrapper[4985]: I0127 08:59:25.440405 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kr4rc\" (UniqueName: \"kubernetes.io/projected/795be290-3151-45f3-bdba-4a054aec68d9-kube-api-access-kr4rc\") pod \"marketplace-operator-79b997595-zm8w2\" (UID: \"795be290-3151-45f3-bdba-4a054aec68d9\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-zm8w2" Jan 27 08:59:25 crc kubenswrapper[4985]: I0127 08:59:25.440432 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/795be290-3151-45f3-bdba-4a054aec68d9-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-zm8w2\" (UID: \"795be290-3151-45f3-bdba-4a054aec68d9\") " pod="openshift-marketplace/marketplace-operator-79b997595-zm8w2" Jan 27 08:59:25 crc kubenswrapper[4985]: I0127 08:59:25.441897 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/795be290-3151-45f3-bdba-4a054aec68d9-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-zm8w2\" (UID: \"795be290-3151-45f3-bdba-4a054aec68d9\") " pod="openshift-marketplace/marketplace-operator-79b997595-zm8w2" Jan 27 08:59:25 crc kubenswrapper[4985]: I0127 08:59:25.451280 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/795be290-3151-45f3-bdba-4a054aec68d9-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-zm8w2\" (UID: \"795be290-3151-45f3-bdba-4a054aec68d9\") " pod="openshift-marketplace/marketplace-operator-79b997595-zm8w2" Jan 27 08:59:25 crc kubenswrapper[4985]: I0127 08:59:25.463369 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kr4rc\" (UniqueName: \"kubernetes.io/projected/795be290-3151-45f3-bdba-4a054aec68d9-kube-api-access-kr4rc\") pod \"marketplace-operator-79b997595-zm8w2\" (UID: \"795be290-3151-45f3-bdba-4a054aec68d9\") " pod="openshift-marketplace/marketplace-operator-79b997595-zm8w2" Jan 27 08:59:25 crc kubenswrapper[4985]: I0127 08:59:25.563009 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-zm8w2" Jan 27 08:59:25 crc kubenswrapper[4985]: E0127 08:59:25.735761 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a0295c44c7df6619edc0aad660024b88888aa5d39f319364acb9e684720384f6 is running failed: container process not found" containerID="a0295c44c7df6619edc0aad660024b88888aa5d39f319364acb9e684720384f6" cmd=["grpc_health_probe","-addr=:50051"] Jan 27 08:59:25 crc kubenswrapper[4985]: E0127 08:59:25.736845 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a0295c44c7df6619edc0aad660024b88888aa5d39f319364acb9e684720384f6 is running failed: container process not found" containerID="a0295c44c7df6619edc0aad660024b88888aa5d39f319364acb9e684720384f6" cmd=["grpc_health_probe","-addr=:50051"] Jan 27 08:59:25 crc kubenswrapper[4985]: E0127 08:59:25.737337 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a0295c44c7df6619edc0aad660024b88888aa5d39f319364acb9e684720384f6 is running failed: container process not found" containerID="a0295c44c7df6619edc0aad660024b88888aa5d39f319364acb9e684720384f6" cmd=["grpc_health_probe","-addr=:50051"] Jan 27 08:59:25 crc kubenswrapper[4985]: E0127 08:59:25.737366 4985 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a0295c44c7df6619edc0aad660024b88888aa5d39f319364acb9e684720384f6 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-q7trj" podUID="da9958bf-bf1b-4894-96a8-18b5b9fa3d46" containerName="registry-server" Jan 27 08:59:25 crc kubenswrapper[4985]: E0127 08:59:25.946098 4985 log.go:32] "ExecSync cmd from 
runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 92948f1acfb3b6a1d256305249a74289e639ad6119d214c615864cef0f9ef3c1 is running failed: container process not found" containerID="92948f1acfb3b6a1d256305249a74289e639ad6119d214c615864cef0f9ef3c1" cmd=["grpc_health_probe","-addr=:50051"] Jan 27 08:59:25 crc kubenswrapper[4985]: E0127 08:59:25.947116 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 92948f1acfb3b6a1d256305249a74289e639ad6119d214c615864cef0f9ef3c1 is running failed: container process not found" containerID="92948f1acfb3b6a1d256305249a74289e639ad6119d214c615864cef0f9ef3c1" cmd=["grpc_health_probe","-addr=:50051"] Jan 27 08:59:25 crc kubenswrapper[4985]: E0127 08:59:25.947717 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 92948f1acfb3b6a1d256305249a74289e639ad6119d214c615864cef0f9ef3c1 is running failed: container process not found" containerID="92948f1acfb3b6a1d256305249a74289e639ad6119d214c615864cef0f9ef3c1" cmd=["grpc_health_probe","-addr=:50051"] Jan 27 08:59:25 crc kubenswrapper[4985]: E0127 08:59:25.947818 4985 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 92948f1acfb3b6a1d256305249a74289e639ad6119d214c615864cef0f9ef3c1 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-f2gdx" podUID="e143ff56-0606-4500-bac1-21d0d3f607ee" containerName="registry-server" Jan 27 08:59:26 crc kubenswrapper[4985]: I0127 08:59:26.016641 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zm8w2"] Jan 27 08:59:26 crc kubenswrapper[4985]: I0127 08:59:26.021355 4985 generic.go:334] "Generic (PLEG): 
container finished" podID="ed57e787-5d65-4c3c-8a0f-f693481928ae" containerID="080280e744c6d48ef75d96557069fd28ef9f8ed509a6a612dc18b51ade773982" exitCode=0 Jan 27 08:59:26 crc kubenswrapper[4985]: I0127 08:59:26.021432 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hwclt" event={"ID":"ed57e787-5d65-4c3c-8a0f-f693481928ae","Type":"ContainerDied","Data":"080280e744c6d48ef75d96557069fd28ef9f8ed509a6a612dc18b51ade773982"} Jan 27 08:59:26 crc kubenswrapper[4985]: I0127 08:59:26.024484 4985 generic.go:334] "Generic (PLEG): container finished" podID="aac8abbf-f011-4386-89ed-afc8d4879670" containerID="4990f1e5bc8d5fe738c26a0caa957e800c174d05a69585b3f7d47f8ec2cc4cd8" exitCode=0 Jan 27 08:59:26 crc kubenswrapper[4985]: I0127 08:59:26.024600 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-cfmwq" event={"ID":"aac8abbf-f011-4386-89ed-afc8d4879670","Type":"ContainerDied","Data":"4990f1e5bc8d5fe738c26a0caa957e800c174d05a69585b3f7d47f8ec2cc4cd8"} Jan 27 08:59:26 crc kubenswrapper[4985]: I0127 08:59:26.024657 4985 scope.go:117] "RemoveContainer" containerID="400f21cbbef961dd3dccbbb569297622b9284d0f12b21b74315e0e966bfdf9f9" Jan 27 08:59:26 crc kubenswrapper[4985]: W0127 08:59:26.026001 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod795be290_3151_45f3_bdba_4a054aec68d9.slice/crio-ce74be4442c66cab8dc26f07e8e5704e4df478c2a73a0f88b53dcc781f2a12e9 WatchSource:0}: Error finding container ce74be4442c66cab8dc26f07e8e5704e4df478c2a73a0f88b53dcc781f2a12e9: Status 404 returned error can't find the container with id ce74be4442c66cab8dc26f07e8e5704e4df478c2a73a0f88b53dcc781f2a12e9 Jan 27 08:59:26 crc kubenswrapper[4985]: I0127 08:59:26.027393 4985 generic.go:334] "Generic (PLEG): container finished" podID="c4ea35ca-a06c-40d2-86c2-d2c0a99da089" 
containerID="91ecd2a2c2c600d35cf35173a4ed34cbc61fd738dec6ba7c7dcddb2fcda93bec" exitCode=0 Jan 27 08:59:26 crc kubenswrapper[4985]: I0127 08:59:26.027482 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lwlwv" event={"ID":"c4ea35ca-a06c-40d2-86c2-d2c0a99da089","Type":"ContainerDied","Data":"91ecd2a2c2c600d35cf35173a4ed34cbc61fd738dec6ba7c7dcddb2fcda93bec"} Jan 27 08:59:26 crc kubenswrapper[4985]: I0127 08:59:26.030757 4985 generic.go:334] "Generic (PLEG): container finished" podID="da9958bf-bf1b-4894-96a8-18b5b9fa3d46" containerID="a0295c44c7df6619edc0aad660024b88888aa5d39f319364acb9e684720384f6" exitCode=0 Jan 27 08:59:26 crc kubenswrapper[4985]: I0127 08:59:26.030836 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q7trj" event={"ID":"da9958bf-bf1b-4894-96a8-18b5b9fa3d46","Type":"ContainerDied","Data":"a0295c44c7df6619edc0aad660024b88888aa5d39f319364acb9e684720384f6"} Jan 27 08:59:26 crc kubenswrapper[4985]: I0127 08:59:26.033607 4985 generic.go:334] "Generic (PLEG): container finished" podID="e143ff56-0606-4500-bac1-21d0d3f607ee" containerID="92948f1acfb3b6a1d256305249a74289e639ad6119d214c615864cef0f9ef3c1" exitCode=0 Jan 27 08:59:26 crc kubenswrapper[4985]: I0127 08:59:26.033717 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f2gdx" event={"ID":"e143ff56-0606-4500-bac1-21d0d3f607ee","Type":"ContainerDied","Data":"92948f1acfb3b6a1d256305249a74289e639ad6119d214c615864cef0f9ef3c1"} Jan 27 08:59:26 crc kubenswrapper[4985]: I0127 08:59:26.437260 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-f2gdx" Jan 27 08:59:26 crc kubenswrapper[4985]: I0127 08:59:26.532912 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hwclt" Jan 27 08:59:26 crc kubenswrapper[4985]: I0127 08:59:26.561716 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e143ff56-0606-4500-bac1-21d0d3f607ee-catalog-content\") pod \"e143ff56-0606-4500-bac1-21d0d3f607ee\" (UID: \"e143ff56-0606-4500-bac1-21d0d3f607ee\") " Jan 27 08:59:26 crc kubenswrapper[4985]: I0127 08:59:26.561872 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e143ff56-0606-4500-bac1-21d0d3f607ee-utilities\") pod \"e143ff56-0606-4500-bac1-21d0d3f607ee\" (UID: \"e143ff56-0606-4500-bac1-21d0d3f607ee\") " Jan 27 08:59:26 crc kubenswrapper[4985]: I0127 08:59:26.561936 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8m66\" (UniqueName: \"kubernetes.io/projected/e143ff56-0606-4500-bac1-21d0d3f607ee-kube-api-access-c8m66\") pod \"e143ff56-0606-4500-bac1-21d0d3f607ee\" (UID: \"e143ff56-0606-4500-bac1-21d0d3f607ee\") " Jan 27 08:59:26 crc kubenswrapper[4985]: I0127 08:59:26.578939 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e143ff56-0606-4500-bac1-21d0d3f607ee-utilities" (OuterVolumeSpecName: "utilities") pod "e143ff56-0606-4500-bac1-21d0d3f607ee" (UID: "e143ff56-0606-4500-bac1-21d0d3f607ee"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:59:26 crc kubenswrapper[4985]: I0127 08:59:26.590217 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e143ff56-0606-4500-bac1-21d0d3f607ee-kube-api-access-c8m66" (OuterVolumeSpecName: "kube-api-access-c8m66") pod "e143ff56-0606-4500-bac1-21d0d3f607ee" (UID: "e143ff56-0606-4500-bac1-21d0d3f607ee"). InnerVolumeSpecName "kube-api-access-c8m66". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:59:26 crc kubenswrapper[4985]: I0127 08:59:26.615176 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lwlwv" Jan 27 08:59:26 crc kubenswrapper[4985]: I0127 08:59:26.620349 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-cfmwq" Jan 27 08:59:26 crc kubenswrapper[4985]: I0127 08:59:26.624444 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q7trj" Jan 27 08:59:26 crc kubenswrapper[4985]: I0127 08:59:26.639481 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e143ff56-0606-4500-bac1-21d0d3f607ee-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e143ff56-0606-4500-bac1-21d0d3f607ee" (UID: "e143ff56-0606-4500-bac1-21d0d3f607ee"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:59:26 crc kubenswrapper[4985]: I0127 08:59:26.662829 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed57e787-5d65-4c3c-8a0f-f693481928ae-catalog-content\") pod \"ed57e787-5d65-4c3c-8a0f-f693481928ae\" (UID: \"ed57e787-5d65-4c3c-8a0f-f693481928ae\") " Jan 27 08:59:26 crc kubenswrapper[4985]: I0127 08:59:26.662934 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed57e787-5d65-4c3c-8a0f-f693481928ae-utilities\") pod \"ed57e787-5d65-4c3c-8a0f-f693481928ae\" (UID: \"ed57e787-5d65-4c3c-8a0f-f693481928ae\") " Jan 27 08:59:26 crc kubenswrapper[4985]: I0127 08:59:26.662988 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2nmdk\" (UniqueName: \"kubernetes.io/projected/ed57e787-5d65-4c3c-8a0f-f693481928ae-kube-api-access-2nmdk\") pod \"ed57e787-5d65-4c3c-8a0f-f693481928ae\" (UID: \"ed57e787-5d65-4c3c-8a0f-f693481928ae\") " Jan 27 08:59:26 crc kubenswrapper[4985]: I0127 08:59:26.663373 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c8m66\" (UniqueName: \"kubernetes.io/projected/e143ff56-0606-4500-bac1-21d0d3f607ee-kube-api-access-c8m66\") on node \"crc\" DevicePath \"\"" Jan 27 08:59:26 crc kubenswrapper[4985]: I0127 08:59:26.663402 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e143ff56-0606-4500-bac1-21d0d3f607ee-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 08:59:26 crc kubenswrapper[4985]: I0127 08:59:26.663416 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e143ff56-0606-4500-bac1-21d0d3f607ee-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 08:59:26 crc kubenswrapper[4985]: I0127 08:59:26.665578 
4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed57e787-5d65-4c3c-8a0f-f693481928ae-utilities" (OuterVolumeSpecName: "utilities") pod "ed57e787-5d65-4c3c-8a0f-f693481928ae" (UID: "ed57e787-5d65-4c3c-8a0f-f693481928ae"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:59:26 crc kubenswrapper[4985]: I0127 08:59:26.667193 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed57e787-5d65-4c3c-8a0f-f693481928ae-kube-api-access-2nmdk" (OuterVolumeSpecName: "kube-api-access-2nmdk") pod "ed57e787-5d65-4c3c-8a0f-f693481928ae" (UID: "ed57e787-5d65-4c3c-8a0f-f693481928ae"). InnerVolumeSpecName "kube-api-access-2nmdk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:59:26 crc kubenswrapper[4985]: I0127 08:59:26.764649 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da9958bf-bf1b-4894-96a8-18b5b9fa3d46-catalog-content\") pod \"da9958bf-bf1b-4894-96a8-18b5b9fa3d46\" (UID: \"da9958bf-bf1b-4894-96a8-18b5b9fa3d46\") " Jan 27 08:59:26 crc kubenswrapper[4985]: I0127 08:59:26.764727 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aac8abbf-f011-4386-89ed-afc8d4879670-marketplace-trusted-ca\") pod \"aac8abbf-f011-4386-89ed-afc8d4879670\" (UID: \"aac8abbf-f011-4386-89ed-afc8d4879670\") " Jan 27 08:59:26 crc kubenswrapper[4985]: I0127 08:59:26.764756 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da9958bf-bf1b-4894-96a8-18b5b9fa3d46-utilities\") pod \"da9958bf-bf1b-4894-96a8-18b5b9fa3d46\" (UID: \"da9958bf-bf1b-4894-96a8-18b5b9fa3d46\") " Jan 27 08:59:26 crc kubenswrapper[4985]: I0127 08:59:26.764814 4985 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-4jprv\" (UniqueName: \"kubernetes.io/projected/c4ea35ca-a06c-40d2-86c2-d2c0a99da089-kube-api-access-4jprv\") pod \"c4ea35ca-a06c-40d2-86c2-d2c0a99da089\" (UID: \"c4ea35ca-a06c-40d2-86c2-d2c0a99da089\") " Jan 27 08:59:26 crc kubenswrapper[4985]: I0127 08:59:26.764874 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4ea35ca-a06c-40d2-86c2-d2c0a99da089-catalog-content\") pod \"c4ea35ca-a06c-40d2-86c2-d2c0a99da089\" (UID: \"c4ea35ca-a06c-40d2-86c2-d2c0a99da089\") " Jan 27 08:59:26 crc kubenswrapper[4985]: I0127 08:59:26.764896 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/aac8abbf-f011-4386-89ed-afc8d4879670-marketplace-operator-metrics\") pod \"aac8abbf-f011-4386-89ed-afc8d4879670\" (UID: \"aac8abbf-f011-4386-89ed-afc8d4879670\") " Jan 27 08:59:26 crc kubenswrapper[4985]: I0127 08:59:26.764917 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8t8r5\" (UniqueName: \"kubernetes.io/projected/da9958bf-bf1b-4894-96a8-18b5b9fa3d46-kube-api-access-8t8r5\") pod \"da9958bf-bf1b-4894-96a8-18b5b9fa3d46\" (UID: \"da9958bf-bf1b-4894-96a8-18b5b9fa3d46\") " Jan 27 08:59:26 crc kubenswrapper[4985]: I0127 08:59:26.764938 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5dvw\" (UniqueName: \"kubernetes.io/projected/aac8abbf-f011-4386-89ed-afc8d4879670-kube-api-access-q5dvw\") pod \"aac8abbf-f011-4386-89ed-afc8d4879670\" (UID: \"aac8abbf-f011-4386-89ed-afc8d4879670\") " Jan 27 08:59:26 crc kubenswrapper[4985]: I0127 08:59:26.765000 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4ea35ca-a06c-40d2-86c2-d2c0a99da089-utilities\") pod 
\"c4ea35ca-a06c-40d2-86c2-d2c0a99da089\" (UID: \"c4ea35ca-a06c-40d2-86c2-d2c0a99da089\") " Jan 27 08:59:26 crc kubenswrapper[4985]: I0127 08:59:26.765282 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed57e787-5d65-4c3c-8a0f-f693481928ae-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 08:59:26 crc kubenswrapper[4985]: I0127 08:59:26.765299 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2nmdk\" (UniqueName: \"kubernetes.io/projected/ed57e787-5d65-4c3c-8a0f-f693481928ae-kube-api-access-2nmdk\") on node \"crc\" DevicePath \"\"" Jan 27 08:59:26 crc kubenswrapper[4985]: I0127 08:59:26.766201 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4ea35ca-a06c-40d2-86c2-d2c0a99da089-utilities" (OuterVolumeSpecName: "utilities") pod "c4ea35ca-a06c-40d2-86c2-d2c0a99da089" (UID: "c4ea35ca-a06c-40d2-86c2-d2c0a99da089"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:59:26 crc kubenswrapper[4985]: I0127 08:59:26.766676 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aac8abbf-f011-4386-89ed-afc8d4879670-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "aac8abbf-f011-4386-89ed-afc8d4879670" (UID: "aac8abbf-f011-4386-89ed-afc8d4879670"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:59:26 crc kubenswrapper[4985]: I0127 08:59:26.767765 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da9958bf-bf1b-4894-96a8-18b5b9fa3d46-utilities" (OuterVolumeSpecName: "utilities") pod "da9958bf-bf1b-4894-96a8-18b5b9fa3d46" (UID: "da9958bf-bf1b-4894-96a8-18b5b9fa3d46"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:59:26 crc kubenswrapper[4985]: I0127 08:59:26.770357 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4ea35ca-a06c-40d2-86c2-d2c0a99da089-kube-api-access-4jprv" (OuterVolumeSpecName: "kube-api-access-4jprv") pod "c4ea35ca-a06c-40d2-86c2-d2c0a99da089" (UID: "c4ea35ca-a06c-40d2-86c2-d2c0a99da089"). InnerVolumeSpecName "kube-api-access-4jprv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:59:26 crc kubenswrapper[4985]: I0127 08:59:26.773075 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da9958bf-bf1b-4894-96a8-18b5b9fa3d46-kube-api-access-8t8r5" (OuterVolumeSpecName: "kube-api-access-8t8r5") pod "da9958bf-bf1b-4894-96a8-18b5b9fa3d46" (UID: "da9958bf-bf1b-4894-96a8-18b5b9fa3d46"). InnerVolumeSpecName "kube-api-access-8t8r5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:59:26 crc kubenswrapper[4985]: I0127 08:59:26.773074 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aac8abbf-f011-4386-89ed-afc8d4879670-kube-api-access-q5dvw" (OuterVolumeSpecName: "kube-api-access-q5dvw") pod "aac8abbf-f011-4386-89ed-afc8d4879670" (UID: "aac8abbf-f011-4386-89ed-afc8d4879670"). InnerVolumeSpecName "kube-api-access-q5dvw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:59:26 crc kubenswrapper[4985]: I0127 08:59:26.773419 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aac8abbf-f011-4386-89ed-afc8d4879670-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "aac8abbf-f011-4386-89ed-afc8d4879670" (UID: "aac8abbf-f011-4386-89ed-afc8d4879670"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:59:26 crc kubenswrapper[4985]: I0127 08:59:26.797649 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4ea35ca-a06c-40d2-86c2-d2c0a99da089-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c4ea35ca-a06c-40d2-86c2-d2c0a99da089" (UID: "c4ea35ca-a06c-40d2-86c2-d2c0a99da089"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:59:26 crc kubenswrapper[4985]: I0127 08:59:26.825000 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da9958bf-bf1b-4894-96a8-18b5b9fa3d46-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "da9958bf-bf1b-4894-96a8-18b5b9fa3d46" (UID: "da9958bf-bf1b-4894-96a8-18b5b9fa3d46"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:59:26 crc kubenswrapper[4985]: I0127 08:59:26.829419 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed57e787-5d65-4c3c-8a0f-f693481928ae-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ed57e787-5d65-4c3c-8a0f-f693481928ae" (UID: "ed57e787-5d65-4c3c-8a0f-f693481928ae"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:59:26 crc kubenswrapper[4985]: I0127 08:59:26.866271 4985 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aac8abbf-f011-4386-89ed-afc8d4879670-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 08:59:26 crc kubenswrapper[4985]: I0127 08:59:26.866315 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da9958bf-bf1b-4894-96a8-18b5b9fa3d46-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 08:59:26 crc kubenswrapper[4985]: I0127 08:59:26.866329 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4jprv\" (UniqueName: \"kubernetes.io/projected/c4ea35ca-a06c-40d2-86c2-d2c0a99da089-kube-api-access-4jprv\") on node \"crc\" DevicePath \"\"" Jan 27 08:59:26 crc kubenswrapper[4985]: I0127 08:59:26.866341 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed57e787-5d65-4c3c-8a0f-f693481928ae-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 08:59:26 crc kubenswrapper[4985]: I0127 08:59:26.866353 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4ea35ca-a06c-40d2-86c2-d2c0a99da089-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 08:59:26 crc kubenswrapper[4985]: I0127 08:59:26.866364 4985 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/aac8abbf-f011-4386-89ed-afc8d4879670-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 27 08:59:26 crc kubenswrapper[4985]: I0127 08:59:26.866380 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8t8r5\" (UniqueName: \"kubernetes.io/projected/da9958bf-bf1b-4894-96a8-18b5b9fa3d46-kube-api-access-8t8r5\") on node \"crc\" DevicePath \"\"" Jan 
27 08:59:26 crc kubenswrapper[4985]: I0127 08:59:26.866392 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q5dvw\" (UniqueName: \"kubernetes.io/projected/aac8abbf-f011-4386-89ed-afc8d4879670-kube-api-access-q5dvw\") on node \"crc\" DevicePath \"\"" Jan 27 08:59:26 crc kubenswrapper[4985]: I0127 08:59:26.866406 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4ea35ca-a06c-40d2-86c2-d2c0a99da089-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 08:59:26 crc kubenswrapper[4985]: I0127 08:59:26.866417 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da9958bf-bf1b-4894-96a8-18b5b9fa3d46-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 08:59:27 crc kubenswrapper[4985]: I0127 08:59:27.042143 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zm8w2" event={"ID":"795be290-3151-45f3-bdba-4a054aec68d9","Type":"ContainerStarted","Data":"cb8ee3bc72b66069d05e31b35283cc9110d93226d577ac25fd2d9be01dfed5fd"} Jan 27 08:59:27 crc kubenswrapper[4985]: I0127 08:59:27.042204 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zm8w2" event={"ID":"795be290-3151-45f3-bdba-4a054aec68d9","Type":"ContainerStarted","Data":"ce74be4442c66cab8dc26f07e8e5704e4df478c2a73a0f88b53dcc781f2a12e9"} Jan 27 08:59:27 crc kubenswrapper[4985]: I0127 08:59:27.043078 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-zm8w2" Jan 27 08:59:27 crc kubenswrapper[4985]: I0127 08:59:27.046611 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f2gdx" event={"ID":"e143ff56-0606-4500-bac1-21d0d3f607ee","Type":"ContainerDied","Data":"b0b875207b824a6ccf973c251b6a5c2a46f2c961190a9e2e58f672a48747ee52"} Jan 
27 08:59:27 crc kubenswrapper[4985]: I0127 08:59:27.046688 4985 scope.go:117] "RemoveContainer" containerID="92948f1acfb3b6a1d256305249a74289e639ad6119d214c615864cef0f9ef3c1" Jan 27 08:59:27 crc kubenswrapper[4985]: I0127 08:59:27.046707 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-f2gdx" Jan 27 08:59:27 crc kubenswrapper[4985]: I0127 08:59:27.048063 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-zm8w2" Jan 27 08:59:27 crc kubenswrapper[4985]: I0127 08:59:27.063027 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hwclt" event={"ID":"ed57e787-5d65-4c3c-8a0f-f693481928ae","Type":"ContainerDied","Data":"45775ccc64f8d422b7d95f294db833ce7f6fd3ba0a24dfb4bb3d5db98ea506ec"} Jan 27 08:59:27 crc kubenswrapper[4985]: I0127 08:59:27.063153 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hwclt" Jan 27 08:59:27 crc kubenswrapper[4985]: I0127 08:59:27.065813 4985 scope.go:117] "RemoveContainer" containerID="2ee2a45a10d2c59bcaebd52d6f4303a077a4d6c34f764594259a776d847d1984" Jan 27 08:59:27 crc kubenswrapper[4985]: I0127 08:59:27.072848 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-cfmwq" event={"ID":"aac8abbf-f011-4386-89ed-afc8d4879670","Type":"ContainerDied","Data":"d48c8cc75ef20d45f4103994bbf0f80d7fe50fc93302fee314a3116198138471"} Jan 27 08:59:27 crc kubenswrapper[4985]: I0127 08:59:27.073019 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-cfmwq" Jan 27 08:59:27 crc kubenswrapper[4985]: I0127 08:59:27.077226 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lwlwv" event={"ID":"c4ea35ca-a06c-40d2-86c2-d2c0a99da089","Type":"ContainerDied","Data":"3ce9228b820fc27b3c292a019053f7b697aa2f5bd835f9348e7001701e7fdf77"} Jan 27 08:59:27 crc kubenswrapper[4985]: I0127 08:59:27.077671 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lwlwv" Jan 27 08:59:27 crc kubenswrapper[4985]: I0127 08:59:27.078593 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-zm8w2" podStartSLOduration=2.078537201 podStartE2EDuration="2.078537201s" podCreationTimestamp="2026-01-27 08:59:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:59:27.067601225 +0000 UTC m=+351.358696086" watchObservedRunningTime="2026-01-27 08:59:27.078537201 +0000 UTC m=+351.369632042" Jan 27 08:59:27 crc kubenswrapper[4985]: I0127 08:59:27.081009 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q7trj" event={"ID":"da9958bf-bf1b-4894-96a8-18b5b9fa3d46","Type":"ContainerDied","Data":"594e1ec53817c94591ec6f9cd970228d75fe2d2364a8279e56a660b36bdb52b6"} Jan 27 08:59:27 crc kubenswrapper[4985]: I0127 08:59:27.081127 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-q7trj" Jan 27 08:59:27 crc kubenswrapper[4985]: I0127 08:59:27.117500 4985 scope.go:117] "RemoveContainer" containerID="f8080f180d2debc4567066e11f07fa4963a09e2b847ac17ce7d09cffd6ef90f1" Jan 27 08:59:27 crc kubenswrapper[4985]: I0127 08:59:27.131699 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-f2gdx"] Jan 27 08:59:27 crc kubenswrapper[4985]: I0127 08:59:27.135995 4985 scope.go:117] "RemoveContainer" containerID="080280e744c6d48ef75d96557069fd28ef9f8ed509a6a612dc18b51ade773982" Jan 27 08:59:27 crc kubenswrapper[4985]: I0127 08:59:27.155223 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-f2gdx"] Jan 27 08:59:27 crc kubenswrapper[4985]: I0127 08:59:27.155600 4985 scope.go:117] "RemoveContainer" containerID="00418bed6214ddcef0628809d34ff42d17282cb8d8351b70978a93ebfcde8e6a" Jan 27 08:59:27 crc kubenswrapper[4985]: I0127 08:59:27.168683 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cfmwq"] Jan 27 08:59:27 crc kubenswrapper[4985]: I0127 08:59:27.176244 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cfmwq"] Jan 27 08:59:27 crc kubenswrapper[4985]: I0127 08:59:27.180353 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lwlwv"] Jan 27 08:59:27 crc kubenswrapper[4985]: I0127 08:59:27.180426 4985 scope.go:117] "RemoveContainer" containerID="6da4f5b0d4edcac410fa7fc05d9a2adf47ff9d2e80f52fd1f9de826b819a8de8" Jan 27 08:59:27 crc kubenswrapper[4985]: I0127 08:59:27.182628 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lwlwv"] Jan 27 08:59:27 crc kubenswrapper[4985]: I0127 08:59:27.194502 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-hwclt"] Jan 27 08:59:27 crc kubenswrapper[4985]: I0127 08:59:27.197480 4985 scope.go:117] "RemoveContainer" containerID="4990f1e5bc8d5fe738c26a0caa957e800c174d05a69585b3f7d47f8ec2cc4cd8" Jan 27 08:59:27 crc kubenswrapper[4985]: I0127 08:59:27.206139 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hwclt"] Jan 27 08:59:27 crc kubenswrapper[4985]: I0127 08:59:27.211634 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-q7trj"] Jan 27 08:59:27 crc kubenswrapper[4985]: I0127 08:59:27.214856 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-q7trj"] Jan 27 08:59:27 crc kubenswrapper[4985]: I0127 08:59:27.215644 4985 scope.go:117] "RemoveContainer" containerID="91ecd2a2c2c600d35cf35173a4ed34cbc61fd738dec6ba7c7dcddb2fcda93bec" Jan 27 08:59:27 crc kubenswrapper[4985]: I0127 08:59:27.234418 4985 scope.go:117] "RemoveContainer" containerID="45acbdc1f00d1ea0e6723d7b0a657807eaab94575ab31eb35b86bc38aef4eeda" Jan 27 08:59:27 crc kubenswrapper[4985]: I0127 08:59:27.252032 4985 scope.go:117] "RemoveContainer" containerID="f92fdacd40ec95bf0dadbca5e186521bb92285e98b0e988bdf117f1ad8f55828" Jan 27 08:59:27 crc kubenswrapper[4985]: I0127 08:59:27.268115 4985 scope.go:117] "RemoveContainer" containerID="a0295c44c7df6619edc0aad660024b88888aa5d39f319364acb9e684720384f6" Jan 27 08:59:27 crc kubenswrapper[4985]: I0127 08:59:27.317624 4985 scope.go:117] "RemoveContainer" containerID="b83a8b6546b77fe300a9bc8ac8752fe8a69ce78e491c5f4bafeae1dae4b7cf35" Jan 27 08:59:27 crc kubenswrapper[4985]: I0127 08:59:27.333991 4985 scope.go:117] "RemoveContainer" containerID="a1753b595c13c29f6018ca0a00e2494de26b49c3655a89644f2fd73e54d01e99" Jan 27 08:59:27 crc kubenswrapper[4985]: I0127 08:59:27.573984 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-sqmzg"] Jan 27 
08:59:27 crc kubenswrapper[4985]: E0127 08:59:27.574345 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed57e787-5d65-4c3c-8a0f-f693481928ae" containerName="registry-server" Jan 27 08:59:27 crc kubenswrapper[4985]: I0127 08:59:27.574370 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed57e787-5d65-4c3c-8a0f-f693481928ae" containerName="registry-server" Jan 27 08:59:27 crc kubenswrapper[4985]: E0127 08:59:27.574389 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da9958bf-bf1b-4894-96a8-18b5b9fa3d46" containerName="extract-utilities" Jan 27 08:59:27 crc kubenswrapper[4985]: I0127 08:59:27.574404 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="da9958bf-bf1b-4894-96a8-18b5b9fa3d46" containerName="extract-utilities" Jan 27 08:59:27 crc kubenswrapper[4985]: E0127 08:59:27.574422 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e143ff56-0606-4500-bac1-21d0d3f607ee" containerName="registry-server" Jan 27 08:59:27 crc kubenswrapper[4985]: I0127 08:59:27.574436 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="e143ff56-0606-4500-bac1-21d0d3f607ee" containerName="registry-server" Jan 27 08:59:27 crc kubenswrapper[4985]: E0127 08:59:27.574485 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e143ff56-0606-4500-bac1-21d0d3f607ee" containerName="extract-content" Jan 27 08:59:27 crc kubenswrapper[4985]: I0127 08:59:27.574498 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="e143ff56-0606-4500-bac1-21d0d3f607ee" containerName="extract-content" Jan 27 08:59:27 crc kubenswrapper[4985]: E0127 08:59:27.574550 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e143ff56-0606-4500-bac1-21d0d3f607ee" containerName="extract-utilities" Jan 27 08:59:27 crc kubenswrapper[4985]: I0127 08:59:27.574565 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="e143ff56-0606-4500-bac1-21d0d3f607ee" containerName="extract-utilities" Jan 27 
08:59:27 crc kubenswrapper[4985]: E0127 08:59:27.574584 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed57e787-5d65-4c3c-8a0f-f693481928ae" containerName="extract-utilities" Jan 27 08:59:27 crc kubenswrapper[4985]: I0127 08:59:27.574599 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed57e787-5d65-4c3c-8a0f-f693481928ae" containerName="extract-utilities" Jan 27 08:59:27 crc kubenswrapper[4985]: E0127 08:59:27.574618 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da9958bf-bf1b-4894-96a8-18b5b9fa3d46" containerName="extract-content" Jan 27 08:59:27 crc kubenswrapper[4985]: I0127 08:59:27.574629 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="da9958bf-bf1b-4894-96a8-18b5b9fa3d46" containerName="extract-content" Jan 27 08:59:27 crc kubenswrapper[4985]: E0127 08:59:27.574647 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aac8abbf-f011-4386-89ed-afc8d4879670" containerName="marketplace-operator" Jan 27 08:59:27 crc kubenswrapper[4985]: I0127 08:59:27.574660 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="aac8abbf-f011-4386-89ed-afc8d4879670" containerName="marketplace-operator" Jan 27 08:59:27 crc kubenswrapper[4985]: E0127 08:59:27.574676 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aac8abbf-f011-4386-89ed-afc8d4879670" containerName="marketplace-operator" Jan 27 08:59:27 crc kubenswrapper[4985]: I0127 08:59:27.574688 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="aac8abbf-f011-4386-89ed-afc8d4879670" containerName="marketplace-operator" Jan 27 08:59:27 crc kubenswrapper[4985]: E0127 08:59:27.574711 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4ea35ca-a06c-40d2-86c2-d2c0a99da089" containerName="extract-utilities" Jan 27 08:59:27 crc kubenswrapper[4985]: I0127 08:59:27.574723 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4ea35ca-a06c-40d2-86c2-d2c0a99da089" 
containerName="extract-utilities" Jan 27 08:59:27 crc kubenswrapper[4985]: E0127 08:59:27.574740 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4ea35ca-a06c-40d2-86c2-d2c0a99da089" containerName="extract-content" Jan 27 08:59:27 crc kubenswrapper[4985]: I0127 08:59:27.574754 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4ea35ca-a06c-40d2-86c2-d2c0a99da089" containerName="extract-content" Jan 27 08:59:27 crc kubenswrapper[4985]: E0127 08:59:27.574771 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4ea35ca-a06c-40d2-86c2-d2c0a99da089" containerName="registry-server" Jan 27 08:59:27 crc kubenswrapper[4985]: I0127 08:59:27.574784 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4ea35ca-a06c-40d2-86c2-d2c0a99da089" containerName="registry-server" Jan 27 08:59:27 crc kubenswrapper[4985]: E0127 08:59:27.574800 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed57e787-5d65-4c3c-8a0f-f693481928ae" containerName="extract-content" Jan 27 08:59:27 crc kubenswrapper[4985]: I0127 08:59:27.574812 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed57e787-5d65-4c3c-8a0f-f693481928ae" containerName="extract-content" Jan 27 08:59:27 crc kubenswrapper[4985]: E0127 08:59:27.574847 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da9958bf-bf1b-4894-96a8-18b5b9fa3d46" containerName="registry-server" Jan 27 08:59:27 crc kubenswrapper[4985]: I0127 08:59:27.574860 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="da9958bf-bf1b-4894-96a8-18b5b9fa3d46" containerName="registry-server" Jan 27 08:59:27 crc kubenswrapper[4985]: I0127 08:59:27.575065 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="aac8abbf-f011-4386-89ed-afc8d4879670" containerName="marketplace-operator" Jan 27 08:59:27 crc kubenswrapper[4985]: I0127 08:59:27.575089 4985 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="ed57e787-5d65-4c3c-8a0f-f693481928ae" containerName="registry-server" Jan 27 08:59:27 crc kubenswrapper[4985]: I0127 08:59:27.575105 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="e143ff56-0606-4500-bac1-21d0d3f607ee" containerName="registry-server" Jan 27 08:59:27 crc kubenswrapper[4985]: I0127 08:59:27.575124 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="da9958bf-bf1b-4894-96a8-18b5b9fa3d46" containerName="registry-server" Jan 27 08:59:27 crc kubenswrapper[4985]: I0127 08:59:27.575142 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4ea35ca-a06c-40d2-86c2-d2c0a99da089" containerName="registry-server" Jan 27 08:59:27 crc kubenswrapper[4985]: I0127 08:59:27.575158 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="aac8abbf-f011-4386-89ed-afc8d4879670" containerName="marketplace-operator" Jan 27 08:59:27 crc kubenswrapper[4985]: I0127 08:59:27.576501 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sqmzg" Jan 27 08:59:27 crc kubenswrapper[4985]: I0127 08:59:27.579038 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 27 08:59:27 crc kubenswrapper[4985]: I0127 08:59:27.581293 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sqmzg"] Jan 27 08:59:27 crc kubenswrapper[4985]: I0127 08:59:27.679533 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d5830bf-84c1-46df-88b1-72400d395500-catalog-content\") pod \"community-operators-sqmzg\" (UID: \"3d5830bf-84c1-46df-88b1-72400d395500\") " pod="openshift-marketplace/community-operators-sqmzg" Jan 27 08:59:27 crc kubenswrapper[4985]: I0127 08:59:27.679618 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d5830bf-84c1-46df-88b1-72400d395500-utilities\") pod \"community-operators-sqmzg\" (UID: \"3d5830bf-84c1-46df-88b1-72400d395500\") " pod="openshift-marketplace/community-operators-sqmzg" Jan 27 08:59:27 crc kubenswrapper[4985]: I0127 08:59:27.679707 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fsfg\" (UniqueName: \"kubernetes.io/projected/3d5830bf-84c1-46df-88b1-72400d395500-kube-api-access-6fsfg\") pod \"community-operators-sqmzg\" (UID: \"3d5830bf-84c1-46df-88b1-72400d395500\") " pod="openshift-marketplace/community-operators-sqmzg" Jan 27 08:59:27 crc kubenswrapper[4985]: I0127 08:59:27.781379 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fsfg\" (UniqueName: \"kubernetes.io/projected/3d5830bf-84c1-46df-88b1-72400d395500-kube-api-access-6fsfg\") pod \"community-operators-sqmzg\" 
(UID: \"3d5830bf-84c1-46df-88b1-72400d395500\") " pod="openshift-marketplace/community-operators-sqmzg" Jan 27 08:59:27 crc kubenswrapper[4985]: I0127 08:59:27.781456 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d5830bf-84c1-46df-88b1-72400d395500-catalog-content\") pod \"community-operators-sqmzg\" (UID: \"3d5830bf-84c1-46df-88b1-72400d395500\") " pod="openshift-marketplace/community-operators-sqmzg" Jan 27 08:59:27 crc kubenswrapper[4985]: I0127 08:59:27.781528 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d5830bf-84c1-46df-88b1-72400d395500-utilities\") pod \"community-operators-sqmzg\" (UID: \"3d5830bf-84c1-46df-88b1-72400d395500\") " pod="openshift-marketplace/community-operators-sqmzg" Jan 27 08:59:27 crc kubenswrapper[4985]: I0127 08:59:27.782065 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d5830bf-84c1-46df-88b1-72400d395500-catalog-content\") pod \"community-operators-sqmzg\" (UID: \"3d5830bf-84c1-46df-88b1-72400d395500\") " pod="openshift-marketplace/community-operators-sqmzg" Jan 27 08:59:27 crc kubenswrapper[4985]: I0127 08:59:27.782172 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d5830bf-84c1-46df-88b1-72400d395500-utilities\") pod \"community-operators-sqmzg\" (UID: \"3d5830bf-84c1-46df-88b1-72400d395500\") " pod="openshift-marketplace/community-operators-sqmzg" Jan 27 08:59:27 crc kubenswrapper[4985]: I0127 08:59:27.804668 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fsfg\" (UniqueName: \"kubernetes.io/projected/3d5830bf-84c1-46df-88b1-72400d395500-kube-api-access-6fsfg\") pod \"community-operators-sqmzg\" (UID: \"3d5830bf-84c1-46df-88b1-72400d395500\") " 
pod="openshift-marketplace/community-operators-sqmzg" Jan 27 08:59:27 crc kubenswrapper[4985]: I0127 08:59:27.891850 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sqmzg" Jan 27 08:59:28 crc kubenswrapper[4985]: I0127 08:59:28.323007 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sqmzg"] Jan 27 08:59:28 crc kubenswrapper[4985]: W0127 08:59:28.329242 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d5830bf_84c1_46df_88b1_72400d395500.slice/crio-3defa42df3fab9d392323f40e52a83c64a8ca326f53deebe54ba3d62ed65bc99 WatchSource:0}: Error finding container 3defa42df3fab9d392323f40e52a83c64a8ca326f53deebe54ba3d62ed65bc99: Status 404 returned error can't find the container with id 3defa42df3fab9d392323f40e52a83c64a8ca326f53deebe54ba3d62ed65bc99 Jan 27 08:59:28 crc kubenswrapper[4985]: I0127 08:59:28.463773 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aac8abbf-f011-4386-89ed-afc8d4879670" path="/var/lib/kubelet/pods/aac8abbf-f011-4386-89ed-afc8d4879670/volumes" Jan 27 08:59:28 crc kubenswrapper[4985]: I0127 08:59:28.464866 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4ea35ca-a06c-40d2-86c2-d2c0a99da089" path="/var/lib/kubelet/pods/c4ea35ca-a06c-40d2-86c2-d2c0a99da089/volumes" Jan 27 08:59:28 crc kubenswrapper[4985]: I0127 08:59:28.465524 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da9958bf-bf1b-4894-96a8-18b5b9fa3d46" path="/var/lib/kubelet/pods/da9958bf-bf1b-4894-96a8-18b5b9fa3d46/volumes" Jan 27 08:59:28 crc kubenswrapper[4985]: I0127 08:59:28.466524 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e143ff56-0606-4500-bac1-21d0d3f607ee" path="/var/lib/kubelet/pods/e143ff56-0606-4500-bac1-21d0d3f607ee/volumes" Jan 27 08:59:28 crc kubenswrapper[4985]: 
I0127 08:59:28.467214 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed57e787-5d65-4c3c-8a0f-f693481928ae" path="/var/lib/kubelet/pods/ed57e787-5d65-4c3c-8a0f-f693481928ae/volumes" Jan 27 08:59:29 crc kubenswrapper[4985]: I0127 08:59:29.104895 4985 generic.go:334] "Generic (PLEG): container finished" podID="3d5830bf-84c1-46df-88b1-72400d395500" containerID="90f1f42f9319b645ddf4552fa1d641798e91cd17f7bb428fc2baa1739519d5e7" exitCode=0 Jan 27 08:59:29 crc kubenswrapper[4985]: I0127 08:59:29.104996 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sqmzg" event={"ID":"3d5830bf-84c1-46df-88b1-72400d395500","Type":"ContainerDied","Data":"90f1f42f9319b645ddf4552fa1d641798e91cd17f7bb428fc2baa1739519d5e7"} Jan 27 08:59:29 crc kubenswrapper[4985]: I0127 08:59:29.105055 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sqmzg" event={"ID":"3d5830bf-84c1-46df-88b1-72400d395500","Type":"ContainerStarted","Data":"3defa42df3fab9d392323f40e52a83c64a8ca326f53deebe54ba3d62ed65bc99"} Jan 27 08:59:29 crc kubenswrapper[4985]: I0127 08:59:29.368955 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kxpvj"] Jan 27 08:59:29 crc kubenswrapper[4985]: I0127 08:59:29.370251 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kxpvj" Jan 27 08:59:29 crc kubenswrapper[4985]: I0127 08:59:29.376164 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 27 08:59:29 crc kubenswrapper[4985]: I0127 08:59:29.393177 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kxpvj"] Jan 27 08:59:29 crc kubenswrapper[4985]: I0127 08:59:29.507445 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zk98v\" (UniqueName: \"kubernetes.io/projected/4335a2c0-14aa-4423-8527-22a5fe08f48d-kube-api-access-zk98v\") pod \"redhat-operators-kxpvj\" (UID: \"4335a2c0-14aa-4423-8527-22a5fe08f48d\") " pod="openshift-marketplace/redhat-operators-kxpvj" Jan 27 08:59:29 crc kubenswrapper[4985]: I0127 08:59:29.507504 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4335a2c0-14aa-4423-8527-22a5fe08f48d-catalog-content\") pod \"redhat-operators-kxpvj\" (UID: \"4335a2c0-14aa-4423-8527-22a5fe08f48d\") " pod="openshift-marketplace/redhat-operators-kxpvj" Jan 27 08:59:29 crc kubenswrapper[4985]: I0127 08:59:29.507565 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4335a2c0-14aa-4423-8527-22a5fe08f48d-utilities\") pod \"redhat-operators-kxpvj\" (UID: \"4335a2c0-14aa-4423-8527-22a5fe08f48d\") " pod="openshift-marketplace/redhat-operators-kxpvj" Jan 27 08:59:29 crc kubenswrapper[4985]: I0127 08:59:29.609630 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zk98v\" (UniqueName: \"kubernetes.io/projected/4335a2c0-14aa-4423-8527-22a5fe08f48d-kube-api-access-zk98v\") pod \"redhat-operators-kxpvj\" (UID: 
\"4335a2c0-14aa-4423-8527-22a5fe08f48d\") " pod="openshift-marketplace/redhat-operators-kxpvj" Jan 27 08:59:29 crc kubenswrapper[4985]: I0127 08:59:29.609708 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4335a2c0-14aa-4423-8527-22a5fe08f48d-catalog-content\") pod \"redhat-operators-kxpvj\" (UID: \"4335a2c0-14aa-4423-8527-22a5fe08f48d\") " pod="openshift-marketplace/redhat-operators-kxpvj" Jan 27 08:59:29 crc kubenswrapper[4985]: I0127 08:59:29.609776 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4335a2c0-14aa-4423-8527-22a5fe08f48d-utilities\") pod \"redhat-operators-kxpvj\" (UID: \"4335a2c0-14aa-4423-8527-22a5fe08f48d\") " pod="openshift-marketplace/redhat-operators-kxpvj" Jan 27 08:59:29 crc kubenswrapper[4985]: I0127 08:59:29.610399 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4335a2c0-14aa-4423-8527-22a5fe08f48d-catalog-content\") pod \"redhat-operators-kxpvj\" (UID: \"4335a2c0-14aa-4423-8527-22a5fe08f48d\") " pod="openshift-marketplace/redhat-operators-kxpvj" Jan 27 08:59:29 crc kubenswrapper[4985]: I0127 08:59:29.610607 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4335a2c0-14aa-4423-8527-22a5fe08f48d-utilities\") pod \"redhat-operators-kxpvj\" (UID: \"4335a2c0-14aa-4423-8527-22a5fe08f48d\") " pod="openshift-marketplace/redhat-operators-kxpvj" Jan 27 08:59:29 crc kubenswrapper[4985]: I0127 08:59:29.633143 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zk98v\" (UniqueName: \"kubernetes.io/projected/4335a2c0-14aa-4423-8527-22a5fe08f48d-kube-api-access-zk98v\") pod \"redhat-operators-kxpvj\" (UID: \"4335a2c0-14aa-4423-8527-22a5fe08f48d\") " 
pod="openshift-marketplace/redhat-operators-kxpvj" Jan 27 08:59:29 crc kubenswrapper[4985]: I0127 08:59:29.696014 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kxpvj" Jan 27 08:59:29 crc kubenswrapper[4985]: I0127 08:59:29.974732 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8tbrf"] Jan 27 08:59:29 crc kubenswrapper[4985]: I0127 08:59:29.980507 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8tbrf" Jan 27 08:59:29 crc kubenswrapper[4985]: I0127 08:59:29.987941 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 27 08:59:30 crc kubenswrapper[4985]: I0127 08:59:30.002423 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8tbrf"] Jan 27 08:59:30 crc kubenswrapper[4985]: I0127 08:59:30.134915 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f87933b8-24d4-4124-9902-29626502bb84-catalog-content\") pod \"certified-operators-8tbrf\" (UID: \"f87933b8-24d4-4124-9902-29626502bb84\") " pod="openshift-marketplace/certified-operators-8tbrf" Jan 27 08:59:30 crc kubenswrapper[4985]: I0127 08:59:30.134993 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mts8r\" (UniqueName: \"kubernetes.io/projected/f87933b8-24d4-4124-9902-29626502bb84-kube-api-access-mts8r\") pod \"certified-operators-8tbrf\" (UID: \"f87933b8-24d4-4124-9902-29626502bb84\") " pod="openshift-marketplace/certified-operators-8tbrf" Jan 27 08:59:30 crc kubenswrapper[4985]: I0127 08:59:30.135044 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/f87933b8-24d4-4124-9902-29626502bb84-utilities\") pod \"certified-operators-8tbrf\" (UID: \"f87933b8-24d4-4124-9902-29626502bb84\") " pod="openshift-marketplace/certified-operators-8tbrf" Jan 27 08:59:30 crc kubenswrapper[4985]: I0127 08:59:30.160129 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kxpvj"] Jan 27 08:59:30 crc kubenswrapper[4985]: I0127 08:59:30.237178 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f87933b8-24d4-4124-9902-29626502bb84-catalog-content\") pod \"certified-operators-8tbrf\" (UID: \"f87933b8-24d4-4124-9902-29626502bb84\") " pod="openshift-marketplace/certified-operators-8tbrf" Jan 27 08:59:30 crc kubenswrapper[4985]: I0127 08:59:30.237261 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mts8r\" (UniqueName: \"kubernetes.io/projected/f87933b8-24d4-4124-9902-29626502bb84-kube-api-access-mts8r\") pod \"certified-operators-8tbrf\" (UID: \"f87933b8-24d4-4124-9902-29626502bb84\") " pod="openshift-marketplace/certified-operators-8tbrf" Jan 27 08:59:30 crc kubenswrapper[4985]: I0127 08:59:30.237311 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f87933b8-24d4-4124-9902-29626502bb84-utilities\") pod \"certified-operators-8tbrf\" (UID: \"f87933b8-24d4-4124-9902-29626502bb84\") " pod="openshift-marketplace/certified-operators-8tbrf" Jan 27 08:59:30 crc kubenswrapper[4985]: I0127 08:59:30.237918 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f87933b8-24d4-4124-9902-29626502bb84-catalog-content\") pod \"certified-operators-8tbrf\" (UID: \"f87933b8-24d4-4124-9902-29626502bb84\") " pod="openshift-marketplace/certified-operators-8tbrf" Jan 27 08:59:30 crc 
kubenswrapper[4985]: I0127 08:59:30.237990 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f87933b8-24d4-4124-9902-29626502bb84-utilities\") pod \"certified-operators-8tbrf\" (UID: \"f87933b8-24d4-4124-9902-29626502bb84\") " pod="openshift-marketplace/certified-operators-8tbrf" Jan 27 08:59:30 crc kubenswrapper[4985]: I0127 08:59:30.259448 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mts8r\" (UniqueName: \"kubernetes.io/projected/f87933b8-24d4-4124-9902-29626502bb84-kube-api-access-mts8r\") pod \"certified-operators-8tbrf\" (UID: \"f87933b8-24d4-4124-9902-29626502bb84\") " pod="openshift-marketplace/certified-operators-8tbrf" Jan 27 08:59:30 crc kubenswrapper[4985]: I0127 08:59:30.310256 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8tbrf" Jan 27 08:59:30 crc kubenswrapper[4985]: I0127 08:59:30.585346 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8tbrf"] Jan 27 08:59:31 crc kubenswrapper[4985]: I0127 08:59:31.121550 4985 generic.go:334] "Generic (PLEG): container finished" podID="3d5830bf-84c1-46df-88b1-72400d395500" containerID="c3906b752ed35adf8c5a69b1a0e5c0b5916ef2abae1f18efb6271ca3697720d1" exitCode=0 Jan 27 08:59:31 crc kubenswrapper[4985]: I0127 08:59:31.121626 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sqmzg" event={"ID":"3d5830bf-84c1-46df-88b1-72400d395500","Type":"ContainerDied","Data":"c3906b752ed35adf8c5a69b1a0e5c0b5916ef2abae1f18efb6271ca3697720d1"} Jan 27 08:59:31 crc kubenswrapper[4985]: I0127 08:59:31.124870 4985 generic.go:334] "Generic (PLEG): container finished" podID="f87933b8-24d4-4124-9902-29626502bb84" containerID="5d75ebfe96dda30675210f8296e018faef1a75ae9e714b1fc64e52438bc2df53" exitCode=0 Jan 27 08:59:31 crc kubenswrapper[4985]: 
I0127 08:59:31.124966 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8tbrf" event={"ID":"f87933b8-24d4-4124-9902-29626502bb84","Type":"ContainerDied","Data":"5d75ebfe96dda30675210f8296e018faef1a75ae9e714b1fc64e52438bc2df53"} Jan 27 08:59:31 crc kubenswrapper[4985]: I0127 08:59:31.125094 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8tbrf" event={"ID":"f87933b8-24d4-4124-9902-29626502bb84","Type":"ContainerStarted","Data":"aa62805190d982290526ccc157361e442a9610ea3d6da9839f58365ad2d27e78"} Jan 27 08:59:31 crc kubenswrapper[4985]: I0127 08:59:31.134734 4985 generic.go:334] "Generic (PLEG): container finished" podID="4335a2c0-14aa-4423-8527-22a5fe08f48d" containerID="9eee795f744e6c61da1af358ebd347df4abf300256c36b7be910bcf1fba63f2d" exitCode=0 Jan 27 08:59:31 crc kubenswrapper[4985]: I0127 08:59:31.134805 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kxpvj" event={"ID":"4335a2c0-14aa-4423-8527-22a5fe08f48d","Type":"ContainerDied","Data":"9eee795f744e6c61da1af358ebd347df4abf300256c36b7be910bcf1fba63f2d"} Jan 27 08:59:31 crc kubenswrapper[4985]: I0127 08:59:31.134853 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kxpvj" event={"ID":"4335a2c0-14aa-4423-8527-22a5fe08f48d","Type":"ContainerStarted","Data":"5b8c1764d097a48454b0c87928ded677a60b92961602793b0dc1311e4e0fc44f"} Jan 27 08:59:31 crc kubenswrapper[4985]: I0127 08:59:31.772835 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qcvb7"] Jan 27 08:59:31 crc kubenswrapper[4985]: I0127 08:59:31.773882 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qcvb7" Jan 27 08:59:31 crc kubenswrapper[4985]: I0127 08:59:31.776901 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 27 08:59:31 crc kubenswrapper[4985]: I0127 08:59:31.825656 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qcvb7"] Jan 27 08:59:31 crc kubenswrapper[4985]: I0127 08:59:31.861964 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4pf6\" (UniqueName: \"kubernetes.io/projected/56c8c3f8-8727-4ae8-9e43-34fc282cbf9d-kube-api-access-w4pf6\") pod \"redhat-marketplace-qcvb7\" (UID: \"56c8c3f8-8727-4ae8-9e43-34fc282cbf9d\") " pod="openshift-marketplace/redhat-marketplace-qcvb7" Jan 27 08:59:31 crc kubenswrapper[4985]: I0127 08:59:31.862059 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56c8c3f8-8727-4ae8-9e43-34fc282cbf9d-catalog-content\") pod \"redhat-marketplace-qcvb7\" (UID: \"56c8c3f8-8727-4ae8-9e43-34fc282cbf9d\") " pod="openshift-marketplace/redhat-marketplace-qcvb7" Jan 27 08:59:31 crc kubenswrapper[4985]: I0127 08:59:31.862266 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56c8c3f8-8727-4ae8-9e43-34fc282cbf9d-utilities\") pod \"redhat-marketplace-qcvb7\" (UID: \"56c8c3f8-8727-4ae8-9e43-34fc282cbf9d\") " pod="openshift-marketplace/redhat-marketplace-qcvb7" Jan 27 08:59:31 crc kubenswrapper[4985]: I0127 08:59:31.963704 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4pf6\" (UniqueName: \"kubernetes.io/projected/56c8c3f8-8727-4ae8-9e43-34fc282cbf9d-kube-api-access-w4pf6\") pod \"redhat-marketplace-qcvb7\" (UID: 
\"56c8c3f8-8727-4ae8-9e43-34fc282cbf9d\") " pod="openshift-marketplace/redhat-marketplace-qcvb7" Jan 27 08:59:31 crc kubenswrapper[4985]: I0127 08:59:31.963826 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56c8c3f8-8727-4ae8-9e43-34fc282cbf9d-catalog-content\") pod \"redhat-marketplace-qcvb7\" (UID: \"56c8c3f8-8727-4ae8-9e43-34fc282cbf9d\") " pod="openshift-marketplace/redhat-marketplace-qcvb7" Jan 27 08:59:31 crc kubenswrapper[4985]: I0127 08:59:31.963858 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56c8c3f8-8727-4ae8-9e43-34fc282cbf9d-utilities\") pod \"redhat-marketplace-qcvb7\" (UID: \"56c8c3f8-8727-4ae8-9e43-34fc282cbf9d\") " pod="openshift-marketplace/redhat-marketplace-qcvb7" Jan 27 08:59:31 crc kubenswrapper[4985]: I0127 08:59:31.964418 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56c8c3f8-8727-4ae8-9e43-34fc282cbf9d-utilities\") pod \"redhat-marketplace-qcvb7\" (UID: \"56c8c3f8-8727-4ae8-9e43-34fc282cbf9d\") " pod="openshift-marketplace/redhat-marketplace-qcvb7" Jan 27 08:59:31 crc kubenswrapper[4985]: I0127 08:59:31.964562 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56c8c3f8-8727-4ae8-9e43-34fc282cbf9d-catalog-content\") pod \"redhat-marketplace-qcvb7\" (UID: \"56c8c3f8-8727-4ae8-9e43-34fc282cbf9d\") " pod="openshift-marketplace/redhat-marketplace-qcvb7" Jan 27 08:59:31 crc kubenswrapper[4985]: I0127 08:59:31.987496 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4pf6\" (UniqueName: \"kubernetes.io/projected/56c8c3f8-8727-4ae8-9e43-34fc282cbf9d-kube-api-access-w4pf6\") pod \"redhat-marketplace-qcvb7\" (UID: \"56c8c3f8-8727-4ae8-9e43-34fc282cbf9d\") " 
pod="openshift-marketplace/redhat-marketplace-qcvb7" Jan 27 08:59:32 crc kubenswrapper[4985]: I0127 08:59:32.097393 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qcvb7" Jan 27 08:59:33 crc kubenswrapper[4985]: I0127 08:59:33.011711 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qcvb7"] Jan 27 08:59:33 crc kubenswrapper[4985]: W0127 08:59:33.020957 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod56c8c3f8_8727_4ae8_9e43_34fc282cbf9d.slice/crio-4e519f82d4b75da651ad2deaeecbbb5b3ee13556ac5993825a0521004bb5403e WatchSource:0}: Error finding container 4e519f82d4b75da651ad2deaeecbbb5b3ee13556ac5993825a0521004bb5403e: Status 404 returned error can't find the container with id 4e519f82d4b75da651ad2deaeecbbb5b3ee13556ac5993825a0521004bb5403e Jan 27 08:59:33 crc kubenswrapper[4985]: I0127 08:59:33.148163 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qcvb7" event={"ID":"56c8c3f8-8727-4ae8-9e43-34fc282cbf9d","Type":"ContainerStarted","Data":"4e519f82d4b75da651ad2deaeecbbb5b3ee13556ac5993825a0521004bb5403e"} Jan 27 08:59:34 crc kubenswrapper[4985]: I0127 08:59:34.162076 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sqmzg" event={"ID":"3d5830bf-84c1-46df-88b1-72400d395500","Type":"ContainerStarted","Data":"14304f10c4e5c5612108c00910413c2b8e9249da21d2b31d5b1214ab10e321cc"} Jan 27 08:59:34 crc kubenswrapper[4985]: I0127 08:59:34.186351 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-sqmzg" podStartSLOduration=3.680685982 podStartE2EDuration="7.186329302s" podCreationTimestamp="2026-01-27 08:59:27 +0000 UTC" firstStartedPulling="2026-01-27 08:59:29.106466381 +0000 UTC m=+353.397561222" 
lastFinishedPulling="2026-01-27 08:59:32.612109701 +0000 UTC m=+356.903204542" observedRunningTime="2026-01-27 08:59:34.185137649 +0000 UTC m=+358.476232490" watchObservedRunningTime="2026-01-27 08:59:34.186329302 +0000 UTC m=+358.477424143" Jan 27 08:59:35 crc kubenswrapper[4985]: I0127 08:59:35.169543 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kxpvj" event={"ID":"4335a2c0-14aa-4423-8527-22a5fe08f48d","Type":"ContainerStarted","Data":"1c69619844cd01b7baacfcb2a7eaa05b0808a258dbb70c8f63487845e953d751"} Jan 27 08:59:35 crc kubenswrapper[4985]: I0127 08:59:35.171735 4985 generic.go:334] "Generic (PLEG): container finished" podID="56c8c3f8-8727-4ae8-9e43-34fc282cbf9d" containerID="5a6410b2a7eb1413234f6056812186d0313011ed54c1a92204f272b0ccc213e3" exitCode=0 Jan 27 08:59:35 crc kubenswrapper[4985]: I0127 08:59:35.171797 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qcvb7" event={"ID":"56c8c3f8-8727-4ae8-9e43-34fc282cbf9d","Type":"ContainerDied","Data":"5a6410b2a7eb1413234f6056812186d0313011ed54c1a92204f272b0ccc213e3"} Jan 27 08:59:35 crc kubenswrapper[4985]: I0127 08:59:35.175085 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8tbrf" event={"ID":"f87933b8-24d4-4124-9902-29626502bb84","Type":"ContainerStarted","Data":"26b1ef4662ba20b8dc9d3d56a4d65264ca47d2d5f1b5e227be975799190ab961"} Jan 27 08:59:36 crc kubenswrapper[4985]: I0127 08:59:36.186506 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qcvb7" event={"ID":"56c8c3f8-8727-4ae8-9e43-34fc282cbf9d","Type":"ContainerStarted","Data":"05db949110b56ae289ac79156237f07f7132bd2f90c7d8018f709851260f3854"} Jan 27 08:59:36 crc kubenswrapper[4985]: I0127 08:59:36.192612 4985 generic.go:334] "Generic (PLEG): container finished" podID="f87933b8-24d4-4124-9902-29626502bb84" 
containerID="26b1ef4662ba20b8dc9d3d56a4d65264ca47d2d5f1b5e227be975799190ab961" exitCode=0 Jan 27 08:59:36 crc kubenswrapper[4985]: I0127 08:59:36.192703 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8tbrf" event={"ID":"f87933b8-24d4-4124-9902-29626502bb84","Type":"ContainerDied","Data":"26b1ef4662ba20b8dc9d3d56a4d65264ca47d2d5f1b5e227be975799190ab961"} Jan 27 08:59:36 crc kubenswrapper[4985]: I0127 08:59:36.197118 4985 generic.go:334] "Generic (PLEG): container finished" podID="4335a2c0-14aa-4423-8527-22a5fe08f48d" containerID="1c69619844cd01b7baacfcb2a7eaa05b0808a258dbb70c8f63487845e953d751" exitCode=0 Jan 27 08:59:36 crc kubenswrapper[4985]: I0127 08:59:36.197165 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kxpvj" event={"ID":"4335a2c0-14aa-4423-8527-22a5fe08f48d","Type":"ContainerDied","Data":"1c69619844cd01b7baacfcb2a7eaa05b0808a258dbb70c8f63487845e953d751"} Jan 27 08:59:37 crc kubenswrapper[4985]: I0127 08:59:37.209967 4985 generic.go:334] "Generic (PLEG): container finished" podID="56c8c3f8-8727-4ae8-9e43-34fc282cbf9d" containerID="05db949110b56ae289ac79156237f07f7132bd2f90c7d8018f709851260f3854" exitCode=0 Jan 27 08:59:37 crc kubenswrapper[4985]: I0127 08:59:37.210502 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qcvb7" event={"ID":"56c8c3f8-8727-4ae8-9e43-34fc282cbf9d","Type":"ContainerDied","Data":"05db949110b56ae289ac79156237f07f7132bd2f90c7d8018f709851260f3854"} Jan 27 08:59:37 crc kubenswrapper[4985]: I0127 08:59:37.892898 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-sqmzg" Jan 27 08:59:37 crc kubenswrapper[4985]: I0127 08:59:37.893397 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-sqmzg" Jan 27 08:59:37 crc kubenswrapper[4985]: I0127 
08:59:37.938137 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-sqmzg" Jan 27 08:59:38 crc kubenswrapper[4985]: I0127 08:59:38.220767 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qcvb7" event={"ID":"56c8c3f8-8727-4ae8-9e43-34fc282cbf9d","Type":"ContainerStarted","Data":"3e35ac1caaae9195ebaafe305d043bcf1abb2d31c9685cf5d33f595be8426fa8"} Jan 27 08:59:38 crc kubenswrapper[4985]: I0127 08:59:38.224124 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8tbrf" event={"ID":"f87933b8-24d4-4124-9902-29626502bb84","Type":"ContainerStarted","Data":"89a84879acca1bba9e564b376d2106d8e338ffaa705c621b89e10ed2cadb0ff2"} Jan 27 08:59:38 crc kubenswrapper[4985]: I0127 08:59:38.227116 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kxpvj" event={"ID":"4335a2c0-14aa-4423-8527-22a5fe08f48d","Type":"ContainerStarted","Data":"b84e20aed0bb51b83278442cbf36f7ff337f6b69cbc854a77b25c64ad69bb130"} Jan 27 08:59:38 crc kubenswrapper[4985]: I0127 08:59:38.252992 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qcvb7" podStartSLOduration=4.806168701 podStartE2EDuration="7.252971304s" podCreationTimestamp="2026-01-27 08:59:31 +0000 UTC" firstStartedPulling="2026-01-27 08:59:35.17340868 +0000 UTC m=+359.464503521" lastFinishedPulling="2026-01-27 08:59:37.620211273 +0000 UTC m=+361.911306124" observedRunningTime="2026-01-27 08:59:38.243112308 +0000 UTC m=+362.534207149" watchObservedRunningTime="2026-01-27 08:59:38.252971304 +0000 UTC m=+362.544066145" Jan 27 08:59:38 crc kubenswrapper[4985]: I0127 08:59:38.264671 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kxpvj" podStartSLOduration=3.443951706 podStartE2EDuration="9.264644331s" 
podCreationTimestamp="2026-01-27 08:59:29 +0000 UTC" firstStartedPulling="2026-01-27 08:59:31.136955763 +0000 UTC m=+355.428050604" lastFinishedPulling="2026-01-27 08:59:36.957648378 +0000 UTC m=+361.248743229" observedRunningTime="2026-01-27 08:59:38.263243602 +0000 UTC m=+362.554338463" watchObservedRunningTime="2026-01-27 08:59:38.264644331 +0000 UTC m=+362.555739172" Jan 27 08:59:38 crc kubenswrapper[4985]: I0127 08:59:38.278426 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-sqmzg" Jan 27 08:59:38 crc kubenswrapper[4985]: I0127 08:59:38.295865 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8tbrf" podStartSLOduration=3.589477187 podStartE2EDuration="9.295836573s" podCreationTimestamp="2026-01-27 08:59:29 +0000 UTC" firstStartedPulling="2026-01-27 08:59:31.129575906 +0000 UTC m=+355.420670747" lastFinishedPulling="2026-01-27 08:59:36.835935292 +0000 UTC m=+361.127030133" observedRunningTime="2026-01-27 08:59:38.293390435 +0000 UTC m=+362.584485276" watchObservedRunningTime="2026-01-27 08:59:38.295836573 +0000 UTC m=+362.586931414" Jan 27 08:59:39 crc kubenswrapper[4985]: I0127 08:59:39.696778 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kxpvj" Jan 27 08:59:39 crc kubenswrapper[4985]: I0127 08:59:39.697219 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kxpvj" Jan 27 08:59:40 crc kubenswrapper[4985]: I0127 08:59:40.310674 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8tbrf" Jan 27 08:59:40 crc kubenswrapper[4985]: I0127 08:59:40.310796 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8tbrf" Jan 27 08:59:40 crc kubenswrapper[4985]: I0127 
08:59:40.375748 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8tbrf" Jan 27 08:59:40 crc kubenswrapper[4985]: I0127 08:59:40.735810 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kxpvj" podUID="4335a2c0-14aa-4423-8527-22a5fe08f48d" containerName="registry-server" probeResult="failure" output=< Jan 27 08:59:40 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 27 08:59:40 crc kubenswrapper[4985]: > Jan 27 08:59:41 crc kubenswrapper[4985]: I0127 08:59:41.828759 4985 patch_prober.go:28] interesting pod/machine-config-daemon-lp9n5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 08:59:41 crc kubenswrapper[4985]: I0127 08:59:41.828842 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 08:59:42 crc kubenswrapper[4985]: I0127 08:59:42.097800 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qcvb7" Jan 27 08:59:42 crc kubenswrapper[4985]: I0127 08:59:42.097868 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qcvb7" Jan 27 08:59:42 crc kubenswrapper[4985]: I0127 08:59:42.146328 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qcvb7" Jan 27 08:59:42 crc kubenswrapper[4985]: I0127 08:59:42.288373 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-marketplace/redhat-marketplace-qcvb7" Jan 27 08:59:44 crc kubenswrapper[4985]: I0127 08:59:44.487825 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-jn7wk" podUID="656a2bff-5cf1-426b-b879-a6c0cc1f4cb2" containerName="registry" containerID="cri-o://ee9da00f862508f7b2cc7c3642c864ad4b9275cfd1d201dd2edfc7e753e56a3f" gracePeriod=30 Jan 27 08:59:44 crc kubenswrapper[4985]: I0127 08:59:44.871692 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-jn7wk" Jan 27 08:59:44 crc kubenswrapper[4985]: I0127 08:59:44.956300 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/656a2bff-5cf1-426b-b879-a6c0cc1f4cb2-trusted-ca\") pod \"656a2bff-5cf1-426b-b879-a6c0cc1f4cb2\" (UID: \"656a2bff-5cf1-426b-b879-a6c0cc1f4cb2\") " Jan 27 08:59:44 crc kubenswrapper[4985]: I0127 08:59:44.956415 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/656a2bff-5cf1-426b-b879-a6c0cc1f4cb2-registry-tls\") pod \"656a2bff-5cf1-426b-b879-a6c0cc1f4cb2\" (UID: \"656a2bff-5cf1-426b-b879-a6c0cc1f4cb2\") " Jan 27 08:59:44 crc kubenswrapper[4985]: I0127 08:59:44.956471 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/656a2bff-5cf1-426b-b879-a6c0cc1f4cb2-registry-certificates\") pod \"656a2bff-5cf1-426b-b879-a6c0cc1f4cb2\" (UID: \"656a2bff-5cf1-426b-b879-a6c0cc1f4cb2\") " Jan 27 08:59:44 crc kubenswrapper[4985]: I0127 08:59:44.957410 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/656a2bff-5cf1-426b-b879-a6c0cc1f4cb2-installation-pull-secrets\") pod 
\"656a2bff-5cf1-426b-b879-a6c0cc1f4cb2\" (UID: \"656a2bff-5cf1-426b-b879-a6c0cc1f4cb2\") " Jan 27 08:59:44 crc kubenswrapper[4985]: I0127 08:59:44.957472 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/656a2bff-5cf1-426b-b879-a6c0cc1f4cb2-ca-trust-extracted\") pod \"656a2bff-5cf1-426b-b879-a6c0cc1f4cb2\" (UID: \"656a2bff-5cf1-426b-b879-a6c0cc1f4cb2\") " Jan 27 08:59:44 crc kubenswrapper[4985]: I0127 08:59:44.957561 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/656a2bff-5cf1-426b-b879-a6c0cc1f4cb2-bound-sa-token\") pod \"656a2bff-5cf1-426b-b879-a6c0cc1f4cb2\" (UID: \"656a2bff-5cf1-426b-b879-a6c0cc1f4cb2\") " Jan 27 08:59:44 crc kubenswrapper[4985]: I0127 08:59:44.957805 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"656a2bff-5cf1-426b-b879-a6c0cc1f4cb2\" (UID: \"656a2bff-5cf1-426b-b879-a6c0cc1f4cb2\") " Jan 27 08:59:44 crc kubenswrapper[4985]: I0127 08:59:44.957868 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9rgmd\" (UniqueName: \"kubernetes.io/projected/656a2bff-5cf1-426b-b879-a6c0cc1f4cb2-kube-api-access-9rgmd\") pod \"656a2bff-5cf1-426b-b879-a6c0cc1f4cb2\" (UID: \"656a2bff-5cf1-426b-b879-a6c0cc1f4cb2\") " Jan 27 08:59:44 crc kubenswrapper[4985]: I0127 08:59:44.957990 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/656a2bff-5cf1-426b-b879-a6c0cc1f4cb2-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "656a2bff-5cf1-426b-b879-a6c0cc1f4cb2" (UID: "656a2bff-5cf1-426b-b879-a6c0cc1f4cb2"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:59:44 crc kubenswrapper[4985]: I0127 08:59:44.958454 4985 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/656a2bff-5cf1-426b-b879-a6c0cc1f4cb2-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 27 08:59:44 crc kubenswrapper[4985]: I0127 08:59:44.958572 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/656a2bff-5cf1-426b-b879-a6c0cc1f4cb2-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "656a2bff-5cf1-426b-b879-a6c0cc1f4cb2" (UID: "656a2bff-5cf1-426b-b879-a6c0cc1f4cb2"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:59:44 crc kubenswrapper[4985]: I0127 08:59:44.964287 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/656a2bff-5cf1-426b-b879-a6c0cc1f4cb2-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "656a2bff-5cf1-426b-b879-a6c0cc1f4cb2" (UID: "656a2bff-5cf1-426b-b879-a6c0cc1f4cb2"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:59:44 crc kubenswrapper[4985]: I0127 08:59:44.964721 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/656a2bff-5cf1-426b-b879-a6c0cc1f4cb2-kube-api-access-9rgmd" (OuterVolumeSpecName: "kube-api-access-9rgmd") pod "656a2bff-5cf1-426b-b879-a6c0cc1f4cb2" (UID: "656a2bff-5cf1-426b-b879-a6c0cc1f4cb2"). InnerVolumeSpecName "kube-api-access-9rgmd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:59:44 crc kubenswrapper[4985]: I0127 08:59:44.965821 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/656a2bff-5cf1-426b-b879-a6c0cc1f4cb2-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "656a2bff-5cf1-426b-b879-a6c0cc1f4cb2" (UID: "656a2bff-5cf1-426b-b879-a6c0cc1f4cb2"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:59:44 crc kubenswrapper[4985]: I0127 08:59:44.966203 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/656a2bff-5cf1-426b-b879-a6c0cc1f4cb2-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "656a2bff-5cf1-426b-b879-a6c0cc1f4cb2" (UID: "656a2bff-5cf1-426b-b879-a6c0cc1f4cb2"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:59:44 crc kubenswrapper[4985]: I0127 08:59:44.973188 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "656a2bff-5cf1-426b-b879-a6c0cc1f4cb2" (UID: "656a2bff-5cf1-426b-b879-a6c0cc1f4cb2"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 27 08:59:44 crc kubenswrapper[4985]: I0127 08:59:44.973552 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/656a2bff-5cf1-426b-b879-a6c0cc1f4cb2-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "656a2bff-5cf1-426b-b879-a6c0cc1f4cb2" (UID: "656a2bff-5cf1-426b-b879-a6c0cc1f4cb2"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:59:45 crc kubenswrapper[4985]: I0127 08:59:45.059829 4985 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/656a2bff-5cf1-426b-b879-a6c0cc1f4cb2-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 27 08:59:45 crc kubenswrapper[4985]: I0127 08:59:45.059881 4985 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/656a2bff-5cf1-426b-b879-a6c0cc1f4cb2-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 27 08:59:45 crc kubenswrapper[4985]: I0127 08:59:45.059897 4985 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/656a2bff-5cf1-426b-b879-a6c0cc1f4cb2-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 27 08:59:45 crc kubenswrapper[4985]: I0127 08:59:45.059909 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9rgmd\" (UniqueName: \"kubernetes.io/projected/656a2bff-5cf1-426b-b879-a6c0cc1f4cb2-kube-api-access-9rgmd\") on node \"crc\" DevicePath \"\"" Jan 27 08:59:45 crc kubenswrapper[4985]: I0127 08:59:45.059922 4985 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/656a2bff-5cf1-426b-b879-a6c0cc1f4cb2-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 08:59:45 crc kubenswrapper[4985]: I0127 08:59:45.059935 4985 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/656a2bff-5cf1-426b-b879-a6c0cc1f4cb2-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 27 08:59:45 crc kubenswrapper[4985]: I0127 08:59:45.265971 4985 generic.go:334] "Generic (PLEG): container finished" podID="656a2bff-5cf1-426b-b879-a6c0cc1f4cb2" containerID="ee9da00f862508f7b2cc7c3642c864ad4b9275cfd1d201dd2edfc7e753e56a3f" exitCode=0 Jan 27 08:59:45 crc kubenswrapper[4985]: I0127 
08:59:45.266020 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-jn7wk" event={"ID":"656a2bff-5cf1-426b-b879-a6c0cc1f4cb2","Type":"ContainerDied","Data":"ee9da00f862508f7b2cc7c3642c864ad4b9275cfd1d201dd2edfc7e753e56a3f"} Jan 27 08:59:45 crc kubenswrapper[4985]: I0127 08:59:45.266081 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-jn7wk" event={"ID":"656a2bff-5cf1-426b-b879-a6c0cc1f4cb2","Type":"ContainerDied","Data":"147791a290959b12556a5be7e5879ed46b7491118051684eb251352edd9eb8a5"} Jan 27 08:59:45 crc kubenswrapper[4985]: I0127 08:59:45.266077 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-jn7wk" Jan 27 08:59:45 crc kubenswrapper[4985]: I0127 08:59:45.266102 4985 scope.go:117] "RemoveContainer" containerID="ee9da00f862508f7b2cc7c3642c864ad4b9275cfd1d201dd2edfc7e753e56a3f" Jan 27 08:59:45 crc kubenswrapper[4985]: I0127 08:59:45.284594 4985 scope.go:117] "RemoveContainer" containerID="ee9da00f862508f7b2cc7c3642c864ad4b9275cfd1d201dd2edfc7e753e56a3f" Jan 27 08:59:45 crc kubenswrapper[4985]: E0127 08:59:45.285270 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee9da00f862508f7b2cc7c3642c864ad4b9275cfd1d201dd2edfc7e753e56a3f\": container with ID starting with ee9da00f862508f7b2cc7c3642c864ad4b9275cfd1d201dd2edfc7e753e56a3f not found: ID does not exist" containerID="ee9da00f862508f7b2cc7c3642c864ad4b9275cfd1d201dd2edfc7e753e56a3f" Jan 27 08:59:45 crc kubenswrapper[4985]: I0127 08:59:45.285314 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee9da00f862508f7b2cc7c3642c864ad4b9275cfd1d201dd2edfc7e753e56a3f"} err="failed to get container status \"ee9da00f862508f7b2cc7c3642c864ad4b9275cfd1d201dd2edfc7e753e56a3f\": rpc error: code = 
NotFound desc = could not find container \"ee9da00f862508f7b2cc7c3642c864ad4b9275cfd1d201dd2edfc7e753e56a3f\": container with ID starting with ee9da00f862508f7b2cc7c3642c864ad4b9275cfd1d201dd2edfc7e753e56a3f not found: ID does not exist" Jan 27 08:59:45 crc kubenswrapper[4985]: I0127 08:59:45.298064 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-jn7wk"] Jan 27 08:59:45 crc kubenswrapper[4985]: I0127 08:59:45.305558 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-jn7wk"] Jan 27 08:59:46 crc kubenswrapper[4985]: I0127 08:59:46.459279 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="656a2bff-5cf1-426b-b879-a6c0cc1f4cb2" path="/var/lib/kubelet/pods/656a2bff-5cf1-426b-b879-a6c0cc1f4cb2/volumes" Jan 27 08:59:49 crc kubenswrapper[4985]: I0127 08:59:49.745831 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kxpvj" Jan 27 08:59:49 crc kubenswrapper[4985]: I0127 08:59:49.785334 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kxpvj" Jan 27 08:59:50 crc kubenswrapper[4985]: I0127 08:59:50.351936 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8tbrf" Jan 27 09:00:00 crc kubenswrapper[4985]: I0127 09:00:00.157723 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491740-bmqp9"] Jan 27 09:00:00 crc kubenswrapper[4985]: E0127 09:00:00.158906 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="656a2bff-5cf1-426b-b879-a6c0cc1f4cb2" containerName="registry" Jan 27 09:00:00 crc kubenswrapper[4985]: I0127 09:00:00.158926 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="656a2bff-5cf1-426b-b879-a6c0cc1f4cb2" containerName="registry" Jan 27 
09:00:00 crc kubenswrapper[4985]: I0127 09:00:00.159103 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="656a2bff-5cf1-426b-b879-a6c0cc1f4cb2" containerName="registry" Jan 27 09:00:00 crc kubenswrapper[4985]: I0127 09:00:00.159753 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491740-bmqp9" Jan 27 09:00:00 crc kubenswrapper[4985]: I0127 09:00:00.162780 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 09:00:00 crc kubenswrapper[4985]: I0127 09:00:00.168146 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 09:00:00 crc kubenswrapper[4985]: I0127 09:00:00.168786 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491740-bmqp9"] Jan 27 09:00:00 crc kubenswrapper[4985]: I0127 09:00:00.308697 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/658aa687-e743-496d-8f4e-ea241c303e72-config-volume\") pod \"collect-profiles-29491740-bmqp9\" (UID: \"658aa687-e743-496d-8f4e-ea241c303e72\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491740-bmqp9" Jan 27 09:00:00 crc kubenswrapper[4985]: I0127 09:00:00.308952 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/658aa687-e743-496d-8f4e-ea241c303e72-secret-volume\") pod \"collect-profiles-29491740-bmqp9\" (UID: \"658aa687-e743-496d-8f4e-ea241c303e72\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491740-bmqp9" Jan 27 09:00:00 crc kubenswrapper[4985]: I0127 09:00:00.309015 4985 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4zns\" (UniqueName: \"kubernetes.io/projected/658aa687-e743-496d-8f4e-ea241c303e72-kube-api-access-b4zns\") pod \"collect-profiles-29491740-bmqp9\" (UID: \"658aa687-e743-496d-8f4e-ea241c303e72\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491740-bmqp9" Jan 27 09:00:00 crc kubenswrapper[4985]: I0127 09:00:00.410136 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/658aa687-e743-496d-8f4e-ea241c303e72-config-volume\") pod \"collect-profiles-29491740-bmqp9\" (UID: \"658aa687-e743-496d-8f4e-ea241c303e72\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491740-bmqp9" Jan 27 09:00:00 crc kubenswrapper[4985]: I0127 09:00:00.410240 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/658aa687-e743-496d-8f4e-ea241c303e72-secret-volume\") pod \"collect-profiles-29491740-bmqp9\" (UID: \"658aa687-e743-496d-8f4e-ea241c303e72\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491740-bmqp9" Jan 27 09:00:00 crc kubenswrapper[4985]: I0127 09:00:00.410274 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4zns\" (UniqueName: \"kubernetes.io/projected/658aa687-e743-496d-8f4e-ea241c303e72-kube-api-access-b4zns\") pod \"collect-profiles-29491740-bmqp9\" (UID: \"658aa687-e743-496d-8f4e-ea241c303e72\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491740-bmqp9" Jan 27 09:00:00 crc kubenswrapper[4985]: I0127 09:00:00.411677 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/658aa687-e743-496d-8f4e-ea241c303e72-config-volume\") pod \"collect-profiles-29491740-bmqp9\" (UID: \"658aa687-e743-496d-8f4e-ea241c303e72\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29491740-bmqp9" Jan 27 09:00:00 crc kubenswrapper[4985]: I0127 09:00:00.424493 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/658aa687-e743-496d-8f4e-ea241c303e72-secret-volume\") pod \"collect-profiles-29491740-bmqp9\" (UID: \"658aa687-e743-496d-8f4e-ea241c303e72\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491740-bmqp9" Jan 27 09:00:00 crc kubenswrapper[4985]: I0127 09:00:00.428905 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4zns\" (UniqueName: \"kubernetes.io/projected/658aa687-e743-496d-8f4e-ea241c303e72-kube-api-access-b4zns\") pod \"collect-profiles-29491740-bmqp9\" (UID: \"658aa687-e743-496d-8f4e-ea241c303e72\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491740-bmqp9" Jan 27 09:00:00 crc kubenswrapper[4985]: I0127 09:00:00.481624 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491740-bmqp9" Jan 27 09:00:00 crc kubenswrapper[4985]: I0127 09:00:00.691968 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491740-bmqp9"] Jan 27 09:00:01 crc kubenswrapper[4985]: I0127 09:00:01.383562 4985 generic.go:334] "Generic (PLEG): container finished" podID="658aa687-e743-496d-8f4e-ea241c303e72" containerID="afb3aadd57d9e1949ca538848d23977b32a10fc783dcc295f34050f038ff7b2e" exitCode=0 Jan 27 09:00:01 crc kubenswrapper[4985]: I0127 09:00:01.383647 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491740-bmqp9" event={"ID":"658aa687-e743-496d-8f4e-ea241c303e72","Type":"ContainerDied","Data":"afb3aadd57d9e1949ca538848d23977b32a10fc783dcc295f34050f038ff7b2e"} Jan 27 09:00:01 crc kubenswrapper[4985]: I0127 09:00:01.383997 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491740-bmqp9" event={"ID":"658aa687-e743-496d-8f4e-ea241c303e72","Type":"ContainerStarted","Data":"fea90d54848492b39eac9a045aa10fa63171a8a7950c26ef03581f65780c8def"} Jan 27 09:00:02 crc kubenswrapper[4985]: I0127 09:00:02.678768 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491740-bmqp9" Jan 27 09:00:02 crc kubenswrapper[4985]: I0127 09:00:02.743490 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/658aa687-e743-496d-8f4e-ea241c303e72-config-volume\") pod \"658aa687-e743-496d-8f4e-ea241c303e72\" (UID: \"658aa687-e743-496d-8f4e-ea241c303e72\") " Jan 27 09:00:02 crc kubenswrapper[4985]: I0127 09:00:02.743674 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/658aa687-e743-496d-8f4e-ea241c303e72-secret-volume\") pod \"658aa687-e743-496d-8f4e-ea241c303e72\" (UID: \"658aa687-e743-496d-8f4e-ea241c303e72\") " Jan 27 09:00:02 crc kubenswrapper[4985]: I0127 09:00:02.743711 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4zns\" (UniqueName: \"kubernetes.io/projected/658aa687-e743-496d-8f4e-ea241c303e72-kube-api-access-b4zns\") pod \"658aa687-e743-496d-8f4e-ea241c303e72\" (UID: \"658aa687-e743-496d-8f4e-ea241c303e72\") " Jan 27 09:00:02 crc kubenswrapper[4985]: I0127 09:00:02.744601 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/658aa687-e743-496d-8f4e-ea241c303e72-config-volume" (OuterVolumeSpecName: "config-volume") pod "658aa687-e743-496d-8f4e-ea241c303e72" (UID: "658aa687-e743-496d-8f4e-ea241c303e72"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:00:02 crc kubenswrapper[4985]: I0127 09:00:02.750918 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/658aa687-e743-496d-8f4e-ea241c303e72-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "658aa687-e743-496d-8f4e-ea241c303e72" (UID: "658aa687-e743-496d-8f4e-ea241c303e72"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:00:02 crc kubenswrapper[4985]: I0127 09:00:02.751773 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/658aa687-e743-496d-8f4e-ea241c303e72-kube-api-access-b4zns" (OuterVolumeSpecName: "kube-api-access-b4zns") pod "658aa687-e743-496d-8f4e-ea241c303e72" (UID: "658aa687-e743-496d-8f4e-ea241c303e72"). InnerVolumeSpecName "kube-api-access-b4zns". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:00:02 crc kubenswrapper[4985]: I0127 09:00:02.845794 4985 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/658aa687-e743-496d-8f4e-ea241c303e72-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 09:00:02 crc kubenswrapper[4985]: I0127 09:00:02.845839 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b4zns\" (UniqueName: \"kubernetes.io/projected/658aa687-e743-496d-8f4e-ea241c303e72-kube-api-access-b4zns\") on node \"crc\" DevicePath \"\"" Jan 27 09:00:02 crc kubenswrapper[4985]: I0127 09:00:02.845852 4985 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/658aa687-e743-496d-8f4e-ea241c303e72-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 09:00:03 crc kubenswrapper[4985]: I0127 09:00:03.395091 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491740-bmqp9" event={"ID":"658aa687-e743-496d-8f4e-ea241c303e72","Type":"ContainerDied","Data":"fea90d54848492b39eac9a045aa10fa63171a8a7950c26ef03581f65780c8def"} Jan 27 09:00:03 crc kubenswrapper[4985]: I0127 09:00:03.395162 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fea90d54848492b39eac9a045aa10fa63171a8a7950c26ef03581f65780c8def" Jan 27 09:00:03 crc kubenswrapper[4985]: I0127 09:00:03.395695 4985 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491740-bmqp9" Jan 27 09:00:11 crc kubenswrapper[4985]: I0127 09:00:11.828630 4985 patch_prober.go:28] interesting pod/machine-config-daemon-lp9n5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 09:00:11 crc kubenswrapper[4985]: I0127 09:00:11.829423 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 09:00:41 crc kubenswrapper[4985]: I0127 09:00:41.828045 4985 patch_prober.go:28] interesting pod/machine-config-daemon-lp9n5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 09:00:41 crc kubenswrapper[4985]: I0127 09:00:41.828917 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 09:00:41 crc kubenswrapper[4985]: I0127 09:00:41.828970 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" Jan 27 09:00:41 crc kubenswrapper[4985]: I0127 09:00:41.829840 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"57cfb1638f01041a813f2c95dc8e63d84098f3d36598d8ee4094d6434a454c0f"} pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 09:00:41 crc kubenswrapper[4985]: I0127 09:00:41.829948 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" containerName="machine-config-daemon" containerID="cri-o://57cfb1638f01041a813f2c95dc8e63d84098f3d36598d8ee4094d6434a454c0f" gracePeriod=600 Jan 27 09:00:42 crc kubenswrapper[4985]: I0127 09:00:42.689295 4985 generic.go:334] "Generic (PLEG): container finished" podID="c066dd2f-48d4-4f4f-935d-0e772678e610" containerID="57cfb1638f01041a813f2c95dc8e63d84098f3d36598d8ee4094d6434a454c0f" exitCode=0 Jan 27 09:00:42 crc kubenswrapper[4985]: I0127 09:00:42.689351 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" event={"ID":"c066dd2f-48d4-4f4f-935d-0e772678e610","Type":"ContainerDied","Data":"57cfb1638f01041a813f2c95dc8e63d84098f3d36598d8ee4094d6434a454c0f"} Jan 27 09:00:42 crc kubenswrapper[4985]: I0127 09:00:42.690726 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" event={"ID":"c066dd2f-48d4-4f4f-935d-0e772678e610","Type":"ContainerStarted","Data":"9feadc3b02691252615c9433b9fe2d9d45af231376e52663f8b1f7a17b547166"} Jan 27 09:00:42 crc kubenswrapper[4985]: I0127 09:00:42.690773 4985 scope.go:117] "RemoveContainer" containerID="4d6574971ded4a24364f08396c4f0523ddfd74f76d0bb386072ffb86c812d7da" Jan 27 09:03:11 crc kubenswrapper[4985]: I0127 09:03:11.828819 4985 patch_prober.go:28] interesting pod/machine-config-daemon-lp9n5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 09:03:11 crc kubenswrapper[4985]: I0127 09:03:11.829440 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 09:03:41 crc kubenswrapper[4985]: I0127 09:03:41.828422 4985 patch_prober.go:28] interesting pod/machine-config-daemon-lp9n5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 09:03:41 crc kubenswrapper[4985]: I0127 09:03:41.829674 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 09:04:11 crc kubenswrapper[4985]: I0127 09:04:11.828129 4985 patch_prober.go:28] interesting pod/machine-config-daemon-lp9n5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 09:04:11 crc kubenswrapper[4985]: I0127 09:04:11.828750 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" Jan 27 09:04:11 crc kubenswrapper[4985]: I0127 09:04:11.828797 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" Jan 27 09:04:11 crc kubenswrapper[4985]: I0127 09:04:11.829426 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9feadc3b02691252615c9433b9fe2d9d45af231376e52663f8b1f7a17b547166"} pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 09:04:11 crc kubenswrapper[4985]: I0127 09:04:11.829485 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" containerName="machine-config-daemon" containerID="cri-o://9feadc3b02691252615c9433b9fe2d9d45af231376e52663f8b1f7a17b547166" gracePeriod=600 Jan 27 09:04:11 crc kubenswrapper[4985]: I0127 09:04:11.987749 4985 generic.go:334] "Generic (PLEG): container finished" podID="c066dd2f-48d4-4f4f-935d-0e772678e610" containerID="9feadc3b02691252615c9433b9fe2d9d45af231376e52663f8b1f7a17b547166" exitCode=0 Jan 27 09:04:11 crc kubenswrapper[4985]: I0127 09:04:11.987824 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" event={"ID":"c066dd2f-48d4-4f4f-935d-0e772678e610","Type":"ContainerDied","Data":"9feadc3b02691252615c9433b9fe2d9d45af231376e52663f8b1f7a17b547166"} Jan 27 09:04:11 crc kubenswrapper[4985]: I0127 09:04:11.988369 4985 scope.go:117] "RemoveContainer" containerID="57cfb1638f01041a813f2c95dc8e63d84098f3d36598d8ee4094d6434a454c0f" Jan 27 09:04:12 crc kubenswrapper[4985]: I0127 09:04:12.996699 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" event={"ID":"c066dd2f-48d4-4f4f-935d-0e772678e610","Type":"ContainerStarted","Data":"0e4881b17c436c59c3960f9c1b311810a8744ae3641df94bf63c98dbfa41b302"} Jan 27 09:05:05 crc kubenswrapper[4985]: I0127 09:05:05.172268 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-h9fn8"] Jan 27 09:05:05 crc kubenswrapper[4985]: E0127 09:05:05.173103 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="658aa687-e743-496d-8f4e-ea241c303e72" containerName="collect-profiles" Jan 27 09:05:05 crc kubenswrapper[4985]: I0127 09:05:05.173119 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="658aa687-e743-496d-8f4e-ea241c303e72" containerName="collect-profiles" Jan 27 09:05:05 crc kubenswrapper[4985]: I0127 09:05:05.173265 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="658aa687-e743-496d-8f4e-ea241c303e72" containerName="collect-profiles" Jan 27 09:05:05 crc kubenswrapper[4985]: I0127 09:05:05.173726 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-h9fn8" Jan 27 09:05:05 crc kubenswrapper[4985]: I0127 09:05:05.175388 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 27 09:05:05 crc kubenswrapper[4985]: I0127 09:05:05.175435 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 27 09:05:05 crc kubenswrapper[4985]: I0127 09:05:05.175809 4985 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-lrcq8" Jan 27 09:05:05 crc kubenswrapper[4985]: I0127 09:05:05.181904 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-h9fn8"] Jan 27 09:05:05 crc kubenswrapper[4985]: I0127 09:05:05.196202 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-gzn5m"] Jan 27 09:05:05 crc kubenswrapper[4985]: I0127 09:05:05.197029 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-gzn5m" Jan 27 09:05:05 crc kubenswrapper[4985]: I0127 09:05:05.198614 4985 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-2w5zv" Jan 27 09:05:05 crc kubenswrapper[4985]: I0127 09:05:05.205421 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-t2dhh"] Jan 27 09:05:05 crc kubenswrapper[4985]: I0127 09:05:05.206213 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-t2dhh" Jan 27 09:05:05 crc kubenswrapper[4985]: I0127 09:05:05.208708 4985 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-z7mzq" Jan 27 09:05:05 crc kubenswrapper[4985]: I0127 09:05:05.210680 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9km22\" (UniqueName: \"kubernetes.io/projected/481458b3-b470-4d73-b0bf-9053f8605b8a-kube-api-access-9km22\") pod \"cert-manager-webhook-687f57d79b-t2dhh\" (UID: \"481458b3-b470-4d73-b0bf-9053f8605b8a\") " pod="cert-manager/cert-manager-webhook-687f57d79b-t2dhh" Jan 27 09:05:05 crc kubenswrapper[4985]: I0127 09:05:05.210748 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4jjk\" (UniqueName: \"kubernetes.io/projected/382caf51-f3a3-47a2-adf8-e8d2387d245a-kube-api-access-s4jjk\") pod \"cert-manager-cainjector-cf98fcc89-h9fn8\" (UID: \"382caf51-f3a3-47a2-adf8-e8d2387d245a\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-h9fn8" Jan 27 09:05:05 crc kubenswrapper[4985]: I0127 09:05:05.210834 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvsqk\" (UniqueName: \"kubernetes.io/projected/1b8dc334-24d3-4a99-8130-e07eb6d70ea5-kube-api-access-zvsqk\") pod \"cert-manager-858654f9db-gzn5m\" (UID: \"1b8dc334-24d3-4a99-8130-e07eb6d70ea5\") " pod="cert-manager/cert-manager-858654f9db-gzn5m" Jan 27 09:05:05 crc kubenswrapper[4985]: I0127 09:05:05.220444 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-t2dhh"] Jan 27 09:05:05 crc kubenswrapper[4985]: I0127 09:05:05.236654 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-gzn5m"] Jan 27 09:05:05 crc kubenswrapper[4985]: I0127 09:05:05.311645 4985 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvsqk\" (UniqueName: \"kubernetes.io/projected/1b8dc334-24d3-4a99-8130-e07eb6d70ea5-kube-api-access-zvsqk\") pod \"cert-manager-858654f9db-gzn5m\" (UID: \"1b8dc334-24d3-4a99-8130-e07eb6d70ea5\") " pod="cert-manager/cert-manager-858654f9db-gzn5m" Jan 27 09:05:05 crc kubenswrapper[4985]: I0127 09:05:05.311704 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9km22\" (UniqueName: \"kubernetes.io/projected/481458b3-b470-4d73-b0bf-9053f8605b8a-kube-api-access-9km22\") pod \"cert-manager-webhook-687f57d79b-t2dhh\" (UID: \"481458b3-b470-4d73-b0bf-9053f8605b8a\") " pod="cert-manager/cert-manager-webhook-687f57d79b-t2dhh" Jan 27 09:05:05 crc kubenswrapper[4985]: I0127 09:05:05.311774 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4jjk\" (UniqueName: \"kubernetes.io/projected/382caf51-f3a3-47a2-adf8-e8d2387d245a-kube-api-access-s4jjk\") pod \"cert-manager-cainjector-cf98fcc89-h9fn8\" (UID: \"382caf51-f3a3-47a2-adf8-e8d2387d245a\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-h9fn8" Jan 27 09:05:05 crc kubenswrapper[4985]: I0127 09:05:05.330597 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvsqk\" (UniqueName: \"kubernetes.io/projected/1b8dc334-24d3-4a99-8130-e07eb6d70ea5-kube-api-access-zvsqk\") pod \"cert-manager-858654f9db-gzn5m\" (UID: \"1b8dc334-24d3-4a99-8130-e07eb6d70ea5\") " pod="cert-manager/cert-manager-858654f9db-gzn5m" Jan 27 09:05:05 crc kubenswrapper[4985]: I0127 09:05:05.330822 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9km22\" (UniqueName: \"kubernetes.io/projected/481458b3-b470-4d73-b0bf-9053f8605b8a-kube-api-access-9km22\") pod \"cert-manager-webhook-687f57d79b-t2dhh\" (UID: \"481458b3-b470-4d73-b0bf-9053f8605b8a\") " 
pod="cert-manager/cert-manager-webhook-687f57d79b-t2dhh" Jan 27 09:05:05 crc kubenswrapper[4985]: I0127 09:05:05.331324 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4jjk\" (UniqueName: \"kubernetes.io/projected/382caf51-f3a3-47a2-adf8-e8d2387d245a-kube-api-access-s4jjk\") pod \"cert-manager-cainjector-cf98fcc89-h9fn8\" (UID: \"382caf51-f3a3-47a2-adf8-e8d2387d245a\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-h9fn8" Jan 27 09:05:05 crc kubenswrapper[4985]: I0127 09:05:05.496062 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-h9fn8" Jan 27 09:05:05 crc kubenswrapper[4985]: I0127 09:05:05.519968 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-gzn5m" Jan 27 09:05:05 crc kubenswrapper[4985]: I0127 09:05:05.529572 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-t2dhh" Jan 27 09:05:05 crc kubenswrapper[4985]: I0127 09:05:05.756435 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-h9fn8"] Jan 27 09:05:05 crc kubenswrapper[4985]: I0127 09:05:05.768464 4985 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 09:05:05 crc kubenswrapper[4985]: I0127 09:05:05.811699 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-t2dhh"] Jan 27 09:05:05 crc kubenswrapper[4985]: W0127 09:05:05.815131 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod481458b3_b470_4d73_b0bf_9053f8605b8a.slice/crio-41f86916889e922fb49665d1e71fe062de4ba32ccea550e2682f9a6828300f93 WatchSource:0}: Error finding container 41f86916889e922fb49665d1e71fe062de4ba32ccea550e2682f9a6828300f93: 
Status 404 returned error can't find the container with id 41f86916889e922fb49665d1e71fe062de4ba32ccea550e2682f9a6828300f93 Jan 27 09:05:05 crc kubenswrapper[4985]: I0127 09:05:05.868466 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-gzn5m"] Jan 27 09:05:05 crc kubenswrapper[4985]: W0127 09:05:05.874325 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1b8dc334_24d3_4a99_8130_e07eb6d70ea5.slice/crio-c3d14de03aa9a0dc654d0267617c1e1178630e7fd82d6b7d1c55b2d523387c93 WatchSource:0}: Error finding container c3d14de03aa9a0dc654d0267617c1e1178630e7fd82d6b7d1c55b2d523387c93: Status 404 returned error can't find the container with id c3d14de03aa9a0dc654d0267617c1e1178630e7fd82d6b7d1c55b2d523387c93 Jan 27 09:05:06 crc kubenswrapper[4985]: I0127 09:05:06.272913 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-t2dhh" event={"ID":"481458b3-b470-4d73-b0bf-9053f8605b8a","Type":"ContainerStarted","Data":"41f86916889e922fb49665d1e71fe062de4ba32ccea550e2682f9a6828300f93"} Jan 27 09:05:06 crc kubenswrapper[4985]: I0127 09:05:06.274161 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-gzn5m" event={"ID":"1b8dc334-24d3-4a99-8130-e07eb6d70ea5","Type":"ContainerStarted","Data":"c3d14de03aa9a0dc654d0267617c1e1178630e7fd82d6b7d1c55b2d523387c93"} Jan 27 09:05:06 crc kubenswrapper[4985]: I0127 09:05:06.274991 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-h9fn8" event={"ID":"382caf51-f3a3-47a2-adf8-e8d2387d245a","Type":"ContainerStarted","Data":"d7821a81761319e4ea8ee06c3387b0c1d867e0eb20f73f7c33e425dcfe927540"} Jan 27 09:05:12 crc kubenswrapper[4985]: I0127 09:05:12.310915 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-h9fn8" 
event={"ID":"382caf51-f3a3-47a2-adf8-e8d2387d245a","Type":"ContainerStarted","Data":"da1f899aa9018c5a2e5ddff5ffde39801f197cbe2a51056f4e25b5d76b88f052"} Jan 27 09:05:12 crc kubenswrapper[4985]: I0127 09:05:12.312222 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-t2dhh" event={"ID":"481458b3-b470-4d73-b0bf-9053f8605b8a","Type":"ContainerStarted","Data":"15b5c9c7c393324aabc328c5c28419be4222a17175396eebcf6a6d4161c6c31a"} Jan 27 09:05:12 crc kubenswrapper[4985]: I0127 09:05:12.312974 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-t2dhh" Jan 27 09:05:12 crc kubenswrapper[4985]: I0127 09:05:12.335098 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-h9fn8" podStartSLOduration=1.6800784229999999 podStartE2EDuration="7.335046799s" podCreationTimestamp="2026-01-27 09:05:05 +0000 UTC" firstStartedPulling="2026-01-27 09:05:05.768051433 +0000 UTC m=+690.059146274" lastFinishedPulling="2026-01-27 09:05:11.423019809 +0000 UTC m=+695.714114650" observedRunningTime="2026-01-27 09:05:12.328720084 +0000 UTC m=+696.619814945" watchObservedRunningTime="2026-01-27 09:05:12.335046799 +0000 UTC m=+696.626141630" Jan 27 09:05:12 crc kubenswrapper[4985]: I0127 09:05:12.355683 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-t2dhh" podStartSLOduration=1.678854958 podStartE2EDuration="7.355649807s" podCreationTimestamp="2026-01-27 09:05:05 +0000 UTC" firstStartedPulling="2026-01-27 09:05:05.818852265 +0000 UTC m=+690.109947106" lastFinishedPulling="2026-01-27 09:05:11.495647114 +0000 UTC m=+695.786741955" observedRunningTime="2026-01-27 09:05:12.352830899 +0000 UTC m=+696.643925750" watchObservedRunningTime="2026-01-27 09:05:12.355649807 +0000 UTC m=+696.646744668" Jan 27 09:05:13 crc kubenswrapper[4985]: I0127 
09:05:13.321932 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-gzn5m" event={"ID":"1b8dc334-24d3-4a99-8130-e07eb6d70ea5","Type":"ContainerStarted","Data":"a832f7d74ebd398eb0371a315b8f38e96acf6cb8cf019b56278c21d5858cb6e7"} Jan 27 09:05:13 crc kubenswrapper[4985]: I0127 09:05:13.341734 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-gzn5m" podStartSLOduration=1.524642582 podStartE2EDuration="8.341707379s" podCreationTimestamp="2026-01-27 09:05:05 +0000 UTC" firstStartedPulling="2026-01-27 09:05:05.876272399 +0000 UTC m=+690.167367240" lastFinishedPulling="2026-01-27 09:05:12.693337196 +0000 UTC m=+696.984432037" observedRunningTime="2026-01-27 09:05:13.337350339 +0000 UTC m=+697.628445190" watchObservedRunningTime="2026-01-27 09:05:13.341707379 +0000 UTC m=+697.632802220" Jan 27 09:05:20 crc kubenswrapper[4985]: I0127 09:05:20.533062 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-t2dhh" Jan 27 09:05:35 crc kubenswrapper[4985]: I0127 09:05:35.451376 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-kqdf4"] Jan 27 09:05:35 crc kubenswrapper[4985]: I0127 09:05:35.452937 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" podUID="c6239c91-d93d-4db8-ac4b-d44ddbc7c100" containerName="ovn-controller" containerID="cri-o://1bfc598056438a398f1ea20e047af950aa776a7e43bdb694728c1c138dff6440" gracePeriod=30 Jan 27 09:05:35 crc kubenswrapper[4985]: I0127 09:05:35.453020 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" podUID="c6239c91-d93d-4db8-ac4b-d44ddbc7c100" containerName="northd" containerID="cri-o://4dbb8096184a88c573060b14b7b5cac9a5abdc5af97ad7efdc50ba216e77522c" gracePeriod=30 Jan 27 
09:05:35 crc kubenswrapper[4985]: I0127 09:05:35.453070 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" podUID="c6239c91-d93d-4db8-ac4b-d44ddbc7c100" containerName="sbdb" containerID="cri-o://2a1f1e90a882fa697c344cb87db77d2f0b02b33920f6555ca549ad51b1fb5c8a" gracePeriod=30 Jan 27 09:05:35 crc kubenswrapper[4985]: I0127 09:05:35.453136 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" podUID="c6239c91-d93d-4db8-ac4b-d44ddbc7c100" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://e53f9d50e5099f9c55a0bf8a63c31bf7687c7b9df334251189af1c4c02df5dd6" gracePeriod=30 Jan 27 09:05:35 crc kubenswrapper[4985]: I0127 09:05:35.453110 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" podUID="c6239c91-d93d-4db8-ac4b-d44ddbc7c100" containerName="nbdb" containerID="cri-o://4b176f2daca0d26fc9e3d30a8547b5526c90221ce2932c01c876988e8606b4dd" gracePeriod=30 Jan 27 09:05:35 crc kubenswrapper[4985]: I0127 09:05:35.453147 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" podUID="c6239c91-d93d-4db8-ac4b-d44ddbc7c100" containerName="ovn-acl-logging" containerID="cri-o://f164f612d1435a7f2a6677cb810bc836a44faa77952909e4290e31806ae631ff" gracePeriod=30 Jan 27 09:05:35 crc kubenswrapper[4985]: I0127 09:05:35.453075 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" podUID="c6239c91-d93d-4db8-ac4b-d44ddbc7c100" containerName="kube-rbac-proxy-node" containerID="cri-o://740b7fd6437a51492236c929096639016d34d684d2742404a428355b3d9ee51e" gracePeriod=30 Jan 27 09:05:35 crc kubenswrapper[4985]: I0127 09:05:35.490095 4985 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" podUID="c6239c91-d93d-4db8-ac4b-d44ddbc7c100" containerName="ovnkube-controller" containerID="cri-o://7571f3c4f07bff9f8b363bb8ec376bdfc098c2d355269649ea21bae387b5efcc" gracePeriod=30 Jan 27 09:05:35 crc kubenswrapper[4985]: E0127 09:05:35.864396 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7571f3c4f07bff9f8b363bb8ec376bdfc098c2d355269649ea21bae387b5efcc is running failed: container process not found" containerID="7571f3c4f07bff9f8b363bb8ec376bdfc098c2d355269649ea21bae387b5efcc" cmd=["/bin/bash","-c","#!/bin/bash\ntest -f /etc/cni/net.d/10-ovn-kubernetes.conf\n"] Jan 27 09:05:35 crc kubenswrapper[4985]: E0127 09:05:35.865050 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7571f3c4f07bff9f8b363bb8ec376bdfc098c2d355269649ea21bae387b5efcc is running failed: container process not found" containerID="7571f3c4f07bff9f8b363bb8ec376bdfc098c2d355269649ea21bae387b5efcc" cmd=["/bin/bash","-c","#!/bin/bash\ntest -f /etc/cni/net.d/10-ovn-kubernetes.conf\n"] Jan 27 09:05:35 crc kubenswrapper[4985]: E0127 09:05:35.865698 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7571f3c4f07bff9f8b363bb8ec376bdfc098c2d355269649ea21bae387b5efcc is running failed: container process not found" containerID="7571f3c4f07bff9f8b363bb8ec376bdfc098c2d355269649ea21bae387b5efcc" cmd=["/bin/bash","-c","#!/bin/bash\ntest -f /etc/cni/net.d/10-ovn-kubernetes.conf\n"] Jan 27 09:05:35 crc kubenswrapper[4985]: E0127 09:05:35.866417 4985 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7571f3c4f07bff9f8b363bb8ec376bdfc098c2d355269649ea21bae387b5efcc is running failed: 
container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" podUID="c6239c91-d93d-4db8-ac4b-d44ddbc7c100" containerName="ovnkube-controller" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.412367 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kqdf4_c6239c91-d93d-4db8-ac4b-d44ddbc7c100/ovnkube-controller/3.log" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.416299 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kqdf4_c6239c91-d93d-4db8-ac4b-d44ddbc7c100/ovn-acl-logging/0.log" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.417429 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kqdf4_c6239c91-d93d-4db8-ac4b-d44ddbc7c100/ovn-controller/0.log" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.418261 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.465943 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-var-lib-openvswitch\") pod \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\" (UID: \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\") " Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.466022 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfqq2\" (UniqueName: \"kubernetes.io/projected/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-kube-api-access-rfqq2\") pod \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\" (UID: \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\") " Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.466052 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-run-ovn\") pod \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\" (UID: \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\") " Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.466073 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "c6239c91-d93d-4db8-ac4b-d44ddbc7c100" (UID: "c6239c91-d93d-4db8-ac4b-d44ddbc7c100"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.466216 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "c6239c91-d93d-4db8-ac4b-d44ddbc7c100" (UID: "c6239c91-d93d-4db8-ac4b-d44ddbc7c100"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.466317 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-host-run-netns\") pod \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\" (UID: \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\") " Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.466380 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-run-systemd\") pod \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\" (UID: \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\") " Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.466409 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-host-var-lib-cni-networks-ovn-kubernetes\") pod \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\" (UID: \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\") " Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.466448 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-host-cni-netd\") pod \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\" (UID: \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\") " Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.466623 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "c6239c91-d93d-4db8-ac4b-d44ddbc7c100" (UID: "c6239c91-d93d-4db8-ac4b-d44ddbc7c100"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.466480 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-env-overrides\") pod \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\" (UID: \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\") " Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.466695 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-ovnkube-script-lib\") pod \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\" (UID: \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\") " Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.466788 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-host-run-ovn-kubernetes\") pod 
\"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\" (UID: \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\") " Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.466790 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "c6239c91-d93d-4db8-ac4b-d44ddbc7c100" (UID: "c6239c91-d93d-4db8-ac4b-d44ddbc7c100"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.466861 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-log-socket\") pod \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\" (UID: \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\") " Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.466922 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-host-slash\") pod \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\" (UID: \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\") " Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.466954 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-ovn-node-metrics-cert\") pod \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\" (UID: \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\") " Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.466990 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-ovnkube-config\") pod \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\" (UID: 
\"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\") " Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.467018 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-systemd-units\") pod \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\" (UID: \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\") " Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.467059 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-etc-openvswitch\") pod \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\" (UID: \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\") " Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.467100 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-host-kubelet\") pod \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\" (UID: \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\") " Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.467148 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-host-cni-bin\") pod \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\" (UID: \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\") " Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.467179 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-run-openvswitch\") pod \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\" (UID: \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\") " Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.467230 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-node-log\") pod \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\" (UID: \"c6239c91-d93d-4db8-ac4b-d44ddbc7c100\") " Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.467558 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kqdf4_c6239c91-d93d-4db8-ac4b-d44ddbc7c100/ovnkube-controller/3.log" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.467648 4985 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.467669 4985 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.467685 4985 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.467727 4985 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.466858 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "c6239c91-d93d-4db8-ac4b-d44ddbc7c100" (UID: "c6239c91-d93d-4db8-ac4b-d44ddbc7c100"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.467768 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-node-log" (OuterVolumeSpecName: "node-log") pod "c6239c91-d93d-4db8-ac4b-d44ddbc7c100" (UID: "c6239c91-d93d-4db8-ac4b-d44ddbc7c100"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.468342 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "c6239c91-d93d-4db8-ac4b-d44ddbc7c100" (UID: "c6239c91-d93d-4db8-ac4b-d44ddbc7c100"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.468372 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "c6239c91-d93d-4db8-ac4b-d44ddbc7c100" (UID: "c6239c91-d93d-4db8-ac4b-d44ddbc7c100"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.468394 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "c6239c91-d93d-4db8-ac4b-d44ddbc7c100" (UID: "c6239c91-d93d-4db8-ac4b-d44ddbc7c100"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.468412 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "c6239c91-d93d-4db8-ac4b-d44ddbc7c100" (UID: "c6239c91-d93d-4db8-ac4b-d44ddbc7c100"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.468430 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "c6239c91-d93d-4db8-ac4b-d44ddbc7c100" (UID: "c6239c91-d93d-4db8-ac4b-d44ddbc7c100"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.468452 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "c6239c91-d93d-4db8-ac4b-d44ddbc7c100" (UID: "c6239c91-d93d-4db8-ac4b-d44ddbc7c100"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.468750 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-log-socket" (OuterVolumeSpecName: "log-socket") pod "c6239c91-d93d-4db8-ac4b-d44ddbc7c100" (UID: "c6239c91-d93d-4db8-ac4b-d44ddbc7c100"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.469063 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "c6239c91-d93d-4db8-ac4b-d44ddbc7c100" (UID: "c6239c91-d93d-4db8-ac4b-d44ddbc7c100"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.469261 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "c6239c91-d93d-4db8-ac4b-d44ddbc7c100" (UID: "c6239c91-d93d-4db8-ac4b-d44ddbc7c100"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.469286 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "c6239c91-d93d-4db8-ac4b-d44ddbc7c100" (UID: "c6239c91-d93d-4db8-ac4b-d44ddbc7c100"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.469525 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-host-slash" (OuterVolumeSpecName: "host-slash") pod "c6239c91-d93d-4db8-ac4b-d44ddbc7c100" (UID: "c6239c91-d93d-4db8-ac4b-d44ddbc7c100"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.472219 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kqdf4_c6239c91-d93d-4db8-ac4b-d44ddbc7c100/ovn-acl-logging/0.log" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.473183 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kqdf4_c6239c91-d93d-4db8-ac4b-d44ddbc7c100/ovn-controller/0.log" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.473910 4985 generic.go:334] "Generic (PLEG): container finished" podID="c6239c91-d93d-4db8-ac4b-d44ddbc7c100" containerID="7571f3c4f07bff9f8b363bb8ec376bdfc098c2d355269649ea21bae387b5efcc" exitCode=0 Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.474081 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" event={"ID":"c6239c91-d93d-4db8-ac4b-d44ddbc7c100","Type":"ContainerDied","Data":"7571f3c4f07bff9f8b363bb8ec376bdfc098c2d355269649ea21bae387b5efcc"} Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.474146 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" event={"ID":"c6239c91-d93d-4db8-ac4b-d44ddbc7c100","Type":"ContainerDied","Data":"2a1f1e90a882fa697c344cb87db77d2f0b02b33920f6555ca549ad51b1fb5c8a"} Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.474173 4985 scope.go:117] "RemoveContainer" containerID="7571f3c4f07bff9f8b363bb8ec376bdfc098c2d355269649ea21bae387b5efcc" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.474398 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.474094 4985 generic.go:334] "Generic (PLEG): container finished" podID="c6239c91-d93d-4db8-ac4b-d44ddbc7c100" containerID="2a1f1e90a882fa697c344cb87db77d2f0b02b33920f6555ca549ad51b1fb5c8a" exitCode=0 Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.474452 4985 generic.go:334] "Generic (PLEG): container finished" podID="c6239c91-d93d-4db8-ac4b-d44ddbc7c100" containerID="4b176f2daca0d26fc9e3d30a8547b5526c90221ce2932c01c876988e8606b4dd" exitCode=0 Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.474470 4985 generic.go:334] "Generic (PLEG): container finished" podID="c6239c91-d93d-4db8-ac4b-d44ddbc7c100" containerID="4dbb8096184a88c573060b14b7b5cac9a5abdc5af97ad7efdc50ba216e77522c" exitCode=0 Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.474482 4985 generic.go:334] "Generic (PLEG): container finished" podID="c6239c91-d93d-4db8-ac4b-d44ddbc7c100" containerID="e53f9d50e5099f9c55a0bf8a63c31bf7687c7b9df334251189af1c4c02df5dd6" exitCode=0 Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.474493 4985 generic.go:334] "Generic (PLEG): container finished" podID="c6239c91-d93d-4db8-ac4b-d44ddbc7c100" containerID="740b7fd6437a51492236c929096639016d34d684d2742404a428355b3d9ee51e" exitCode=0 Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.474503 4985 generic.go:334] "Generic (PLEG): container finished" podID="c6239c91-d93d-4db8-ac4b-d44ddbc7c100" containerID="f164f612d1435a7f2a6677cb810bc836a44faa77952909e4290e31806ae631ff" exitCode=143 Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.475029 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" event={"ID":"c6239c91-d93d-4db8-ac4b-d44ddbc7c100","Type":"ContainerDied","Data":"4b176f2daca0d26fc9e3d30a8547b5526c90221ce2932c01c876988e8606b4dd"} Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.475177 4985 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" event={"ID":"c6239c91-d93d-4db8-ac4b-d44ddbc7c100","Type":"ContainerDied","Data":"4dbb8096184a88c573060b14b7b5cac9a5abdc5af97ad7efdc50ba216e77522c"} Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.475323 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" event={"ID":"c6239c91-d93d-4db8-ac4b-d44ddbc7c100","Type":"ContainerDied","Data":"e53f9d50e5099f9c55a0bf8a63c31bf7687c7b9df334251189af1c4c02df5dd6"} Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.475454 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" event={"ID":"c6239c91-d93d-4db8-ac4b-d44ddbc7c100","Type":"ContainerDied","Data":"740b7fd6437a51492236c929096639016d34d684d2742404a428355b3d9ee51e"} Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.475625 4985 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bef202d5ab4adb68897c4c8da62be022e95cd949fa91b3f80e3df449f0d72181"} Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.475755 4985 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2a1f1e90a882fa697c344cb87db77d2f0b02b33920f6555ca549ad51b1fb5c8a"} Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.475875 4985 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4b176f2daca0d26fc9e3d30a8547b5526c90221ce2932c01c876988e8606b4dd"} Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.475977 4985 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4dbb8096184a88c573060b14b7b5cac9a5abdc5af97ad7efdc50ba216e77522c"} Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.476095 4985 pod_container_deletor.go:114] "Failed to issue the 
request to remove container" containerID={"Type":"cri-o","ID":"e53f9d50e5099f9c55a0bf8a63c31bf7687c7b9df334251189af1c4c02df5dd6"} Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.476211 4985 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"740b7fd6437a51492236c929096639016d34d684d2742404a428355b3d9ee51e"} Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.476435 4985 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f164f612d1435a7f2a6677cb810bc836a44faa77952909e4290e31806ae631ff"} Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.476624 4985 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1bfc598056438a398f1ea20e047af950aa776a7e43bdb694728c1c138dff6440"} Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.476740 4985 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8c1c716f751dd0e139554e4bef4fc33694f4afa5482605f651326882af8d99c8"} Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.477093 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" event={"ID":"c6239c91-d93d-4db8-ac4b-d44ddbc7c100","Type":"ContainerDied","Data":"f164f612d1435a7f2a6677cb810bc836a44faa77952909e4290e31806ae631ff"} Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.477325 4985 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7571f3c4f07bff9f8b363bb8ec376bdfc098c2d355269649ea21bae387b5efcc"} Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.477652 4985 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bef202d5ab4adb68897c4c8da62be022e95cd949fa91b3f80e3df449f0d72181"} Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 
09:05:36.477885 4985 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2a1f1e90a882fa697c344cb87db77d2f0b02b33920f6555ca549ad51b1fb5c8a"} Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.478145 4985 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4b176f2daca0d26fc9e3d30a8547b5526c90221ce2932c01c876988e8606b4dd"} Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.478349 4985 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4dbb8096184a88c573060b14b7b5cac9a5abdc5af97ad7efdc50ba216e77522c"} Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.478599 4985 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e53f9d50e5099f9c55a0bf8a63c31bf7687c7b9df334251189af1c4c02df5dd6"} Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.479067 4985 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"740b7fd6437a51492236c929096639016d34d684d2742404a428355b3d9ee51e"} Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.479194 4985 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f164f612d1435a7f2a6677cb810bc836a44faa77952909e4290e31806ae631ff"} Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.479315 4985 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1bfc598056438a398f1ea20e047af950aa776a7e43bdb694728c1c138dff6440"} Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.479452 4985 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8c1c716f751dd0e139554e4bef4fc33694f4afa5482605f651326882af8d99c8"} Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 
09:05:36.479854 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" event={"ID":"c6239c91-d93d-4db8-ac4b-d44ddbc7c100","Type":"ContainerDied","Data":"1bfc598056438a398f1ea20e047af950aa776a7e43bdb694728c1c138dff6440"} Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.480014 4985 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7571f3c4f07bff9f8b363bb8ec376bdfc098c2d355269649ea21bae387b5efcc"} Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.480145 4985 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bef202d5ab4adb68897c4c8da62be022e95cd949fa91b3f80e3df449f0d72181"} Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.480269 4985 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2a1f1e90a882fa697c344cb87db77d2f0b02b33920f6555ca549ad51b1fb5c8a"} Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.480391 4985 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4b176f2daca0d26fc9e3d30a8547b5526c90221ce2932c01c876988e8606b4dd"} Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.480542 4985 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4dbb8096184a88c573060b14b7b5cac9a5abdc5af97ad7efdc50ba216e77522c"} Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.480681 4985 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e53f9d50e5099f9c55a0bf8a63c31bf7687c7b9df334251189af1c4c02df5dd6"} Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.480978 4985 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"740b7fd6437a51492236c929096639016d34d684d2742404a428355b3d9ee51e"} Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.481112 4985 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f164f612d1435a7f2a6677cb810bc836a44faa77952909e4290e31806ae631ff"} Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.481234 4985 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1bfc598056438a398f1ea20e047af950aa776a7e43bdb694728c1c138dff6440"} Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.481346 4985 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8c1c716f751dd0e139554e4bef4fc33694f4afa5482605f651326882af8d99c8"} Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.474537 4985 generic.go:334] "Generic (PLEG): container finished" podID="c6239c91-d93d-4db8-ac4b-d44ddbc7c100" containerID="1bfc598056438a398f1ea20e047af950aa776a7e43bdb694728c1c138dff6440" exitCode=143 Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.481702 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kqdf4" event={"ID":"c6239c91-d93d-4db8-ac4b-d44ddbc7c100","Type":"ContainerDied","Data":"4d0ba50e62341f4188f65f227813035f0416e7c9526a0ad88085759e9fa6360a"} Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.481733 4985 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7571f3c4f07bff9f8b363bb8ec376bdfc098c2d355269649ea21bae387b5efcc"} Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.481777 4985 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bef202d5ab4adb68897c4c8da62be022e95cd949fa91b3f80e3df449f0d72181"} Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.481786 4985 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2a1f1e90a882fa697c344cb87db77d2f0b02b33920f6555ca549ad51b1fb5c8a"} Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.481794 4985 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4b176f2daca0d26fc9e3d30a8547b5526c90221ce2932c01c876988e8606b4dd"} Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.481801 4985 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4dbb8096184a88c573060b14b7b5cac9a5abdc5af97ad7efdc50ba216e77522c"} Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.481808 4985 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e53f9d50e5099f9c55a0bf8a63c31bf7687c7b9df334251189af1c4c02df5dd6"} Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.481815 4985 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"740b7fd6437a51492236c929096639016d34d684d2742404a428355b3d9ee51e"} Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.481823 4985 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f164f612d1435a7f2a6677cb810bc836a44faa77952909e4290e31806ae631ff"} Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.481830 4985 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1bfc598056438a398f1ea20e047af950aa776a7e43bdb694728c1c138dff6440"} Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.481837 4985 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8c1c716f751dd0e139554e4bef4fc33694f4afa5482605f651326882af8d99c8"} Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.482235 4985 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-kube-api-access-rfqq2" (OuterVolumeSpecName: "kube-api-access-rfqq2") pod "c6239c91-d93d-4db8-ac4b-d44ddbc7c100" (UID: "c6239c91-d93d-4db8-ac4b-d44ddbc7c100"). InnerVolumeSpecName "kube-api-access-rfqq2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.488849 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "c6239c91-d93d-4db8-ac4b-d44ddbc7c100" (UID: "c6239c91-d93d-4db8-ac4b-d44ddbc7c100"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.501823 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-cqdrf_1ddda14a-730e-4c1f-afea-07c95221ba04/kube-multus/2.log"
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.512154 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-cqdrf_1ddda14a-730e-4c1f-afea-07c95221ba04/kube-multus/1.log"
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.512238 4985 generic.go:334] "Generic (PLEG): container finished" podID="1ddda14a-730e-4c1f-afea-07c95221ba04" containerID="2c6cceff4e44e436e1673ebf66431dd57c0d8f5b1ddc8c7a757ef3148da0526a" exitCode=2
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.512292 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-cqdrf" event={"ID":"1ddda14a-730e-4c1f-afea-07c95221ba04","Type":"ContainerDied","Data":"2c6cceff4e44e436e1673ebf66431dd57c0d8f5b1ddc8c7a757ef3148da0526a"}
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.512341 4985 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"611086eedd8a7318bff583bd65a81b3d4dd59b8be78744d6b5280bcbf9bd74b0"}
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.513095 4985 scope.go:117] "RemoveContainer" containerID="2c6cceff4e44e436e1673ebf66431dd57c0d8f5b1ddc8c7a757ef3148da0526a"
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.513374 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-trnnq"]
Jan 27 09:05:36 crc kubenswrapper[4985]: E0127 09:05:36.513466 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-cqdrf_openshift-multus(1ddda14a-730e-4c1f-afea-07c95221ba04)\"" pod="openshift-multus/multus-cqdrf" podUID="1ddda14a-730e-4c1f-afea-07c95221ba04"
Jan 27 09:05:36 crc kubenswrapper[4985]: E0127 09:05:36.513991 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6239c91-d93d-4db8-ac4b-d44ddbc7c100" containerName="northd"
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.514129 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6239c91-d93d-4db8-ac4b-d44ddbc7c100" containerName="northd"
Jan 27 09:05:36 crc kubenswrapper[4985]: E0127 09:05:36.514292 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6239c91-d93d-4db8-ac4b-d44ddbc7c100" containerName="nbdb"
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.514410 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6239c91-d93d-4db8-ac4b-d44ddbc7c100" containerName="nbdb"
Jan 27 09:05:36 crc kubenswrapper[4985]: E0127 09:05:36.514540 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6239c91-d93d-4db8-ac4b-d44ddbc7c100" containerName="kubecfg-setup"
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.514660 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6239c91-d93d-4db8-ac4b-d44ddbc7c100" containerName="kubecfg-setup"
Jan 27 09:05:36 crc kubenswrapper[4985]: E0127 09:05:36.514760 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6239c91-d93d-4db8-ac4b-d44ddbc7c100" containerName="ovn-controller"
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.514861 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6239c91-d93d-4db8-ac4b-d44ddbc7c100" containerName="ovn-controller"
Jan 27 09:05:36 crc kubenswrapper[4985]: E0127 09:05:36.514982 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6239c91-d93d-4db8-ac4b-d44ddbc7c100" containerName="ovnkube-controller"
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.515100 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6239c91-d93d-4db8-ac4b-d44ddbc7c100" containerName="ovnkube-controller"
Jan 27 09:05:36 crc kubenswrapper[4985]: E0127 09:05:36.515249 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6239c91-d93d-4db8-ac4b-d44ddbc7c100" containerName="ovnkube-controller"
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.515394 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6239c91-d93d-4db8-ac4b-d44ddbc7c100" containerName="ovnkube-controller"
Jan 27 09:05:36 crc kubenswrapper[4985]: E0127 09:05:36.515509 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6239c91-d93d-4db8-ac4b-d44ddbc7c100" containerName="ovnkube-controller"
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.515668 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6239c91-d93d-4db8-ac4b-d44ddbc7c100" containerName="ovnkube-controller"
Jan 27 09:05:36 crc kubenswrapper[4985]: E0127 09:05:36.515790 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6239c91-d93d-4db8-ac4b-d44ddbc7c100" containerName="ovn-acl-logging"
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.515893 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6239c91-d93d-4db8-ac4b-d44ddbc7c100" containerName="ovn-acl-logging"
Jan 27 09:05:36 crc kubenswrapper[4985]: E0127 09:05:36.516000 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6239c91-d93d-4db8-ac4b-d44ddbc7c100" containerName="ovnkube-controller"
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.516102 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6239c91-d93d-4db8-ac4b-d44ddbc7c100" containerName="ovnkube-controller"
Jan 27 09:05:36 crc kubenswrapper[4985]: E0127 09:05:36.516203 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6239c91-d93d-4db8-ac4b-d44ddbc7c100" containerName="sbdb"
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.516306 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6239c91-d93d-4db8-ac4b-d44ddbc7c100" containerName="sbdb"
Jan 27 09:05:36 crc kubenswrapper[4985]: E0127 09:05:36.516484 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6239c91-d93d-4db8-ac4b-d44ddbc7c100" containerName="kube-rbac-proxy-node"
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.516713 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6239c91-d93d-4db8-ac4b-d44ddbc7c100" containerName="kube-rbac-proxy-node"
Jan 27 09:05:36 crc kubenswrapper[4985]: E0127 09:05:36.516842 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6239c91-d93d-4db8-ac4b-d44ddbc7c100" containerName="kube-rbac-proxy-ovn-metrics"
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.516973 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6239c91-d93d-4db8-ac4b-d44ddbc7c100" containerName="kube-rbac-proxy-ovn-metrics"
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.517024 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "c6239c91-d93d-4db8-ac4b-d44ddbc7c100" (UID: "c6239c91-d93d-4db8-ac4b-d44ddbc7c100"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.517755 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6239c91-d93d-4db8-ac4b-d44ddbc7c100" containerName="kube-rbac-proxy-node"
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.518578 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6239c91-d93d-4db8-ac4b-d44ddbc7c100" containerName="ovnkube-controller"
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.518608 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6239c91-d93d-4db8-ac4b-d44ddbc7c100" containerName="ovn-controller"
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.518621 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6239c91-d93d-4db8-ac4b-d44ddbc7c100" containerName="sbdb"
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.518632 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6239c91-d93d-4db8-ac4b-d44ddbc7c100" containerName="ovnkube-controller"
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.518643 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6239c91-d93d-4db8-ac4b-d44ddbc7c100" containerName="ovnkube-controller"
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.518657 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6239c91-d93d-4db8-ac4b-d44ddbc7c100" containerName="ovnkube-controller"
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.518678 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6239c91-d93d-4db8-ac4b-d44ddbc7c100" containerName="kube-rbac-proxy-ovn-metrics"
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.518745 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6239c91-d93d-4db8-ac4b-d44ddbc7c100" containerName="ovn-acl-logging"
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.518771 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6239c91-d93d-4db8-ac4b-d44ddbc7c100" containerName="nbdb"
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.518809 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6239c91-d93d-4db8-ac4b-d44ddbc7c100" containerName="northd"
Jan 27 09:05:36 crc kubenswrapper[4985]: E0127 09:05:36.519053 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6239c91-d93d-4db8-ac4b-d44ddbc7c100" containerName="ovnkube-controller"
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.519069 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6239c91-d93d-4db8-ac4b-d44ddbc7c100" containerName="ovnkube-controller"
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.519223 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6239c91-d93d-4db8-ac4b-d44ddbc7c100" containerName="ovnkube-controller"
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.522118 4985 scope.go:117] "RemoveContainer" containerID="bef202d5ab4adb68897c4c8da62be022e95cd949fa91b3f80e3df449f0d72181"
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.523835 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-trnnq"
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.555207 4985 scope.go:117] "RemoveContainer" containerID="2a1f1e90a882fa697c344cb87db77d2f0b02b33920f6555ca549ad51b1fb5c8a"
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.572282 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/26abbe36-eaa4-44a9-b782-a65e4d266519-host-slash\") pod \"ovnkube-node-trnnq\" (UID: \"26abbe36-eaa4-44a9-b782-a65e4d266519\") " pod="openshift-ovn-kubernetes/ovnkube-node-trnnq"
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.572348 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/26abbe36-eaa4-44a9-b782-a65e4d266519-ovnkube-script-lib\") pod \"ovnkube-node-trnnq\" (UID: \"26abbe36-eaa4-44a9-b782-a65e4d266519\") " pod="openshift-ovn-kubernetes/ovnkube-node-trnnq"
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.572376 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/26abbe36-eaa4-44a9-b782-a65e4d266519-host-run-netns\") pod \"ovnkube-node-trnnq\" (UID: \"26abbe36-eaa4-44a9-b782-a65e4d266519\") " pod="openshift-ovn-kubernetes/ovnkube-node-trnnq"
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.572396 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/26abbe36-eaa4-44a9-b782-a65e4d266519-log-socket\") pod \"ovnkube-node-trnnq\" (UID: \"26abbe36-eaa4-44a9-b782-a65e4d266519\") " pod="openshift-ovn-kubernetes/ovnkube-node-trnnq"
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.572422 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/26abbe36-eaa4-44a9-b782-a65e4d266519-host-cni-bin\") pod \"ovnkube-node-trnnq\" (UID: \"26abbe36-eaa4-44a9-b782-a65e4d266519\") " pod="openshift-ovn-kubernetes/ovnkube-node-trnnq"
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.572446 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/26abbe36-eaa4-44a9-b782-a65e4d266519-env-overrides\") pod \"ovnkube-node-trnnq\" (UID: \"26abbe36-eaa4-44a9-b782-a65e4d266519\") " pod="openshift-ovn-kubernetes/ovnkube-node-trnnq"
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.572612 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/26abbe36-eaa4-44a9-b782-a65e4d266519-etc-openvswitch\") pod \"ovnkube-node-trnnq\" (UID: \"26abbe36-eaa4-44a9-b782-a65e4d266519\") " pod="openshift-ovn-kubernetes/ovnkube-node-trnnq"
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.572637 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/26abbe36-eaa4-44a9-b782-a65e4d266519-host-kubelet\") pod \"ovnkube-node-trnnq\" (UID: \"26abbe36-eaa4-44a9-b782-a65e4d266519\") " pod="openshift-ovn-kubernetes/ovnkube-node-trnnq"
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.572663 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54ks4\" (UniqueName: \"kubernetes.io/projected/26abbe36-eaa4-44a9-b782-a65e4d266519-kube-api-access-54ks4\") pod \"ovnkube-node-trnnq\" (UID: \"26abbe36-eaa4-44a9-b782-a65e4d266519\") " pod="openshift-ovn-kubernetes/ovnkube-node-trnnq"
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.572711 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/26abbe36-eaa4-44a9-b782-a65e4d266519-host-run-ovn-kubernetes\") pod \"ovnkube-node-trnnq\" (UID: \"26abbe36-eaa4-44a9-b782-a65e4d266519\") " pod="openshift-ovn-kubernetes/ovnkube-node-trnnq"
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.572737 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/26abbe36-eaa4-44a9-b782-a65e4d266519-run-openvswitch\") pod \"ovnkube-node-trnnq\" (UID: \"26abbe36-eaa4-44a9-b782-a65e4d266519\") " pod="openshift-ovn-kubernetes/ovnkube-node-trnnq"
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.572757 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/26abbe36-eaa4-44a9-b782-a65e4d266519-run-systemd\") pod \"ovnkube-node-trnnq\" (UID: \"26abbe36-eaa4-44a9-b782-a65e4d266519\") " pod="openshift-ovn-kubernetes/ovnkube-node-trnnq"
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.572797 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/26abbe36-eaa4-44a9-b782-a65e4d266519-run-ovn\") pod \"ovnkube-node-trnnq\" (UID: \"26abbe36-eaa4-44a9-b782-a65e4d266519\") " pod="openshift-ovn-kubernetes/ovnkube-node-trnnq"
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.572826 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/26abbe36-eaa4-44a9-b782-a65e4d266519-var-lib-openvswitch\") pod \"ovnkube-node-trnnq\" (UID: \"26abbe36-eaa4-44a9-b782-a65e4d266519\") " pod="openshift-ovn-kubernetes/ovnkube-node-trnnq"
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.572850 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/26abbe36-eaa4-44a9-b782-a65e4d266519-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-trnnq\" (UID: \"26abbe36-eaa4-44a9-b782-a65e4d266519\") " pod="openshift-ovn-kubernetes/ovnkube-node-trnnq"
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.572884 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/26abbe36-eaa4-44a9-b782-a65e4d266519-host-cni-netd\") pod \"ovnkube-node-trnnq\" (UID: \"26abbe36-eaa4-44a9-b782-a65e4d266519\") " pod="openshift-ovn-kubernetes/ovnkube-node-trnnq"
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.572909 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/26abbe36-eaa4-44a9-b782-a65e4d266519-systemd-units\") pod \"ovnkube-node-trnnq\" (UID: \"26abbe36-eaa4-44a9-b782-a65e4d266519\") " pod="openshift-ovn-kubernetes/ovnkube-node-trnnq"
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.572931 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/26abbe36-eaa4-44a9-b782-a65e4d266519-node-log\") pod \"ovnkube-node-trnnq\" (UID: \"26abbe36-eaa4-44a9-b782-a65e4d266519\") " pod="openshift-ovn-kubernetes/ovnkube-node-trnnq"
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.572959 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/26abbe36-eaa4-44a9-b782-a65e4d266519-ovn-node-metrics-cert\") pod \"ovnkube-node-trnnq\" (UID: \"26abbe36-eaa4-44a9-b782-a65e4d266519\") " pod="openshift-ovn-kubernetes/ovnkube-node-trnnq"
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.572984 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/26abbe36-eaa4-44a9-b782-a65e4d266519-ovnkube-config\") pod \"ovnkube-node-trnnq\" (UID: \"26abbe36-eaa4-44a9-b782-a65e4d266519\") " pod="openshift-ovn-kubernetes/ovnkube-node-trnnq"
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.573042 4985 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-run-openvswitch\") on node \"crc\" DevicePath \"\""
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.573056 4985 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-node-log\") on node \"crc\" DevicePath \"\""
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.573070 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfqq2\" (UniqueName: \"kubernetes.io/projected/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-kube-api-access-rfqq2\") on node \"crc\" DevicePath \"\""
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.573083 4985 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-run-systemd\") on node \"crc\" DevicePath \"\""
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.573095 4985 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-host-cni-netd\") on node \"crc\" DevicePath \"\""
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.573107 4985 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-env-overrides\") on node \"crc\" DevicePath \"\""
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.573119 4985 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.573131 4985 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.573142 4985 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-log-socket\") on node \"crc\" DevicePath \"\""
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.573154 4985 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-host-slash\") on node \"crc\" DevicePath \"\""
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.573165 4985 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.573177 4985 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-ovnkube-config\") on node \"crc\" DevicePath \"\""
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.573190 4985 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-systemd-units\") on node \"crc\" DevicePath \"\""
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.573222 4985 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-etc-openvswitch\") on node \"crc\" DevicePath \"\""
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.573256 4985 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-host-kubelet\") on node \"crc\" DevicePath \"\""
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.573271 4985 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c6239c91-d93d-4db8-ac4b-d44ddbc7c100-host-cni-bin\") on node \"crc\" DevicePath \"\""
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.601594 4985 scope.go:117] "RemoveContainer" containerID="4b176f2daca0d26fc9e3d30a8547b5526c90221ce2932c01c876988e8606b4dd"
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.617886 4985 scope.go:117] "RemoveContainer" containerID="4dbb8096184a88c573060b14b7b5cac9a5abdc5af97ad7efdc50ba216e77522c"
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.639690 4985 scope.go:117] "RemoveContainer" containerID="e53f9d50e5099f9c55a0bf8a63c31bf7687c7b9df334251189af1c4c02df5dd6"
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.653082 4985 scope.go:117] "RemoveContainer" containerID="740b7fd6437a51492236c929096639016d34d684d2742404a428355b3d9ee51e"
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.666423 4985 scope.go:117] "RemoveContainer" containerID="f164f612d1435a7f2a6677cb810bc836a44faa77952909e4290e31806ae631ff"
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.674736 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/26abbe36-eaa4-44a9-b782-a65e4d266519-var-lib-openvswitch\") pod \"ovnkube-node-trnnq\" (UID: \"26abbe36-eaa4-44a9-b782-a65e4d266519\") " pod="openshift-ovn-kubernetes/ovnkube-node-trnnq"
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.674783 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/26abbe36-eaa4-44a9-b782-a65e4d266519-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-trnnq\" (UID: \"26abbe36-eaa4-44a9-b782-a65e4d266519\") " pod="openshift-ovn-kubernetes/ovnkube-node-trnnq"
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.674825 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/26abbe36-eaa4-44a9-b782-a65e4d266519-host-cni-netd\") pod \"ovnkube-node-trnnq\" (UID: \"26abbe36-eaa4-44a9-b782-a65e4d266519\") " pod="openshift-ovn-kubernetes/ovnkube-node-trnnq"
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.674856 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/26abbe36-eaa4-44a9-b782-a65e4d266519-systemd-units\") pod \"ovnkube-node-trnnq\" (UID: \"26abbe36-eaa4-44a9-b782-a65e4d266519\") " pod="openshift-ovn-kubernetes/ovnkube-node-trnnq"
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.674881 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/26abbe36-eaa4-44a9-b782-a65e4d266519-node-log\") pod \"ovnkube-node-trnnq\" (UID: \"26abbe36-eaa4-44a9-b782-a65e4d266519\") " pod="openshift-ovn-kubernetes/ovnkube-node-trnnq"
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.674908 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/26abbe36-eaa4-44a9-b782-a65e4d266519-ovn-node-metrics-cert\") pod \"ovnkube-node-trnnq\" (UID: \"26abbe36-eaa4-44a9-b782-a65e4d266519\") " pod="openshift-ovn-kubernetes/ovnkube-node-trnnq"
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.674929 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/26abbe36-eaa4-44a9-b782-a65e4d266519-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-trnnq\" (UID: \"26abbe36-eaa4-44a9-b782-a65e4d266519\") " pod="openshift-ovn-kubernetes/ovnkube-node-trnnq"
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.674934 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/26abbe36-eaa4-44a9-b782-a65e4d266519-ovnkube-config\") pod \"ovnkube-node-trnnq\" (UID: \"26abbe36-eaa4-44a9-b782-a65e4d266519\") " pod="openshift-ovn-kubernetes/ovnkube-node-trnnq"
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.675017 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/26abbe36-eaa4-44a9-b782-a65e4d266519-host-slash\") pod \"ovnkube-node-trnnq\" (UID: \"26abbe36-eaa4-44a9-b782-a65e4d266519\") " pod="openshift-ovn-kubernetes/ovnkube-node-trnnq"
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.675039 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/26abbe36-eaa4-44a9-b782-a65e4d266519-ovnkube-script-lib\") pod \"ovnkube-node-trnnq\" (UID: \"26abbe36-eaa4-44a9-b782-a65e4d266519\") " pod="openshift-ovn-kubernetes/ovnkube-node-trnnq"
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.675067 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/26abbe36-eaa4-44a9-b782-a65e4d266519-host-run-netns\") pod \"ovnkube-node-trnnq\" (UID: \"26abbe36-eaa4-44a9-b782-a65e4d266519\") " pod="openshift-ovn-kubernetes/ovnkube-node-trnnq"
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.675082 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/26abbe36-eaa4-44a9-b782-a65e4d266519-log-socket\") pod \"ovnkube-node-trnnq\" (UID: \"26abbe36-eaa4-44a9-b782-a65e4d266519\") " pod="openshift-ovn-kubernetes/ovnkube-node-trnnq"
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.675101 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/26abbe36-eaa4-44a9-b782-a65e4d266519-host-cni-bin\") pod \"ovnkube-node-trnnq\" (UID: \"26abbe36-eaa4-44a9-b782-a65e4d266519\") " pod="openshift-ovn-kubernetes/ovnkube-node-trnnq"
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.675121 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/26abbe36-eaa4-44a9-b782-a65e4d266519-env-overrides\") pod \"ovnkube-node-trnnq\" (UID: \"26abbe36-eaa4-44a9-b782-a65e4d266519\") " pod="openshift-ovn-kubernetes/ovnkube-node-trnnq"
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.675165 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/26abbe36-eaa4-44a9-b782-a65e4d266519-etc-openvswitch\") pod \"ovnkube-node-trnnq\" (UID: \"26abbe36-eaa4-44a9-b782-a65e4d266519\") " pod="openshift-ovn-kubernetes/ovnkube-node-trnnq"
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.675180 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/26abbe36-eaa4-44a9-b782-a65e4d266519-host-kubelet\") pod \"ovnkube-node-trnnq\" (UID: \"26abbe36-eaa4-44a9-b782-a65e4d266519\") " pod="openshift-ovn-kubernetes/ovnkube-node-trnnq"
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.675211 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54ks4\" (UniqueName: \"kubernetes.io/projected/26abbe36-eaa4-44a9-b782-a65e4d266519-kube-api-access-54ks4\") pod \"ovnkube-node-trnnq\" (UID: \"26abbe36-eaa4-44a9-b782-a65e4d266519\") " pod="openshift-ovn-kubernetes/ovnkube-node-trnnq"
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.675256 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/26abbe36-eaa4-44a9-b782-a65e4d266519-host-run-ovn-kubernetes\") pod \"ovnkube-node-trnnq\" (UID: \"26abbe36-eaa4-44a9-b782-a65e4d266519\") " pod="openshift-ovn-kubernetes/ovnkube-node-trnnq"
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.675286 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/26abbe36-eaa4-44a9-b782-a65e4d266519-run-openvswitch\") pod \"ovnkube-node-trnnq\" (UID: \"26abbe36-eaa4-44a9-b782-a65e4d266519\") " pod="openshift-ovn-kubernetes/ovnkube-node-trnnq"
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.675307 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/26abbe36-eaa4-44a9-b782-a65e4d266519-run-systemd\") pod \"ovnkube-node-trnnq\" (UID: \"26abbe36-eaa4-44a9-b782-a65e4d266519\") " pod="openshift-ovn-kubernetes/ovnkube-node-trnnq"
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.675328 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/26abbe36-eaa4-44a9-b782-a65e4d266519-run-ovn\") pod \"ovnkube-node-trnnq\" (UID: \"26abbe36-eaa4-44a9-b782-a65e4d266519\") " pod="openshift-ovn-kubernetes/ovnkube-node-trnnq"
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.675500 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/26abbe36-eaa4-44a9-b782-a65e4d266519-run-ovn\") pod \"ovnkube-node-trnnq\" (UID: \"26abbe36-eaa4-44a9-b782-a65e4d266519\") " pod="openshift-ovn-kubernetes/ovnkube-node-trnnq"
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.675562 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/26abbe36-eaa4-44a9-b782-a65e4d266519-host-slash\") pod \"ovnkube-node-trnnq\" (UID: \"26abbe36-eaa4-44a9-b782-a65e4d266519\") " pod="openshift-ovn-kubernetes/ovnkube-node-trnnq"
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.675663 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/26abbe36-eaa4-44a9-b782-a65e4d266519-ovnkube-config\") pod \"ovnkube-node-trnnq\" (UID: \"26abbe36-eaa4-44a9-b782-a65e4d266519\") " pod="openshift-ovn-kubernetes/ovnkube-node-trnnq"
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.675725 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/26abbe36-eaa4-44a9-b782-a65e4d266519-systemd-units\") pod \"ovnkube-node-trnnq\" (UID: \"26abbe36-eaa4-44a9-b782-a65e4d266519\") " pod="openshift-ovn-kubernetes/ovnkube-node-trnnq"
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.675761 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/26abbe36-eaa4-44a9-b782-a65e4d266519-node-log\") pod \"ovnkube-node-trnnq\" (UID: \"26abbe36-eaa4-44a9-b782-a65e4d266519\") " pod="openshift-ovn-kubernetes/ovnkube-node-trnnq"
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.675749 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/26abbe36-eaa4-44a9-b782-a65e4d266519-log-socket\") pod \"ovnkube-node-trnnq\" (UID: \"26abbe36-eaa4-44a9-b782-a65e4d266519\") " pod="openshift-ovn-kubernetes/ovnkube-node-trnnq"
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.674908 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/26abbe36-eaa4-44a9-b782-a65e4d266519-host-cni-netd\") pod \"ovnkube-node-trnnq\" (UID: \"26abbe36-eaa4-44a9-b782-a65e4d266519\") " pod="openshift-ovn-kubernetes/ovnkube-node-trnnq"
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.674880 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/26abbe36-eaa4-44a9-b782-a65e4d266519-var-lib-openvswitch\") pod \"ovnkube-node-trnnq\" (UID: \"26abbe36-eaa4-44a9-b782-a65e4d266519\") " pod="openshift-ovn-kubernetes/ovnkube-node-trnnq"
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.675846 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/26abbe36-eaa4-44a9-b782-a65e4d266519-host-run-netns\") pod \"ovnkube-node-trnnq\" (UID: \"26abbe36-eaa4-44a9-b782-a65e4d266519\") " pod="openshift-ovn-kubernetes/ovnkube-node-trnnq"
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.675924 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/26abbe36-eaa4-44a9-b782-a65e4d266519-host-kubelet\") pod \"ovnkube-node-trnnq\" (UID: \"26abbe36-eaa4-44a9-b782-a65e4d266519\") " pod="openshift-ovn-kubernetes/ovnkube-node-trnnq"
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.675939 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/26abbe36-eaa4-44a9-b782-a65e4d266519-run-openvswitch\") pod \"ovnkube-node-trnnq\" (UID: \"26abbe36-eaa4-44a9-b782-a65e4d266519\") " pod="openshift-ovn-kubernetes/ovnkube-node-trnnq"
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.675939 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/26abbe36-eaa4-44a9-b782-a65e4d266519-run-systemd\") pod \"ovnkube-node-trnnq\" (UID: \"26abbe36-eaa4-44a9-b782-a65e4d266519\") " pod="openshift-ovn-kubernetes/ovnkube-node-trnnq"
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.675948 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/26abbe36-eaa4-44a9-b782-a65e4d266519-host-run-ovn-kubernetes\") pod \"ovnkube-node-trnnq\" (UID: \"26abbe36-eaa4-44a9-b782-a65e4d266519\") " pod="openshift-ovn-kubernetes/ovnkube-node-trnnq"
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.676056 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/26abbe36-eaa4-44a9-b782-a65e4d266519-host-cni-bin\") pod \"ovnkube-node-trnnq\" (UID: \"26abbe36-eaa4-44a9-b782-a65e4d266519\") " pod="openshift-ovn-kubernetes/ovnkube-node-trnnq"
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.676092 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/26abbe36-eaa4-44a9-b782-a65e4d266519-ovnkube-script-lib\") pod \"ovnkube-node-trnnq\" (UID: \"26abbe36-eaa4-44a9-b782-a65e4d266519\") " pod="openshift-ovn-kubernetes/ovnkube-node-trnnq"
Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.676412 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/26abbe36-eaa4-44a9-b782-a65e4d266519-etc-openvswitch\") pod \"ovnkube-node-trnnq\" (UID: \"26abbe36-eaa4-44a9-b782-a65e4d266519\") " pod="openshift-ovn-kubernetes/ovnkube-node-trnnq"
Jan 27 
09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.676591 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/26abbe36-eaa4-44a9-b782-a65e4d266519-env-overrides\") pod \"ovnkube-node-trnnq\" (UID: \"26abbe36-eaa4-44a9-b782-a65e4d266519\") " pod="openshift-ovn-kubernetes/ovnkube-node-trnnq" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.678420 4985 scope.go:117] "RemoveContainer" containerID="1bfc598056438a398f1ea20e047af950aa776a7e43bdb694728c1c138dff6440" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.684060 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/26abbe36-eaa4-44a9-b782-a65e4d266519-ovn-node-metrics-cert\") pod \"ovnkube-node-trnnq\" (UID: \"26abbe36-eaa4-44a9-b782-a65e4d266519\") " pod="openshift-ovn-kubernetes/ovnkube-node-trnnq" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.694308 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54ks4\" (UniqueName: \"kubernetes.io/projected/26abbe36-eaa4-44a9-b782-a65e4d266519-kube-api-access-54ks4\") pod \"ovnkube-node-trnnq\" (UID: \"26abbe36-eaa4-44a9-b782-a65e4d266519\") " pod="openshift-ovn-kubernetes/ovnkube-node-trnnq" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.706099 4985 scope.go:117] "RemoveContainer" containerID="8c1c716f751dd0e139554e4bef4fc33694f4afa5482605f651326882af8d99c8" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.721205 4985 scope.go:117] "RemoveContainer" containerID="1bfc598056438a398f1ea20e047af950aa776a7e43bdb694728c1c138dff6440" Jan 27 09:05:36 crc kubenswrapper[4985]: E0127 09:05:36.727763 4985 log.go:32] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to delete container k8s_ovn-controller_ovnkube-node-kqdf4_openshift-ovn-kubernetes_c6239c91-d93d-4db8-ac4b-d44ddbc7c100_0 in pod sandbox 
4d0ba50e62341f4188f65f227813035f0416e7c9526a0ad88085759e9fa6360a from index: no such id: '1bfc598056438a398f1ea20e047af950aa776a7e43bdb694728c1c138dff6440'" containerID="1bfc598056438a398f1ea20e047af950aa776a7e43bdb694728c1c138dff6440" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.727818 4985 scope.go:117] "RemoveContainer" containerID="8c1c716f751dd0e139554e4bef4fc33694f4afa5482605f651326882af8d99c8" Jan 27 09:05:36 crc kubenswrapper[4985]: E0127 09:05:36.727840 4985 kuberuntime_gc.go:150] "Failed to remove container" err="rpc error: code = Unknown desc = failed to delete container k8s_ovn-controller_ovnkube-node-kqdf4_openshift-ovn-kubernetes_c6239c91-d93d-4db8-ac4b-d44ddbc7c100_0 in pod sandbox 4d0ba50e62341f4188f65f227813035f0416e7c9526a0ad88085759e9fa6360a from index: no such id: '1bfc598056438a398f1ea20e047af950aa776a7e43bdb694728c1c138dff6440'" containerID="1bfc598056438a398f1ea20e047af950aa776a7e43bdb694728c1c138dff6440" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.727869 4985 scope.go:117] "RemoveContainer" containerID="611086eedd8a7318bff583bd65a81b3d4dd59b8be78744d6b5280bcbf9bd74b0" Jan 27 09:05:36 crc kubenswrapper[4985]: E0127 09:05:36.728410 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c1c716f751dd0e139554e4bef4fc33694f4afa5482605f651326882af8d99c8\": container with ID starting with 8c1c716f751dd0e139554e4bef4fc33694f4afa5482605f651326882af8d99c8 not found: ID does not exist" containerID="8c1c716f751dd0e139554e4bef4fc33694f4afa5482605f651326882af8d99c8" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.728457 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c1c716f751dd0e139554e4bef4fc33694f4afa5482605f651326882af8d99c8"} err="failed to get container status \"8c1c716f751dd0e139554e4bef4fc33694f4afa5482605f651326882af8d99c8\": rpc error: code = NotFound desc = could not find container 
\"8c1c716f751dd0e139554e4bef4fc33694f4afa5482605f651326882af8d99c8\": container with ID starting with 8c1c716f751dd0e139554e4bef4fc33694f4afa5482605f651326882af8d99c8 not found: ID does not exist" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.728479 4985 scope.go:117] "RemoveContainer" containerID="7571f3c4f07bff9f8b363bb8ec376bdfc098c2d355269649ea21bae387b5efcc" Jan 27 09:05:36 crc kubenswrapper[4985]: E0127 09:05:36.728756 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7571f3c4f07bff9f8b363bb8ec376bdfc098c2d355269649ea21bae387b5efcc\": container with ID starting with 7571f3c4f07bff9f8b363bb8ec376bdfc098c2d355269649ea21bae387b5efcc not found: ID does not exist" containerID="7571f3c4f07bff9f8b363bb8ec376bdfc098c2d355269649ea21bae387b5efcc" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.728779 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7571f3c4f07bff9f8b363bb8ec376bdfc098c2d355269649ea21bae387b5efcc"} err="failed to get container status \"7571f3c4f07bff9f8b363bb8ec376bdfc098c2d355269649ea21bae387b5efcc\": rpc error: code = NotFound desc = could not find container \"7571f3c4f07bff9f8b363bb8ec376bdfc098c2d355269649ea21bae387b5efcc\": container with ID starting with 7571f3c4f07bff9f8b363bb8ec376bdfc098c2d355269649ea21bae387b5efcc not found: ID does not exist" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.728796 4985 scope.go:117] "RemoveContainer" containerID="bef202d5ab4adb68897c4c8da62be022e95cd949fa91b3f80e3df449f0d72181" Jan 27 09:05:36 crc kubenswrapper[4985]: E0127 09:05:36.729104 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bef202d5ab4adb68897c4c8da62be022e95cd949fa91b3f80e3df449f0d72181\": container with ID starting with bef202d5ab4adb68897c4c8da62be022e95cd949fa91b3f80e3df449f0d72181 not found: ID does not exist" 
containerID="bef202d5ab4adb68897c4c8da62be022e95cd949fa91b3f80e3df449f0d72181" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.729153 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bef202d5ab4adb68897c4c8da62be022e95cd949fa91b3f80e3df449f0d72181"} err="failed to get container status \"bef202d5ab4adb68897c4c8da62be022e95cd949fa91b3f80e3df449f0d72181\": rpc error: code = NotFound desc = could not find container \"bef202d5ab4adb68897c4c8da62be022e95cd949fa91b3f80e3df449f0d72181\": container with ID starting with bef202d5ab4adb68897c4c8da62be022e95cd949fa91b3f80e3df449f0d72181 not found: ID does not exist" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.729194 4985 scope.go:117] "RemoveContainer" containerID="2a1f1e90a882fa697c344cb87db77d2f0b02b33920f6555ca549ad51b1fb5c8a" Jan 27 09:05:36 crc kubenswrapper[4985]: E0127 09:05:36.729501 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a1f1e90a882fa697c344cb87db77d2f0b02b33920f6555ca549ad51b1fb5c8a\": container with ID starting with 2a1f1e90a882fa697c344cb87db77d2f0b02b33920f6555ca549ad51b1fb5c8a not found: ID does not exist" containerID="2a1f1e90a882fa697c344cb87db77d2f0b02b33920f6555ca549ad51b1fb5c8a" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.729643 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a1f1e90a882fa697c344cb87db77d2f0b02b33920f6555ca549ad51b1fb5c8a"} err="failed to get container status \"2a1f1e90a882fa697c344cb87db77d2f0b02b33920f6555ca549ad51b1fb5c8a\": rpc error: code = NotFound desc = could not find container \"2a1f1e90a882fa697c344cb87db77d2f0b02b33920f6555ca549ad51b1fb5c8a\": container with ID starting with 2a1f1e90a882fa697c344cb87db77d2f0b02b33920f6555ca549ad51b1fb5c8a not found: ID does not exist" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.729701 4985 scope.go:117] 
"RemoveContainer" containerID="4b176f2daca0d26fc9e3d30a8547b5526c90221ce2932c01c876988e8606b4dd" Jan 27 09:05:36 crc kubenswrapper[4985]: E0127 09:05:36.730982 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b176f2daca0d26fc9e3d30a8547b5526c90221ce2932c01c876988e8606b4dd\": container with ID starting with 4b176f2daca0d26fc9e3d30a8547b5526c90221ce2932c01c876988e8606b4dd not found: ID does not exist" containerID="4b176f2daca0d26fc9e3d30a8547b5526c90221ce2932c01c876988e8606b4dd" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.731010 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b176f2daca0d26fc9e3d30a8547b5526c90221ce2932c01c876988e8606b4dd"} err="failed to get container status \"4b176f2daca0d26fc9e3d30a8547b5526c90221ce2932c01c876988e8606b4dd\": rpc error: code = NotFound desc = could not find container \"4b176f2daca0d26fc9e3d30a8547b5526c90221ce2932c01c876988e8606b4dd\": container with ID starting with 4b176f2daca0d26fc9e3d30a8547b5526c90221ce2932c01c876988e8606b4dd not found: ID does not exist" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.731025 4985 scope.go:117] "RemoveContainer" containerID="4dbb8096184a88c573060b14b7b5cac9a5abdc5af97ad7efdc50ba216e77522c" Jan 27 09:05:36 crc kubenswrapper[4985]: E0127 09:05:36.731375 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4dbb8096184a88c573060b14b7b5cac9a5abdc5af97ad7efdc50ba216e77522c\": container with ID starting with 4dbb8096184a88c573060b14b7b5cac9a5abdc5af97ad7efdc50ba216e77522c not found: ID does not exist" containerID="4dbb8096184a88c573060b14b7b5cac9a5abdc5af97ad7efdc50ba216e77522c" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.731402 4985 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"4dbb8096184a88c573060b14b7b5cac9a5abdc5af97ad7efdc50ba216e77522c"} err="failed to get container status \"4dbb8096184a88c573060b14b7b5cac9a5abdc5af97ad7efdc50ba216e77522c\": rpc error: code = NotFound desc = could not find container \"4dbb8096184a88c573060b14b7b5cac9a5abdc5af97ad7efdc50ba216e77522c\": container with ID starting with 4dbb8096184a88c573060b14b7b5cac9a5abdc5af97ad7efdc50ba216e77522c not found: ID does not exist" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.731421 4985 scope.go:117] "RemoveContainer" containerID="e53f9d50e5099f9c55a0bf8a63c31bf7687c7b9df334251189af1c4c02df5dd6" Jan 27 09:05:36 crc kubenswrapper[4985]: E0127 09:05:36.731657 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e53f9d50e5099f9c55a0bf8a63c31bf7687c7b9df334251189af1c4c02df5dd6\": container with ID starting with e53f9d50e5099f9c55a0bf8a63c31bf7687c7b9df334251189af1c4c02df5dd6 not found: ID does not exist" containerID="e53f9d50e5099f9c55a0bf8a63c31bf7687c7b9df334251189af1c4c02df5dd6" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.731738 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e53f9d50e5099f9c55a0bf8a63c31bf7687c7b9df334251189af1c4c02df5dd6"} err="failed to get container status \"e53f9d50e5099f9c55a0bf8a63c31bf7687c7b9df334251189af1c4c02df5dd6\": rpc error: code = NotFound desc = could not find container \"e53f9d50e5099f9c55a0bf8a63c31bf7687c7b9df334251189af1c4c02df5dd6\": container with ID starting with e53f9d50e5099f9c55a0bf8a63c31bf7687c7b9df334251189af1c4c02df5dd6 not found: ID does not exist" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.731827 4985 scope.go:117] "RemoveContainer" containerID="740b7fd6437a51492236c929096639016d34d684d2742404a428355b3d9ee51e" Jan 27 09:05:36 crc kubenswrapper[4985]: E0127 09:05:36.732227 4985 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"740b7fd6437a51492236c929096639016d34d684d2742404a428355b3d9ee51e\": container with ID starting with 740b7fd6437a51492236c929096639016d34d684d2742404a428355b3d9ee51e not found: ID does not exist" containerID="740b7fd6437a51492236c929096639016d34d684d2742404a428355b3d9ee51e" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.732252 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"740b7fd6437a51492236c929096639016d34d684d2742404a428355b3d9ee51e"} err="failed to get container status \"740b7fd6437a51492236c929096639016d34d684d2742404a428355b3d9ee51e\": rpc error: code = NotFound desc = could not find container \"740b7fd6437a51492236c929096639016d34d684d2742404a428355b3d9ee51e\": container with ID starting with 740b7fd6437a51492236c929096639016d34d684d2742404a428355b3d9ee51e not found: ID does not exist" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.732269 4985 scope.go:117] "RemoveContainer" containerID="f164f612d1435a7f2a6677cb810bc836a44faa77952909e4290e31806ae631ff" Jan 27 09:05:36 crc kubenswrapper[4985]: E0127 09:05:36.732495 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f164f612d1435a7f2a6677cb810bc836a44faa77952909e4290e31806ae631ff\": container with ID starting with f164f612d1435a7f2a6677cb810bc836a44faa77952909e4290e31806ae631ff not found: ID does not exist" containerID="f164f612d1435a7f2a6677cb810bc836a44faa77952909e4290e31806ae631ff" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.732536 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f164f612d1435a7f2a6677cb810bc836a44faa77952909e4290e31806ae631ff"} err="failed to get container status \"f164f612d1435a7f2a6677cb810bc836a44faa77952909e4290e31806ae631ff\": rpc error: code = NotFound desc = could not find container 
\"f164f612d1435a7f2a6677cb810bc836a44faa77952909e4290e31806ae631ff\": container with ID starting with f164f612d1435a7f2a6677cb810bc836a44faa77952909e4290e31806ae631ff not found: ID does not exist" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.732551 4985 scope.go:117] "RemoveContainer" containerID="1bfc598056438a398f1ea20e047af950aa776a7e43bdb694728c1c138dff6440" Jan 27 09:05:36 crc kubenswrapper[4985]: E0127 09:05:36.732820 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1bfc598056438a398f1ea20e047af950aa776a7e43bdb694728c1c138dff6440\": container with ID starting with 1bfc598056438a398f1ea20e047af950aa776a7e43bdb694728c1c138dff6440 not found: ID does not exist" containerID="1bfc598056438a398f1ea20e047af950aa776a7e43bdb694728c1c138dff6440" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.732849 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bfc598056438a398f1ea20e047af950aa776a7e43bdb694728c1c138dff6440"} err="failed to get container status \"1bfc598056438a398f1ea20e047af950aa776a7e43bdb694728c1c138dff6440\": rpc error: code = NotFound desc = could not find container \"1bfc598056438a398f1ea20e047af950aa776a7e43bdb694728c1c138dff6440\": container with ID starting with 1bfc598056438a398f1ea20e047af950aa776a7e43bdb694728c1c138dff6440 not found: ID does not exist" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.732864 4985 scope.go:117] "RemoveContainer" containerID="8c1c716f751dd0e139554e4bef4fc33694f4afa5482605f651326882af8d99c8" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.733259 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c1c716f751dd0e139554e4bef4fc33694f4afa5482605f651326882af8d99c8"} err="failed to get container status \"8c1c716f751dd0e139554e4bef4fc33694f4afa5482605f651326882af8d99c8\": rpc error: code = NotFound desc = could not find 
container \"8c1c716f751dd0e139554e4bef4fc33694f4afa5482605f651326882af8d99c8\": container with ID starting with 8c1c716f751dd0e139554e4bef4fc33694f4afa5482605f651326882af8d99c8 not found: ID does not exist" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.733351 4985 scope.go:117] "RemoveContainer" containerID="7571f3c4f07bff9f8b363bb8ec376bdfc098c2d355269649ea21bae387b5efcc" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.733792 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7571f3c4f07bff9f8b363bb8ec376bdfc098c2d355269649ea21bae387b5efcc"} err="failed to get container status \"7571f3c4f07bff9f8b363bb8ec376bdfc098c2d355269649ea21bae387b5efcc\": rpc error: code = NotFound desc = could not find container \"7571f3c4f07bff9f8b363bb8ec376bdfc098c2d355269649ea21bae387b5efcc\": container with ID starting with 7571f3c4f07bff9f8b363bb8ec376bdfc098c2d355269649ea21bae387b5efcc not found: ID does not exist" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.734505 4985 scope.go:117] "RemoveContainer" containerID="bef202d5ab4adb68897c4c8da62be022e95cd949fa91b3f80e3df449f0d72181" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.734886 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bef202d5ab4adb68897c4c8da62be022e95cd949fa91b3f80e3df449f0d72181"} err="failed to get container status \"bef202d5ab4adb68897c4c8da62be022e95cd949fa91b3f80e3df449f0d72181\": rpc error: code = NotFound desc = could not find container \"bef202d5ab4adb68897c4c8da62be022e95cd949fa91b3f80e3df449f0d72181\": container with ID starting with bef202d5ab4adb68897c4c8da62be022e95cd949fa91b3f80e3df449f0d72181 not found: ID does not exist" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.734967 4985 scope.go:117] "RemoveContainer" containerID="2a1f1e90a882fa697c344cb87db77d2f0b02b33920f6555ca549ad51b1fb5c8a" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.735252 4985 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a1f1e90a882fa697c344cb87db77d2f0b02b33920f6555ca549ad51b1fb5c8a"} err="failed to get container status \"2a1f1e90a882fa697c344cb87db77d2f0b02b33920f6555ca549ad51b1fb5c8a\": rpc error: code = NotFound desc = could not find container \"2a1f1e90a882fa697c344cb87db77d2f0b02b33920f6555ca549ad51b1fb5c8a\": container with ID starting with 2a1f1e90a882fa697c344cb87db77d2f0b02b33920f6555ca549ad51b1fb5c8a not found: ID does not exist" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.735347 4985 scope.go:117] "RemoveContainer" containerID="4b176f2daca0d26fc9e3d30a8547b5526c90221ce2932c01c876988e8606b4dd" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.735678 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b176f2daca0d26fc9e3d30a8547b5526c90221ce2932c01c876988e8606b4dd"} err="failed to get container status \"4b176f2daca0d26fc9e3d30a8547b5526c90221ce2932c01c876988e8606b4dd\": rpc error: code = NotFound desc = could not find container \"4b176f2daca0d26fc9e3d30a8547b5526c90221ce2932c01c876988e8606b4dd\": container with ID starting with 4b176f2daca0d26fc9e3d30a8547b5526c90221ce2932c01c876988e8606b4dd not found: ID does not exist" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.735707 4985 scope.go:117] "RemoveContainer" containerID="4dbb8096184a88c573060b14b7b5cac9a5abdc5af97ad7efdc50ba216e77522c" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.735964 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4dbb8096184a88c573060b14b7b5cac9a5abdc5af97ad7efdc50ba216e77522c"} err="failed to get container status \"4dbb8096184a88c573060b14b7b5cac9a5abdc5af97ad7efdc50ba216e77522c\": rpc error: code = NotFound desc = could not find container \"4dbb8096184a88c573060b14b7b5cac9a5abdc5af97ad7efdc50ba216e77522c\": container with ID starting with 
4dbb8096184a88c573060b14b7b5cac9a5abdc5af97ad7efdc50ba216e77522c not found: ID does not exist" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.736071 4985 scope.go:117] "RemoveContainer" containerID="e53f9d50e5099f9c55a0bf8a63c31bf7687c7b9df334251189af1c4c02df5dd6" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.736478 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e53f9d50e5099f9c55a0bf8a63c31bf7687c7b9df334251189af1c4c02df5dd6"} err="failed to get container status \"e53f9d50e5099f9c55a0bf8a63c31bf7687c7b9df334251189af1c4c02df5dd6\": rpc error: code = NotFound desc = could not find container \"e53f9d50e5099f9c55a0bf8a63c31bf7687c7b9df334251189af1c4c02df5dd6\": container with ID starting with e53f9d50e5099f9c55a0bf8a63c31bf7687c7b9df334251189af1c4c02df5dd6 not found: ID does not exist" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.736500 4985 scope.go:117] "RemoveContainer" containerID="740b7fd6437a51492236c929096639016d34d684d2742404a428355b3d9ee51e" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.736861 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"740b7fd6437a51492236c929096639016d34d684d2742404a428355b3d9ee51e"} err="failed to get container status \"740b7fd6437a51492236c929096639016d34d684d2742404a428355b3d9ee51e\": rpc error: code = NotFound desc = could not find container \"740b7fd6437a51492236c929096639016d34d684d2742404a428355b3d9ee51e\": container with ID starting with 740b7fd6437a51492236c929096639016d34d684d2742404a428355b3d9ee51e not found: ID does not exist" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.736894 4985 scope.go:117] "RemoveContainer" containerID="f164f612d1435a7f2a6677cb810bc836a44faa77952909e4290e31806ae631ff" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.737143 4985 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f164f612d1435a7f2a6677cb810bc836a44faa77952909e4290e31806ae631ff"} err="failed to get container status \"f164f612d1435a7f2a6677cb810bc836a44faa77952909e4290e31806ae631ff\": rpc error: code = NotFound desc = could not find container \"f164f612d1435a7f2a6677cb810bc836a44faa77952909e4290e31806ae631ff\": container with ID starting with f164f612d1435a7f2a6677cb810bc836a44faa77952909e4290e31806ae631ff not found: ID does not exist" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.737221 4985 scope.go:117] "RemoveContainer" containerID="1bfc598056438a398f1ea20e047af950aa776a7e43bdb694728c1c138dff6440" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.737497 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bfc598056438a398f1ea20e047af950aa776a7e43bdb694728c1c138dff6440"} err="failed to get container status \"1bfc598056438a398f1ea20e047af950aa776a7e43bdb694728c1c138dff6440\": rpc error: code = NotFound desc = could not find container \"1bfc598056438a398f1ea20e047af950aa776a7e43bdb694728c1c138dff6440\": container with ID starting with 1bfc598056438a398f1ea20e047af950aa776a7e43bdb694728c1c138dff6440 not found: ID does not exist" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.737563 4985 scope.go:117] "RemoveContainer" containerID="8c1c716f751dd0e139554e4bef4fc33694f4afa5482605f651326882af8d99c8" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.737916 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c1c716f751dd0e139554e4bef4fc33694f4afa5482605f651326882af8d99c8"} err="failed to get container status \"8c1c716f751dd0e139554e4bef4fc33694f4afa5482605f651326882af8d99c8\": rpc error: code = NotFound desc = could not find container \"8c1c716f751dd0e139554e4bef4fc33694f4afa5482605f651326882af8d99c8\": container with ID starting with 8c1c716f751dd0e139554e4bef4fc33694f4afa5482605f651326882af8d99c8 not found: ID does not 
exist" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.737942 4985 scope.go:117] "RemoveContainer" containerID="7571f3c4f07bff9f8b363bb8ec376bdfc098c2d355269649ea21bae387b5efcc" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.738217 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7571f3c4f07bff9f8b363bb8ec376bdfc098c2d355269649ea21bae387b5efcc"} err="failed to get container status \"7571f3c4f07bff9f8b363bb8ec376bdfc098c2d355269649ea21bae387b5efcc\": rpc error: code = NotFound desc = could not find container \"7571f3c4f07bff9f8b363bb8ec376bdfc098c2d355269649ea21bae387b5efcc\": container with ID starting with 7571f3c4f07bff9f8b363bb8ec376bdfc098c2d355269649ea21bae387b5efcc not found: ID does not exist" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.738239 4985 scope.go:117] "RemoveContainer" containerID="bef202d5ab4adb68897c4c8da62be022e95cd949fa91b3f80e3df449f0d72181" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.738467 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bef202d5ab4adb68897c4c8da62be022e95cd949fa91b3f80e3df449f0d72181"} err="failed to get container status \"bef202d5ab4adb68897c4c8da62be022e95cd949fa91b3f80e3df449f0d72181\": rpc error: code = NotFound desc = could not find container \"bef202d5ab4adb68897c4c8da62be022e95cd949fa91b3f80e3df449f0d72181\": container with ID starting with bef202d5ab4adb68897c4c8da62be022e95cd949fa91b3f80e3df449f0d72181 not found: ID does not exist" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.738587 4985 scope.go:117] "RemoveContainer" containerID="2a1f1e90a882fa697c344cb87db77d2f0b02b33920f6555ca549ad51b1fb5c8a" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.738878 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a1f1e90a882fa697c344cb87db77d2f0b02b33920f6555ca549ad51b1fb5c8a"} err="failed to get container status 
\"2a1f1e90a882fa697c344cb87db77d2f0b02b33920f6555ca549ad51b1fb5c8a\": rpc error: code = NotFound desc = could not find container \"2a1f1e90a882fa697c344cb87db77d2f0b02b33920f6555ca549ad51b1fb5c8a\": container with ID starting with 2a1f1e90a882fa697c344cb87db77d2f0b02b33920f6555ca549ad51b1fb5c8a not found: ID does not exist" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.738909 4985 scope.go:117] "RemoveContainer" containerID="4b176f2daca0d26fc9e3d30a8547b5526c90221ce2932c01c876988e8606b4dd" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.739138 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b176f2daca0d26fc9e3d30a8547b5526c90221ce2932c01c876988e8606b4dd"} err="failed to get container status \"4b176f2daca0d26fc9e3d30a8547b5526c90221ce2932c01c876988e8606b4dd\": rpc error: code = NotFound desc = could not find container \"4b176f2daca0d26fc9e3d30a8547b5526c90221ce2932c01c876988e8606b4dd\": container with ID starting with 4b176f2daca0d26fc9e3d30a8547b5526c90221ce2932c01c876988e8606b4dd not found: ID does not exist" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.739214 4985 scope.go:117] "RemoveContainer" containerID="4dbb8096184a88c573060b14b7b5cac9a5abdc5af97ad7efdc50ba216e77522c" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.739534 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4dbb8096184a88c573060b14b7b5cac9a5abdc5af97ad7efdc50ba216e77522c"} err="failed to get container status \"4dbb8096184a88c573060b14b7b5cac9a5abdc5af97ad7efdc50ba216e77522c\": rpc error: code = NotFound desc = could not find container \"4dbb8096184a88c573060b14b7b5cac9a5abdc5af97ad7efdc50ba216e77522c\": container with ID starting with 4dbb8096184a88c573060b14b7b5cac9a5abdc5af97ad7efdc50ba216e77522c not found: ID does not exist" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.739567 4985 scope.go:117] "RemoveContainer" 
containerID="e53f9d50e5099f9c55a0bf8a63c31bf7687c7b9df334251189af1c4c02df5dd6" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.739943 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e53f9d50e5099f9c55a0bf8a63c31bf7687c7b9df334251189af1c4c02df5dd6"} err="failed to get container status \"e53f9d50e5099f9c55a0bf8a63c31bf7687c7b9df334251189af1c4c02df5dd6\": rpc error: code = NotFound desc = could not find container \"e53f9d50e5099f9c55a0bf8a63c31bf7687c7b9df334251189af1c4c02df5dd6\": container with ID starting with e53f9d50e5099f9c55a0bf8a63c31bf7687c7b9df334251189af1c4c02df5dd6 not found: ID does not exist" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.740013 4985 scope.go:117] "RemoveContainer" containerID="740b7fd6437a51492236c929096639016d34d684d2742404a428355b3d9ee51e" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.740351 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"740b7fd6437a51492236c929096639016d34d684d2742404a428355b3d9ee51e"} err="failed to get container status \"740b7fd6437a51492236c929096639016d34d684d2742404a428355b3d9ee51e\": rpc error: code = NotFound desc = could not find container \"740b7fd6437a51492236c929096639016d34d684d2742404a428355b3d9ee51e\": container with ID starting with 740b7fd6437a51492236c929096639016d34d684d2742404a428355b3d9ee51e not found: ID does not exist" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.740376 4985 scope.go:117] "RemoveContainer" containerID="f164f612d1435a7f2a6677cb810bc836a44faa77952909e4290e31806ae631ff" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.740636 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f164f612d1435a7f2a6677cb810bc836a44faa77952909e4290e31806ae631ff"} err="failed to get container status \"f164f612d1435a7f2a6677cb810bc836a44faa77952909e4290e31806ae631ff\": rpc error: code = NotFound desc = could 
not find container \"f164f612d1435a7f2a6677cb810bc836a44faa77952909e4290e31806ae631ff\": container with ID starting with f164f612d1435a7f2a6677cb810bc836a44faa77952909e4290e31806ae631ff not found: ID does not exist" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.740655 4985 scope.go:117] "RemoveContainer" containerID="1bfc598056438a398f1ea20e047af950aa776a7e43bdb694728c1c138dff6440" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.740934 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bfc598056438a398f1ea20e047af950aa776a7e43bdb694728c1c138dff6440"} err="failed to get container status \"1bfc598056438a398f1ea20e047af950aa776a7e43bdb694728c1c138dff6440\": rpc error: code = NotFound desc = could not find container \"1bfc598056438a398f1ea20e047af950aa776a7e43bdb694728c1c138dff6440\": container with ID starting with 1bfc598056438a398f1ea20e047af950aa776a7e43bdb694728c1c138dff6440 not found: ID does not exist" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.741035 4985 scope.go:117] "RemoveContainer" containerID="8c1c716f751dd0e139554e4bef4fc33694f4afa5482605f651326882af8d99c8" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.741286 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c1c716f751dd0e139554e4bef4fc33694f4afa5482605f651326882af8d99c8"} err="failed to get container status \"8c1c716f751dd0e139554e4bef4fc33694f4afa5482605f651326882af8d99c8\": rpc error: code = NotFound desc = could not find container \"8c1c716f751dd0e139554e4bef4fc33694f4afa5482605f651326882af8d99c8\": container with ID starting with 8c1c716f751dd0e139554e4bef4fc33694f4afa5482605f651326882af8d99c8 not found: ID does not exist" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.741308 4985 scope.go:117] "RemoveContainer" containerID="7571f3c4f07bff9f8b363bb8ec376bdfc098c2d355269649ea21bae387b5efcc" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 
09:05:36.741564 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7571f3c4f07bff9f8b363bb8ec376bdfc098c2d355269649ea21bae387b5efcc"} err="failed to get container status \"7571f3c4f07bff9f8b363bb8ec376bdfc098c2d355269649ea21bae387b5efcc\": rpc error: code = NotFound desc = could not find container \"7571f3c4f07bff9f8b363bb8ec376bdfc098c2d355269649ea21bae387b5efcc\": container with ID starting with 7571f3c4f07bff9f8b363bb8ec376bdfc098c2d355269649ea21bae387b5efcc not found: ID does not exist" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.741635 4985 scope.go:117] "RemoveContainer" containerID="bef202d5ab4adb68897c4c8da62be022e95cd949fa91b3f80e3df449f0d72181" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.741964 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bef202d5ab4adb68897c4c8da62be022e95cd949fa91b3f80e3df449f0d72181"} err="failed to get container status \"bef202d5ab4adb68897c4c8da62be022e95cd949fa91b3f80e3df449f0d72181\": rpc error: code = NotFound desc = could not find container \"bef202d5ab4adb68897c4c8da62be022e95cd949fa91b3f80e3df449f0d72181\": container with ID starting with bef202d5ab4adb68897c4c8da62be022e95cd949fa91b3f80e3df449f0d72181 not found: ID does not exist" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.742004 4985 scope.go:117] "RemoveContainer" containerID="2a1f1e90a882fa697c344cb87db77d2f0b02b33920f6555ca549ad51b1fb5c8a" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.742380 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a1f1e90a882fa697c344cb87db77d2f0b02b33920f6555ca549ad51b1fb5c8a"} err="failed to get container status \"2a1f1e90a882fa697c344cb87db77d2f0b02b33920f6555ca549ad51b1fb5c8a\": rpc error: code = NotFound desc = could not find container \"2a1f1e90a882fa697c344cb87db77d2f0b02b33920f6555ca549ad51b1fb5c8a\": container with ID starting with 
2a1f1e90a882fa697c344cb87db77d2f0b02b33920f6555ca549ad51b1fb5c8a not found: ID does not exist" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.742457 4985 scope.go:117] "RemoveContainer" containerID="4b176f2daca0d26fc9e3d30a8547b5526c90221ce2932c01c876988e8606b4dd" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.742754 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b176f2daca0d26fc9e3d30a8547b5526c90221ce2932c01c876988e8606b4dd"} err="failed to get container status \"4b176f2daca0d26fc9e3d30a8547b5526c90221ce2932c01c876988e8606b4dd\": rpc error: code = NotFound desc = could not find container \"4b176f2daca0d26fc9e3d30a8547b5526c90221ce2932c01c876988e8606b4dd\": container with ID starting with 4b176f2daca0d26fc9e3d30a8547b5526c90221ce2932c01c876988e8606b4dd not found: ID does not exist" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.742795 4985 scope.go:117] "RemoveContainer" containerID="4dbb8096184a88c573060b14b7b5cac9a5abdc5af97ad7efdc50ba216e77522c" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.743065 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4dbb8096184a88c573060b14b7b5cac9a5abdc5af97ad7efdc50ba216e77522c"} err="failed to get container status \"4dbb8096184a88c573060b14b7b5cac9a5abdc5af97ad7efdc50ba216e77522c\": rpc error: code = NotFound desc = could not find container \"4dbb8096184a88c573060b14b7b5cac9a5abdc5af97ad7efdc50ba216e77522c\": container with ID starting with 4dbb8096184a88c573060b14b7b5cac9a5abdc5af97ad7efdc50ba216e77522c not found: ID does not exist" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.743166 4985 scope.go:117] "RemoveContainer" containerID="e53f9d50e5099f9c55a0bf8a63c31bf7687c7b9df334251189af1c4c02df5dd6" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.743501 4985 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e53f9d50e5099f9c55a0bf8a63c31bf7687c7b9df334251189af1c4c02df5dd6"} err="failed to get container status \"e53f9d50e5099f9c55a0bf8a63c31bf7687c7b9df334251189af1c4c02df5dd6\": rpc error: code = NotFound desc = could not find container \"e53f9d50e5099f9c55a0bf8a63c31bf7687c7b9df334251189af1c4c02df5dd6\": container with ID starting with e53f9d50e5099f9c55a0bf8a63c31bf7687c7b9df334251189af1c4c02df5dd6 not found: ID does not exist" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.743567 4985 scope.go:117] "RemoveContainer" containerID="740b7fd6437a51492236c929096639016d34d684d2742404a428355b3d9ee51e" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.743834 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"740b7fd6437a51492236c929096639016d34d684d2742404a428355b3d9ee51e"} err="failed to get container status \"740b7fd6437a51492236c929096639016d34d684d2742404a428355b3d9ee51e\": rpc error: code = NotFound desc = could not find container \"740b7fd6437a51492236c929096639016d34d684d2742404a428355b3d9ee51e\": container with ID starting with 740b7fd6437a51492236c929096639016d34d684d2742404a428355b3d9ee51e not found: ID does not exist" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.743902 4985 scope.go:117] "RemoveContainer" containerID="f164f612d1435a7f2a6677cb810bc836a44faa77952909e4290e31806ae631ff" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.744164 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f164f612d1435a7f2a6677cb810bc836a44faa77952909e4290e31806ae631ff"} err="failed to get container status \"f164f612d1435a7f2a6677cb810bc836a44faa77952909e4290e31806ae631ff\": rpc error: code = NotFound desc = could not find container \"f164f612d1435a7f2a6677cb810bc836a44faa77952909e4290e31806ae631ff\": container with ID starting with f164f612d1435a7f2a6677cb810bc836a44faa77952909e4290e31806ae631ff not found: ID does not 
exist" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.744190 4985 scope.go:117] "RemoveContainer" containerID="1bfc598056438a398f1ea20e047af950aa776a7e43bdb694728c1c138dff6440" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.744553 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bfc598056438a398f1ea20e047af950aa776a7e43bdb694728c1c138dff6440"} err="failed to get container status \"1bfc598056438a398f1ea20e047af950aa776a7e43bdb694728c1c138dff6440\": rpc error: code = NotFound desc = could not find container \"1bfc598056438a398f1ea20e047af950aa776a7e43bdb694728c1c138dff6440\": container with ID starting with 1bfc598056438a398f1ea20e047af950aa776a7e43bdb694728c1c138dff6440 not found: ID does not exist" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.744636 4985 scope.go:117] "RemoveContainer" containerID="8c1c716f751dd0e139554e4bef4fc33694f4afa5482605f651326882af8d99c8" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.744901 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c1c716f751dd0e139554e4bef4fc33694f4afa5482605f651326882af8d99c8"} err="failed to get container status \"8c1c716f751dd0e139554e4bef4fc33694f4afa5482605f651326882af8d99c8\": rpc error: code = NotFound desc = could not find container \"8c1c716f751dd0e139554e4bef4fc33694f4afa5482605f651326882af8d99c8\": container with ID starting with 8c1c716f751dd0e139554e4bef4fc33694f4afa5482605f651326882af8d99c8 not found: ID does not exist" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.744922 4985 scope.go:117] "RemoveContainer" containerID="7571f3c4f07bff9f8b363bb8ec376bdfc098c2d355269649ea21bae387b5efcc" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.745228 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7571f3c4f07bff9f8b363bb8ec376bdfc098c2d355269649ea21bae387b5efcc"} err="failed to get container status 
\"7571f3c4f07bff9f8b363bb8ec376bdfc098c2d355269649ea21bae387b5efcc\": rpc error: code = NotFound desc = could not find container \"7571f3c4f07bff9f8b363bb8ec376bdfc098c2d355269649ea21bae387b5efcc\": container with ID starting with 7571f3c4f07bff9f8b363bb8ec376bdfc098c2d355269649ea21bae387b5efcc not found: ID does not exist" Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.832751 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-kqdf4"] Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.842611 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-kqdf4"] Jan 27 09:05:36 crc kubenswrapper[4985]: I0127 09:05:36.849493 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-trnnq" Jan 27 09:05:36 crc kubenswrapper[4985]: W0127 09:05:36.872948 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod26abbe36_eaa4_44a9_b782_a65e4d266519.slice/crio-25963f580a8b12a141d72c4548470d01568a8bf37a8389b8f8bce71f2a516915 WatchSource:0}: Error finding container 25963f580a8b12a141d72c4548470d01568a8bf37a8389b8f8bce71f2a516915: Status 404 returned error can't find the container with id 25963f580a8b12a141d72c4548470d01568a8bf37a8389b8f8bce71f2a516915 Jan 27 09:05:37 crc kubenswrapper[4985]: I0127 09:05:37.518943 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-cqdrf_1ddda14a-730e-4c1f-afea-07c95221ba04/kube-multus/2.log" Jan 27 09:05:37 crc kubenswrapper[4985]: I0127 09:05:37.521975 4985 generic.go:334] "Generic (PLEG): container finished" podID="26abbe36-eaa4-44a9-b782-a65e4d266519" containerID="af1541c3e01aeec950a2af28ecfd3886463c0127b30eabb88d6a7e02fa21c725" exitCode=0 Jan 27 09:05:37 crc kubenswrapper[4985]: I0127 09:05:37.522048 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-trnnq" event={"ID":"26abbe36-eaa4-44a9-b782-a65e4d266519","Type":"ContainerDied","Data":"af1541c3e01aeec950a2af28ecfd3886463c0127b30eabb88d6a7e02fa21c725"} Jan 27 09:05:37 crc kubenswrapper[4985]: I0127 09:05:37.522100 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-trnnq" event={"ID":"26abbe36-eaa4-44a9-b782-a65e4d266519","Type":"ContainerStarted","Data":"25963f580a8b12a141d72c4548470d01568a8bf37a8389b8f8bce71f2a516915"} Jan 27 09:05:38 crc kubenswrapper[4985]: I0127 09:05:38.459089 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6239c91-d93d-4db8-ac4b-d44ddbc7c100" path="/var/lib/kubelet/pods/c6239c91-d93d-4db8-ac4b-d44ddbc7c100/volumes" Jan 27 09:05:38 crc kubenswrapper[4985]: I0127 09:05:38.531241 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-trnnq" event={"ID":"26abbe36-eaa4-44a9-b782-a65e4d266519","Type":"ContainerStarted","Data":"47a6acb08dd363b4e3e9a9fae89d92c49c063c4b55170031379488e2b53b95ef"} Jan 27 09:05:38 crc kubenswrapper[4985]: I0127 09:05:38.531308 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-trnnq" event={"ID":"26abbe36-eaa4-44a9-b782-a65e4d266519","Type":"ContainerStarted","Data":"fc6204dfc3dfeceab7dbd51ad30ae8e65d6e723379fb7ca519aac5d3da0fa77a"} Jan 27 09:05:38 crc kubenswrapper[4985]: I0127 09:05:38.531318 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-trnnq" event={"ID":"26abbe36-eaa4-44a9-b782-a65e4d266519","Type":"ContainerStarted","Data":"bb4fa91042fa3b7eb51684c852993508651ff138ded89f97c6d1866c11ba88b9"} Jan 27 09:05:38 crc kubenswrapper[4985]: I0127 09:05:38.531329 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-trnnq" 
event={"ID":"26abbe36-eaa4-44a9-b782-a65e4d266519","Type":"ContainerStarted","Data":"5fa04734f291d76fc41274b893d204d535b97a0683f37ab4016b758470daf827"} Jan 27 09:05:38 crc kubenswrapper[4985]: I0127 09:05:38.531339 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-trnnq" event={"ID":"26abbe36-eaa4-44a9-b782-a65e4d266519","Type":"ContainerStarted","Data":"b0929652aec248a3c82a291fb39f42e4ff7e82f330962563932cb77e0d860dfd"} Jan 27 09:05:38 crc kubenswrapper[4985]: I0127 09:05:38.531349 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-trnnq" event={"ID":"26abbe36-eaa4-44a9-b782-a65e4d266519","Type":"ContainerStarted","Data":"73a47a7474ca7def72b249f62621f16889cda18a5c2a3c5763e9e5e3b56e3e08"} Jan 27 09:05:41 crc kubenswrapper[4985]: I0127 09:05:41.558582 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-trnnq" event={"ID":"26abbe36-eaa4-44a9-b782-a65e4d266519","Type":"ContainerStarted","Data":"eb313990af7ff4f4e73700be0aa8a0ae27f70aabafcf3795b114c5445e35349f"} Jan 27 09:05:43 crc kubenswrapper[4985]: I0127 09:05:43.573720 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-trnnq" event={"ID":"26abbe36-eaa4-44a9-b782-a65e4d266519","Type":"ContainerStarted","Data":"9c2df1c459e83441ddd79f9b4cda7b8f1cecdbfd3059f2c1dc09ad13980a222d"} Jan 27 09:05:43 crc kubenswrapper[4985]: I0127 09:05:43.574919 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-trnnq" Jan 27 09:05:43 crc kubenswrapper[4985]: I0127 09:05:43.574945 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-trnnq" Jan 27 09:05:43 crc kubenswrapper[4985]: I0127 09:05:43.574962 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-trnnq" Jan 27 09:05:43 crc 
kubenswrapper[4985]: I0127 09:05:43.603866 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-trnnq" Jan 27 09:05:43 crc kubenswrapper[4985]: I0127 09:05:43.610451 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-trnnq" Jan 27 09:05:43 crc kubenswrapper[4985]: I0127 09:05:43.611304 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-trnnq" podStartSLOduration=7.611277787 podStartE2EDuration="7.611277787s" podCreationTimestamp="2026-01-27 09:05:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:05:43.611159383 +0000 UTC m=+727.902254234" watchObservedRunningTime="2026-01-27 09:05:43.611277787 +0000 UTC m=+727.902372638" Jan 27 09:05:50 crc kubenswrapper[4985]: I0127 09:05:50.451806 4985 scope.go:117] "RemoveContainer" containerID="2c6cceff4e44e436e1673ebf66431dd57c0d8f5b1ddc8c7a757ef3148da0526a" Jan 27 09:05:51 crc kubenswrapper[4985]: I0127 09:05:51.627935 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-cqdrf_1ddda14a-730e-4c1f-afea-07c95221ba04/kube-multus/2.log" Jan 27 09:05:51 crc kubenswrapper[4985]: I0127 09:05:51.628257 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-cqdrf" event={"ID":"1ddda14a-730e-4c1f-afea-07c95221ba04","Type":"ContainerStarted","Data":"18c2cdd7f53eb988ec878d4afb8e9b29eadb90b3130028e529635db3be8b2db0"} Jan 27 09:06:06 crc kubenswrapper[4985]: I0127 09:06:06.729140 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pzqfw"] Jan 27 09:06:06 crc kubenswrapper[4985]: I0127 09:06:06.730782 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pzqfw" Jan 27 09:06:06 crc kubenswrapper[4985]: I0127 09:06:06.734380 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 27 09:06:06 crc kubenswrapper[4985]: I0127 09:06:06.740211 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pzqfw"] Jan 27 09:06:06 crc kubenswrapper[4985]: I0127 09:06:06.871929 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-trnnq" Jan 27 09:06:06 crc kubenswrapper[4985]: I0127 09:06:06.897020 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/efbd0472-2548-4ceb-8f40-a8586eb223e2-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pzqfw\" (UID: \"efbd0472-2548-4ceb-8f40-a8586eb223e2\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pzqfw" Jan 27 09:06:06 crc kubenswrapper[4985]: I0127 09:06:06.897085 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzzc5\" (UniqueName: \"kubernetes.io/projected/efbd0472-2548-4ceb-8f40-a8586eb223e2-kube-api-access-vzzc5\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pzqfw\" (UID: \"efbd0472-2548-4ceb-8f40-a8586eb223e2\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pzqfw" Jan 27 09:06:06 crc kubenswrapper[4985]: I0127 09:06:06.897133 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/efbd0472-2548-4ceb-8f40-a8586eb223e2-util\") pod 
\"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pzqfw\" (UID: \"efbd0472-2548-4ceb-8f40-a8586eb223e2\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pzqfw" Jan 27 09:06:06 crc kubenswrapper[4985]: I0127 09:06:06.999266 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/efbd0472-2548-4ceb-8f40-a8586eb223e2-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pzqfw\" (UID: \"efbd0472-2548-4ceb-8f40-a8586eb223e2\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pzqfw" Jan 27 09:06:06 crc kubenswrapper[4985]: I0127 09:06:06.999875 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/efbd0472-2548-4ceb-8f40-a8586eb223e2-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pzqfw\" (UID: \"efbd0472-2548-4ceb-8f40-a8586eb223e2\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pzqfw" Jan 27 09:06:07 crc kubenswrapper[4985]: I0127 09:06:07.000036 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzzc5\" (UniqueName: \"kubernetes.io/projected/efbd0472-2548-4ceb-8f40-a8586eb223e2-kube-api-access-vzzc5\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pzqfw\" (UID: \"efbd0472-2548-4ceb-8f40-a8586eb223e2\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pzqfw" Jan 27 09:06:07 crc kubenswrapper[4985]: I0127 09:06:07.000247 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/efbd0472-2548-4ceb-8f40-a8586eb223e2-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pzqfw\" (UID: \"efbd0472-2548-4ceb-8f40-a8586eb223e2\") " 
pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pzqfw" Jan 27 09:06:07 crc kubenswrapper[4985]: I0127 09:06:06.999886 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/efbd0472-2548-4ceb-8f40-a8586eb223e2-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pzqfw\" (UID: \"efbd0472-2548-4ceb-8f40-a8586eb223e2\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pzqfw" Jan 27 09:06:07 crc kubenswrapper[4985]: I0127 09:06:07.021041 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzzc5\" (UniqueName: \"kubernetes.io/projected/efbd0472-2548-4ceb-8f40-a8586eb223e2-kube-api-access-vzzc5\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pzqfw\" (UID: \"efbd0472-2548-4ceb-8f40-a8586eb223e2\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pzqfw" Jan 27 09:06:07 crc kubenswrapper[4985]: I0127 09:06:07.052141 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pzqfw" Jan 27 09:06:07 crc kubenswrapper[4985]: I0127 09:06:07.259979 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pzqfw"] Jan 27 09:06:07 crc kubenswrapper[4985]: I0127 09:06:07.715344 4985 generic.go:334] "Generic (PLEG): container finished" podID="efbd0472-2548-4ceb-8f40-a8586eb223e2" containerID="b8ec8bdbfc63d6371732dc537979df0108034ee88a8a22782313f03ada89dab8" exitCode=0 Jan 27 09:06:07 crc kubenswrapper[4985]: I0127 09:06:07.715478 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pzqfw" event={"ID":"efbd0472-2548-4ceb-8f40-a8586eb223e2","Type":"ContainerDied","Data":"b8ec8bdbfc63d6371732dc537979df0108034ee88a8a22782313f03ada89dab8"} Jan 27 09:06:07 crc kubenswrapper[4985]: I0127 09:06:07.715577 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pzqfw" event={"ID":"efbd0472-2548-4ceb-8f40-a8586eb223e2","Type":"ContainerStarted","Data":"0cf2aa3633547515d88577c5270ba4a88efd16aad1e6d4df2cb3570cf024cf89"} Jan 27 09:06:09 crc kubenswrapper[4985]: I0127 09:06:09.730758 4985 generic.go:334] "Generic (PLEG): container finished" podID="efbd0472-2548-4ceb-8f40-a8586eb223e2" containerID="bc37038e94d0a7428e07088f29fe6fc29c642d23f91a284798b3398ba4379dbe" exitCode=0 Jan 27 09:06:09 crc kubenswrapper[4985]: I0127 09:06:09.730872 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pzqfw" event={"ID":"efbd0472-2548-4ceb-8f40-a8586eb223e2","Type":"ContainerDied","Data":"bc37038e94d0a7428e07088f29fe6fc29c642d23f91a284798b3398ba4379dbe"} Jan 27 09:06:10 crc kubenswrapper[4985]: I0127 09:06:10.743425 4985 
generic.go:334] "Generic (PLEG): container finished" podID="efbd0472-2548-4ceb-8f40-a8586eb223e2" containerID="99b25baa1d05afbbcf3f4a17afe088a6b089ae1148ba6bd8cfb791d4c0fe94e6" exitCode=0 Jan 27 09:06:10 crc kubenswrapper[4985]: I0127 09:06:10.743503 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pzqfw" event={"ID":"efbd0472-2548-4ceb-8f40-a8586eb223e2","Type":"ContainerDied","Data":"99b25baa1d05afbbcf3f4a17afe088a6b089ae1148ba6bd8cfb791d4c0fe94e6"} Jan 27 09:06:11 crc kubenswrapper[4985]: I0127 09:06:11.978141 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pzqfw" Jan 27 09:06:12 crc kubenswrapper[4985]: I0127 09:06:12.072815 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzzc5\" (UniqueName: \"kubernetes.io/projected/efbd0472-2548-4ceb-8f40-a8586eb223e2-kube-api-access-vzzc5\") pod \"efbd0472-2548-4ceb-8f40-a8586eb223e2\" (UID: \"efbd0472-2548-4ceb-8f40-a8586eb223e2\") " Jan 27 09:06:12 crc kubenswrapper[4985]: I0127 09:06:12.072956 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/efbd0472-2548-4ceb-8f40-a8586eb223e2-util\") pod \"efbd0472-2548-4ceb-8f40-a8586eb223e2\" (UID: \"efbd0472-2548-4ceb-8f40-a8586eb223e2\") " Jan 27 09:06:12 crc kubenswrapper[4985]: I0127 09:06:12.073049 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/efbd0472-2548-4ceb-8f40-a8586eb223e2-bundle\") pod \"efbd0472-2548-4ceb-8f40-a8586eb223e2\" (UID: \"efbd0472-2548-4ceb-8f40-a8586eb223e2\") " Jan 27 09:06:12 crc kubenswrapper[4985]: I0127 09:06:12.074282 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/efbd0472-2548-4ceb-8f40-a8586eb223e2-bundle" (OuterVolumeSpecName: "bundle") pod "efbd0472-2548-4ceb-8f40-a8586eb223e2" (UID: "efbd0472-2548-4ceb-8f40-a8586eb223e2"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:06:12 crc kubenswrapper[4985]: I0127 09:06:12.087935 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efbd0472-2548-4ceb-8f40-a8586eb223e2-kube-api-access-vzzc5" (OuterVolumeSpecName: "kube-api-access-vzzc5") pod "efbd0472-2548-4ceb-8f40-a8586eb223e2" (UID: "efbd0472-2548-4ceb-8f40-a8586eb223e2"). InnerVolumeSpecName "kube-api-access-vzzc5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:06:12 crc kubenswrapper[4985]: I0127 09:06:12.089967 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/efbd0472-2548-4ceb-8f40-a8586eb223e2-util" (OuterVolumeSpecName: "util") pod "efbd0472-2548-4ceb-8f40-a8586eb223e2" (UID: "efbd0472-2548-4ceb-8f40-a8586eb223e2"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:06:12 crc kubenswrapper[4985]: I0127 09:06:12.176011 4985 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/efbd0472-2548-4ceb-8f40-a8586eb223e2-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:06:12 crc kubenswrapper[4985]: I0127 09:06:12.176589 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vzzc5\" (UniqueName: \"kubernetes.io/projected/efbd0472-2548-4ceb-8f40-a8586eb223e2-kube-api-access-vzzc5\") on node \"crc\" DevicePath \"\"" Jan 27 09:06:12 crc kubenswrapper[4985]: I0127 09:06:12.176611 4985 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/efbd0472-2548-4ceb-8f40-a8586eb223e2-util\") on node \"crc\" DevicePath \"\"" Jan 27 09:06:12 crc kubenswrapper[4985]: I0127 09:06:12.759263 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pzqfw" Jan 27 09:06:12 crc kubenswrapper[4985]: I0127 09:06:12.759247 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pzqfw" event={"ID":"efbd0472-2548-4ceb-8f40-a8586eb223e2","Type":"ContainerDied","Data":"0cf2aa3633547515d88577c5270ba4a88efd16aad1e6d4df2cb3570cf024cf89"} Jan 27 09:06:12 crc kubenswrapper[4985]: I0127 09:06:12.759334 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0cf2aa3633547515d88577c5270ba4a88efd16aad1e6d4df2cb3570cf024cf89" Jan 27 09:06:14 crc kubenswrapper[4985]: I0127 09:06:14.239346 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-cxv4p"] Jan 27 09:06:14 crc kubenswrapper[4985]: E0127 09:06:14.239583 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efbd0472-2548-4ceb-8f40-a8586eb223e2" 
containerName="util" Jan 27 09:06:14 crc kubenswrapper[4985]: I0127 09:06:14.239597 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="efbd0472-2548-4ceb-8f40-a8586eb223e2" containerName="util" Jan 27 09:06:14 crc kubenswrapper[4985]: E0127 09:06:14.239607 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efbd0472-2548-4ceb-8f40-a8586eb223e2" containerName="pull" Jan 27 09:06:14 crc kubenswrapper[4985]: I0127 09:06:14.239612 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="efbd0472-2548-4ceb-8f40-a8586eb223e2" containerName="pull" Jan 27 09:06:14 crc kubenswrapper[4985]: E0127 09:06:14.239625 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efbd0472-2548-4ceb-8f40-a8586eb223e2" containerName="extract" Jan 27 09:06:14 crc kubenswrapper[4985]: I0127 09:06:14.239631 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="efbd0472-2548-4ceb-8f40-a8586eb223e2" containerName="extract" Jan 27 09:06:14 crc kubenswrapper[4985]: I0127 09:06:14.239711 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="efbd0472-2548-4ceb-8f40-a8586eb223e2" containerName="extract" Jan 27 09:06:14 crc kubenswrapper[4985]: I0127 09:06:14.240131 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-cxv4p" Jan 27 09:06:14 crc kubenswrapper[4985]: I0127 09:06:14.242312 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 27 09:06:14 crc kubenswrapper[4985]: I0127 09:06:14.242599 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 27 09:06:14 crc kubenswrapper[4985]: I0127 09:06:14.242885 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-xgmtv" Jan 27 09:06:14 crc kubenswrapper[4985]: I0127 09:06:14.262055 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-cxv4p"] Jan 27 09:06:14 crc kubenswrapper[4985]: I0127 09:06:14.410078 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jfvw\" (UniqueName: \"kubernetes.io/projected/d85df060-af54-4914-9ae7-dd1d6e0a66f1-kube-api-access-4jfvw\") pod \"nmstate-operator-646758c888-cxv4p\" (UID: \"d85df060-af54-4914-9ae7-dd1d6e0a66f1\") " pod="openshift-nmstate/nmstate-operator-646758c888-cxv4p" Jan 27 09:06:14 crc kubenswrapper[4985]: I0127 09:06:14.511782 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jfvw\" (UniqueName: \"kubernetes.io/projected/d85df060-af54-4914-9ae7-dd1d6e0a66f1-kube-api-access-4jfvw\") pod \"nmstate-operator-646758c888-cxv4p\" (UID: \"d85df060-af54-4914-9ae7-dd1d6e0a66f1\") " pod="openshift-nmstate/nmstate-operator-646758c888-cxv4p" Jan 27 09:06:14 crc kubenswrapper[4985]: I0127 09:06:14.536446 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jfvw\" (UniqueName: \"kubernetes.io/projected/d85df060-af54-4914-9ae7-dd1d6e0a66f1-kube-api-access-4jfvw\") pod \"nmstate-operator-646758c888-cxv4p\" (UID: 
\"d85df060-af54-4914-9ae7-dd1d6e0a66f1\") " pod="openshift-nmstate/nmstate-operator-646758c888-cxv4p" Jan 27 09:06:14 crc kubenswrapper[4985]: I0127 09:06:14.555798 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-cxv4p" Jan 27 09:06:14 crc kubenswrapper[4985]: I0127 09:06:14.791465 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-cxv4p"] Jan 27 09:06:15 crc kubenswrapper[4985]: I0127 09:06:15.781658 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-cxv4p" event={"ID":"d85df060-af54-4914-9ae7-dd1d6e0a66f1","Type":"ContainerStarted","Data":"4da5fa68e482cf1c08f78d4df6048bc88e6287fd29463d8d508822d651fbb3c8"} Jan 27 09:06:17 crc kubenswrapper[4985]: I0127 09:06:17.798305 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-cxv4p" event={"ID":"d85df060-af54-4914-9ae7-dd1d6e0a66f1","Type":"ContainerStarted","Data":"c230078d027dd5e33f86b8fd427174f2be95ba76efb0299c00e09b507d708889"} Jan 27 09:06:17 crc kubenswrapper[4985]: I0127 09:06:17.822462 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-cxv4p" podStartSLOduration=1.760638471 podStartE2EDuration="3.822432783s" podCreationTimestamp="2026-01-27 09:06:14 +0000 UTC" firstStartedPulling="2026-01-27 09:06:14.802280672 +0000 UTC m=+759.093375513" lastFinishedPulling="2026-01-27 09:06:16.864074984 +0000 UTC m=+761.155169825" observedRunningTime="2026-01-27 09:06:17.818422042 +0000 UTC m=+762.109516903" watchObservedRunningTime="2026-01-27 09:06:17.822432783 +0000 UTC m=+762.113527624" Jan 27 09:06:18 crc kubenswrapper[4985]: I0127 09:06:18.920843 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-kzgx2"] Jan 27 09:06:18 crc kubenswrapper[4985]: I0127 
09:06:18.922473 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-kzgx2" Jan 27 09:06:18 crc kubenswrapper[4985]: I0127 09:06:18.925952 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-6wgqz" Jan 27 09:06:18 crc kubenswrapper[4985]: I0127 09:06:18.943821 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-rs7zj"] Jan 27 09:06:18 crc kubenswrapper[4985]: I0127 09:06:18.945045 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rs7zj" Jan 27 09:06:18 crc kubenswrapper[4985]: I0127 09:06:18.948339 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 27 09:06:18 crc kubenswrapper[4985]: I0127 09:06:18.953818 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-kzgx2"] Jan 27 09:06:18 crc kubenswrapper[4985]: I0127 09:06:18.959806 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-rs7zj"] Jan 27 09:06:18 crc kubenswrapper[4985]: I0127 09:06:18.974612 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-8thh5"] Jan 27 09:06:18 crc kubenswrapper[4985]: I0127 09:06:18.975298 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-8thh5" Jan 27 09:06:19 crc kubenswrapper[4985]: I0127 09:06:19.085426 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/19081401-9a2f-458b-a902-d15f4f915e1c-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-rs7zj\" (UID: \"19081401-9a2f-458b-a902-d15f4f915e1c\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rs7zj" Jan 27 09:06:19 crc kubenswrapper[4985]: I0127 09:06:19.085549 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82dsl\" (UniqueName: \"kubernetes.io/projected/c154b891-72be-4911-93e9-3abe7346c05a-kube-api-access-82dsl\") pod \"nmstate-handler-8thh5\" (UID: \"c154b891-72be-4911-93e9-3abe7346c05a\") " pod="openshift-nmstate/nmstate-handler-8thh5" Jan 27 09:06:19 crc kubenswrapper[4985]: I0127 09:06:19.085618 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/c154b891-72be-4911-93e9-3abe7346c05a-nmstate-lock\") pod \"nmstate-handler-8thh5\" (UID: \"c154b891-72be-4911-93e9-3abe7346c05a\") " pod="openshift-nmstate/nmstate-handler-8thh5" Jan 27 09:06:19 crc kubenswrapper[4985]: I0127 09:06:19.085714 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/c154b891-72be-4911-93e9-3abe7346c05a-dbus-socket\") pod \"nmstate-handler-8thh5\" (UID: \"c154b891-72be-4911-93e9-3abe7346c05a\") " pod="openshift-nmstate/nmstate-handler-8thh5" Jan 27 09:06:19 crc kubenswrapper[4985]: I0127 09:06:19.085738 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pj7lp\" (UniqueName: \"kubernetes.io/projected/b65e2660-2543-4137-b394-e0a2b19a17c8-kube-api-access-pj7lp\") pod 
\"nmstate-metrics-54757c584b-kzgx2\" (UID: \"b65e2660-2543-4137-b394-e0a2b19a17c8\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-kzgx2" Jan 27 09:06:19 crc kubenswrapper[4985]: I0127 09:06:19.085776 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgd78\" (UniqueName: \"kubernetes.io/projected/19081401-9a2f-458b-a902-d15f4f915e1c-kube-api-access-sgd78\") pod \"nmstate-webhook-8474b5b9d8-rs7zj\" (UID: \"19081401-9a2f-458b-a902-d15f4f915e1c\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rs7zj" Jan 27 09:06:19 crc kubenswrapper[4985]: I0127 09:06:19.085798 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/c154b891-72be-4911-93e9-3abe7346c05a-ovs-socket\") pod \"nmstate-handler-8thh5\" (UID: \"c154b891-72be-4911-93e9-3abe7346c05a\") " pod="openshift-nmstate/nmstate-handler-8thh5" Jan 27 09:06:19 crc kubenswrapper[4985]: I0127 09:06:19.086103 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-fgkxr"] Jan 27 09:06:19 crc kubenswrapper[4985]: I0127 09:06:19.086747 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-fgkxr" Jan 27 09:06:19 crc kubenswrapper[4985]: I0127 09:06:19.095445 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 27 09:06:19 crc kubenswrapper[4985]: I0127 09:06:19.095728 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 27 09:06:19 crc kubenswrapper[4985]: I0127 09:06:19.095859 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-vsrn8" Jan 27 09:06:19 crc kubenswrapper[4985]: I0127 09:06:19.104438 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-fgkxr"] Jan 27 09:06:19 crc kubenswrapper[4985]: I0127 09:06:19.187380 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/c154b891-72be-4911-93e9-3abe7346c05a-dbus-socket\") pod \"nmstate-handler-8thh5\" (UID: \"c154b891-72be-4911-93e9-3abe7346c05a\") " pod="openshift-nmstate/nmstate-handler-8thh5" Jan 27 09:06:19 crc kubenswrapper[4985]: I0127 09:06:19.187494 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pj7lp\" (UniqueName: \"kubernetes.io/projected/b65e2660-2543-4137-b394-e0a2b19a17c8-kube-api-access-pj7lp\") pod \"nmstate-metrics-54757c584b-kzgx2\" (UID: \"b65e2660-2543-4137-b394-e0a2b19a17c8\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-kzgx2" Jan 27 09:06:19 crc kubenswrapper[4985]: I0127 09:06:19.187547 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgd78\" (UniqueName: \"kubernetes.io/projected/19081401-9a2f-458b-a902-d15f4f915e1c-kube-api-access-sgd78\") pod \"nmstate-webhook-8474b5b9d8-rs7zj\" (UID: \"19081401-9a2f-458b-a902-d15f4f915e1c\") " 
pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rs7zj" Jan 27 09:06:19 crc kubenswrapper[4985]: I0127 09:06:19.187572 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/c154b891-72be-4911-93e9-3abe7346c05a-ovs-socket\") pod \"nmstate-handler-8thh5\" (UID: \"c154b891-72be-4911-93e9-3abe7346c05a\") " pod="openshift-nmstate/nmstate-handler-8thh5" Jan 27 09:06:19 crc kubenswrapper[4985]: I0127 09:06:19.187629 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/effff8aa-ef99-4976-b4fc-da8a1d1ab03f-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-fgkxr\" (UID: \"effff8aa-ef99-4976-b4fc-da8a1d1ab03f\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-fgkxr" Jan 27 09:06:19 crc kubenswrapper[4985]: I0127 09:06:19.187664 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrdv9\" (UniqueName: \"kubernetes.io/projected/effff8aa-ef99-4976-b4fc-da8a1d1ab03f-kube-api-access-lrdv9\") pod \"nmstate-console-plugin-7754f76f8b-fgkxr\" (UID: \"effff8aa-ef99-4976-b4fc-da8a1d1ab03f\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-fgkxr" Jan 27 09:06:19 crc kubenswrapper[4985]: I0127 09:06:19.187738 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/19081401-9a2f-458b-a902-d15f4f915e1c-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-rs7zj\" (UID: \"19081401-9a2f-458b-a902-d15f4f915e1c\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rs7zj" Jan 27 09:06:19 crc kubenswrapper[4985]: I0127 09:06:19.187753 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/c154b891-72be-4911-93e9-3abe7346c05a-ovs-socket\") pod \"nmstate-handler-8thh5\" (UID: 
\"c154b891-72be-4911-93e9-3abe7346c05a\") " pod="openshift-nmstate/nmstate-handler-8thh5" Jan 27 09:06:19 crc kubenswrapper[4985]: I0127 09:06:19.187862 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/c154b891-72be-4911-93e9-3abe7346c05a-dbus-socket\") pod \"nmstate-handler-8thh5\" (UID: \"c154b891-72be-4911-93e9-3abe7346c05a\") " pod="openshift-nmstate/nmstate-handler-8thh5" Jan 27 09:06:19 crc kubenswrapper[4985]: I0127 09:06:19.187889 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82dsl\" (UniqueName: \"kubernetes.io/projected/c154b891-72be-4911-93e9-3abe7346c05a-kube-api-access-82dsl\") pod \"nmstate-handler-8thh5\" (UID: \"c154b891-72be-4911-93e9-3abe7346c05a\") " pod="openshift-nmstate/nmstate-handler-8thh5" Jan 27 09:06:19 crc kubenswrapper[4985]: I0127 09:06:19.188090 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/c154b891-72be-4911-93e9-3abe7346c05a-nmstate-lock\") pod \"nmstate-handler-8thh5\" (UID: \"c154b891-72be-4911-93e9-3abe7346c05a\") " pod="openshift-nmstate/nmstate-handler-8thh5" Jan 27 09:06:19 crc kubenswrapper[4985]: I0127 09:06:19.188142 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/effff8aa-ef99-4976-b4fc-da8a1d1ab03f-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-fgkxr\" (UID: \"effff8aa-ef99-4976-b4fc-da8a1d1ab03f\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-fgkxr" Jan 27 09:06:19 crc kubenswrapper[4985]: I0127 09:06:19.188171 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/c154b891-72be-4911-93e9-3abe7346c05a-nmstate-lock\") pod \"nmstate-handler-8thh5\" (UID: 
\"c154b891-72be-4911-93e9-3abe7346c05a\") " pod="openshift-nmstate/nmstate-handler-8thh5" Jan 27 09:06:19 crc kubenswrapper[4985]: I0127 09:06:19.199702 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/19081401-9a2f-458b-a902-d15f4f915e1c-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-rs7zj\" (UID: \"19081401-9a2f-458b-a902-d15f4f915e1c\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rs7zj" Jan 27 09:06:19 crc kubenswrapper[4985]: I0127 09:06:19.208219 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgd78\" (UniqueName: \"kubernetes.io/projected/19081401-9a2f-458b-a902-d15f4f915e1c-kube-api-access-sgd78\") pod \"nmstate-webhook-8474b5b9d8-rs7zj\" (UID: \"19081401-9a2f-458b-a902-d15f4f915e1c\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rs7zj" Jan 27 09:06:19 crc kubenswrapper[4985]: I0127 09:06:19.219988 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pj7lp\" (UniqueName: \"kubernetes.io/projected/b65e2660-2543-4137-b394-e0a2b19a17c8-kube-api-access-pj7lp\") pod \"nmstate-metrics-54757c584b-kzgx2\" (UID: \"b65e2660-2543-4137-b394-e0a2b19a17c8\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-kzgx2" Jan 27 09:06:19 crc kubenswrapper[4985]: I0127 09:06:19.224773 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82dsl\" (UniqueName: \"kubernetes.io/projected/c154b891-72be-4911-93e9-3abe7346c05a-kube-api-access-82dsl\") pod \"nmstate-handler-8thh5\" (UID: \"c154b891-72be-4911-93e9-3abe7346c05a\") " pod="openshift-nmstate/nmstate-handler-8thh5" Jan 27 09:06:19 crc kubenswrapper[4985]: I0127 09:06:19.242030 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-kzgx2" Jan 27 09:06:19 crc kubenswrapper[4985]: I0127 09:06:19.268741 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rs7zj" Jan 27 09:06:19 crc kubenswrapper[4985]: I0127 09:06:19.290249 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/effff8aa-ef99-4976-b4fc-da8a1d1ab03f-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-fgkxr\" (UID: \"effff8aa-ef99-4976-b4fc-da8a1d1ab03f\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-fgkxr" Jan 27 09:06:19 crc kubenswrapper[4985]: E0127 09:06:19.290484 4985 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Jan 27 09:06:19 crc kubenswrapper[4985]: I0127 09:06:19.290447 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/effff8aa-ef99-4976-b4fc-da8a1d1ab03f-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-fgkxr\" (UID: \"effff8aa-ef99-4976-b4fc-da8a1d1ab03f\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-fgkxr" Jan 27 09:06:19 crc kubenswrapper[4985]: E0127 09:06:19.290601 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/effff8aa-ef99-4976-b4fc-da8a1d1ab03f-plugin-serving-cert podName:effff8aa-ef99-4976-b4fc-da8a1d1ab03f nodeName:}" failed. No retries permitted until 2026-01-27 09:06:19.790573044 +0000 UTC m=+764.081668055 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/effff8aa-ef99-4976-b4fc-da8a1d1ab03f-plugin-serving-cert") pod "nmstate-console-plugin-7754f76f8b-fgkxr" (UID: "effff8aa-ef99-4976-b4fc-da8a1d1ab03f") : secret "plugin-serving-cert" not found Jan 27 09:06:19 crc kubenswrapper[4985]: I0127 09:06:19.290636 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrdv9\" (UniqueName: \"kubernetes.io/projected/effff8aa-ef99-4976-b4fc-da8a1d1ab03f-kube-api-access-lrdv9\") pod \"nmstate-console-plugin-7754f76f8b-fgkxr\" (UID: \"effff8aa-ef99-4976-b4fc-da8a1d1ab03f\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-fgkxr" Jan 27 09:06:19 crc kubenswrapper[4985]: I0127 09:06:19.291584 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/effff8aa-ef99-4976-b4fc-da8a1d1ab03f-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-fgkxr\" (UID: \"effff8aa-ef99-4976-b4fc-da8a1d1ab03f\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-fgkxr" Jan 27 09:06:19 crc kubenswrapper[4985]: I0127 09:06:19.304066 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-8thh5" Jan 27 09:06:19 crc kubenswrapper[4985]: I0127 09:06:19.322312 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrdv9\" (UniqueName: \"kubernetes.io/projected/effff8aa-ef99-4976-b4fc-da8a1d1ab03f-kube-api-access-lrdv9\") pod \"nmstate-console-plugin-7754f76f8b-fgkxr\" (UID: \"effff8aa-ef99-4976-b4fc-da8a1d1ab03f\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-fgkxr" Jan 27 09:06:19 crc kubenswrapper[4985]: I0127 09:06:19.329236 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7cbd496d8b-66k4j"] Jan 27 09:06:19 crc kubenswrapper[4985]: I0127 09:06:19.335895 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7cbd496d8b-66k4j" Jan 27 09:06:19 crc kubenswrapper[4985]: I0127 09:06:19.393394 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7cbd496d8b-66k4j"] Jan 27 09:06:19 crc kubenswrapper[4985]: I0127 09:06:19.493875 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7da9f2ca-5e84-43a2-bd7b-0df7b7e56375-console-config\") pod \"console-7cbd496d8b-66k4j\" (UID: \"7da9f2ca-5e84-43a2-bd7b-0df7b7e56375\") " pod="openshift-console/console-7cbd496d8b-66k4j" Jan 27 09:06:19 crc kubenswrapper[4985]: I0127 09:06:19.493957 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7da9f2ca-5e84-43a2-bd7b-0df7b7e56375-console-oauth-config\") pod \"console-7cbd496d8b-66k4j\" (UID: \"7da9f2ca-5e84-43a2-bd7b-0df7b7e56375\") " pod="openshift-console/console-7cbd496d8b-66k4j" Jan 27 09:06:19 crc kubenswrapper[4985]: I0127 09:06:19.493999 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7da9f2ca-5e84-43a2-bd7b-0df7b7e56375-trusted-ca-bundle\") pod \"console-7cbd496d8b-66k4j\" (UID: \"7da9f2ca-5e84-43a2-bd7b-0df7b7e56375\") " pod="openshift-console/console-7cbd496d8b-66k4j" Jan 27 09:06:19 crc kubenswrapper[4985]: I0127 09:06:19.494036 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74dxm\" (UniqueName: \"kubernetes.io/projected/7da9f2ca-5e84-43a2-bd7b-0df7b7e56375-kube-api-access-74dxm\") pod \"console-7cbd496d8b-66k4j\" (UID: \"7da9f2ca-5e84-43a2-bd7b-0df7b7e56375\") " pod="openshift-console/console-7cbd496d8b-66k4j" Jan 27 09:06:19 crc kubenswrapper[4985]: I0127 09:06:19.494061 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7da9f2ca-5e84-43a2-bd7b-0df7b7e56375-oauth-serving-cert\") pod \"console-7cbd496d8b-66k4j\" (UID: \"7da9f2ca-5e84-43a2-bd7b-0df7b7e56375\") " pod="openshift-console/console-7cbd496d8b-66k4j" Jan 27 09:06:19 crc kubenswrapper[4985]: I0127 09:06:19.494089 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7da9f2ca-5e84-43a2-bd7b-0df7b7e56375-service-ca\") pod \"console-7cbd496d8b-66k4j\" (UID: \"7da9f2ca-5e84-43a2-bd7b-0df7b7e56375\") " pod="openshift-console/console-7cbd496d8b-66k4j" Jan 27 09:06:19 crc kubenswrapper[4985]: I0127 09:06:19.494117 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7da9f2ca-5e84-43a2-bd7b-0df7b7e56375-console-serving-cert\") pod \"console-7cbd496d8b-66k4j\" (UID: \"7da9f2ca-5e84-43a2-bd7b-0df7b7e56375\") " pod="openshift-console/console-7cbd496d8b-66k4j" Jan 27 09:06:19 crc kubenswrapper[4985]: I0127 09:06:19.525120 4985 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-kzgx2"] Jan 27 09:06:19 crc kubenswrapper[4985]: I0127 09:06:19.597097 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7da9f2ca-5e84-43a2-bd7b-0df7b7e56375-console-config\") pod \"console-7cbd496d8b-66k4j\" (UID: \"7da9f2ca-5e84-43a2-bd7b-0df7b7e56375\") " pod="openshift-console/console-7cbd496d8b-66k4j" Jan 27 09:06:19 crc kubenswrapper[4985]: I0127 09:06:19.597143 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7da9f2ca-5e84-43a2-bd7b-0df7b7e56375-console-oauth-config\") pod \"console-7cbd496d8b-66k4j\" (UID: \"7da9f2ca-5e84-43a2-bd7b-0df7b7e56375\") " pod="openshift-console/console-7cbd496d8b-66k4j" Jan 27 09:06:19 crc kubenswrapper[4985]: I0127 09:06:19.597167 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7da9f2ca-5e84-43a2-bd7b-0df7b7e56375-trusted-ca-bundle\") pod \"console-7cbd496d8b-66k4j\" (UID: \"7da9f2ca-5e84-43a2-bd7b-0df7b7e56375\") " pod="openshift-console/console-7cbd496d8b-66k4j" Jan 27 09:06:19 crc kubenswrapper[4985]: I0127 09:06:19.597190 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74dxm\" (UniqueName: \"kubernetes.io/projected/7da9f2ca-5e84-43a2-bd7b-0df7b7e56375-kube-api-access-74dxm\") pod \"console-7cbd496d8b-66k4j\" (UID: \"7da9f2ca-5e84-43a2-bd7b-0df7b7e56375\") " pod="openshift-console/console-7cbd496d8b-66k4j" Jan 27 09:06:19 crc kubenswrapper[4985]: I0127 09:06:19.597207 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7da9f2ca-5e84-43a2-bd7b-0df7b7e56375-oauth-serving-cert\") pod \"console-7cbd496d8b-66k4j\" (UID: 
\"7da9f2ca-5e84-43a2-bd7b-0df7b7e56375\") " pod="openshift-console/console-7cbd496d8b-66k4j" Jan 27 09:06:19 crc kubenswrapper[4985]: I0127 09:06:19.597230 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7da9f2ca-5e84-43a2-bd7b-0df7b7e56375-service-ca\") pod \"console-7cbd496d8b-66k4j\" (UID: \"7da9f2ca-5e84-43a2-bd7b-0df7b7e56375\") " pod="openshift-console/console-7cbd496d8b-66k4j" Jan 27 09:06:19 crc kubenswrapper[4985]: I0127 09:06:19.597250 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7da9f2ca-5e84-43a2-bd7b-0df7b7e56375-console-serving-cert\") pod \"console-7cbd496d8b-66k4j\" (UID: \"7da9f2ca-5e84-43a2-bd7b-0df7b7e56375\") " pod="openshift-console/console-7cbd496d8b-66k4j" Jan 27 09:06:19 crc kubenswrapper[4985]: I0127 09:06:19.598918 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7da9f2ca-5e84-43a2-bd7b-0df7b7e56375-trusted-ca-bundle\") pod \"console-7cbd496d8b-66k4j\" (UID: \"7da9f2ca-5e84-43a2-bd7b-0df7b7e56375\") " pod="openshift-console/console-7cbd496d8b-66k4j" Jan 27 09:06:19 crc kubenswrapper[4985]: I0127 09:06:19.599174 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7da9f2ca-5e84-43a2-bd7b-0df7b7e56375-oauth-serving-cert\") pod \"console-7cbd496d8b-66k4j\" (UID: \"7da9f2ca-5e84-43a2-bd7b-0df7b7e56375\") " pod="openshift-console/console-7cbd496d8b-66k4j" Jan 27 09:06:19 crc kubenswrapper[4985]: I0127 09:06:19.599735 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7da9f2ca-5e84-43a2-bd7b-0df7b7e56375-console-config\") pod \"console-7cbd496d8b-66k4j\" (UID: \"7da9f2ca-5e84-43a2-bd7b-0df7b7e56375\") " 
pod="openshift-console/console-7cbd496d8b-66k4j" Jan 27 09:06:19 crc kubenswrapper[4985]: I0127 09:06:19.600264 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7da9f2ca-5e84-43a2-bd7b-0df7b7e56375-service-ca\") pod \"console-7cbd496d8b-66k4j\" (UID: \"7da9f2ca-5e84-43a2-bd7b-0df7b7e56375\") " pod="openshift-console/console-7cbd496d8b-66k4j" Jan 27 09:06:19 crc kubenswrapper[4985]: I0127 09:06:19.604658 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7da9f2ca-5e84-43a2-bd7b-0df7b7e56375-console-serving-cert\") pod \"console-7cbd496d8b-66k4j\" (UID: \"7da9f2ca-5e84-43a2-bd7b-0df7b7e56375\") " pod="openshift-console/console-7cbd496d8b-66k4j" Jan 27 09:06:19 crc kubenswrapper[4985]: I0127 09:06:19.605169 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7da9f2ca-5e84-43a2-bd7b-0df7b7e56375-console-oauth-config\") pod \"console-7cbd496d8b-66k4j\" (UID: \"7da9f2ca-5e84-43a2-bd7b-0df7b7e56375\") " pod="openshift-console/console-7cbd496d8b-66k4j" Jan 27 09:06:19 crc kubenswrapper[4985]: I0127 09:06:19.621293 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74dxm\" (UniqueName: \"kubernetes.io/projected/7da9f2ca-5e84-43a2-bd7b-0df7b7e56375-kube-api-access-74dxm\") pod \"console-7cbd496d8b-66k4j\" (UID: \"7da9f2ca-5e84-43a2-bd7b-0df7b7e56375\") " pod="openshift-console/console-7cbd496d8b-66k4j" Jan 27 09:06:19 crc kubenswrapper[4985]: I0127 09:06:19.634697 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-rs7zj"] Jan 27 09:06:19 crc kubenswrapper[4985]: W0127 09:06:19.637697 4985 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod19081401_9a2f_458b_a902_d15f4f915e1c.slice/crio-02baebff3f67473a6790e12d919177c3c12f5bdddd9417c708b8741c94ac9723 WatchSource:0}: Error finding container 02baebff3f67473a6790e12d919177c3c12f5bdddd9417c708b8741c94ac9723: Status 404 returned error can't find the container with id 02baebff3f67473a6790e12d919177c3c12f5bdddd9417c708b8741c94ac9723 Jan 27 09:06:19 crc kubenswrapper[4985]: I0127 09:06:19.667740 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7cbd496d8b-66k4j" Jan 27 09:06:19 crc kubenswrapper[4985]: I0127 09:06:19.800304 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/effff8aa-ef99-4976-b4fc-da8a1d1ab03f-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-fgkxr\" (UID: \"effff8aa-ef99-4976-b4fc-da8a1d1ab03f\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-fgkxr" Jan 27 09:06:19 crc kubenswrapper[4985]: I0127 09:06:19.804962 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/effff8aa-ef99-4976-b4fc-da8a1d1ab03f-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-fgkxr\" (UID: \"effff8aa-ef99-4976-b4fc-da8a1d1ab03f\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-fgkxr" Jan 27 09:06:19 crc kubenswrapper[4985]: I0127 09:06:19.810825 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rs7zj" event={"ID":"19081401-9a2f-458b-a902-d15f4f915e1c","Type":"ContainerStarted","Data":"02baebff3f67473a6790e12d919177c3c12f5bdddd9417c708b8741c94ac9723"} Jan 27 09:06:19 crc kubenswrapper[4985]: I0127 09:06:19.812018 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-kzgx2" 
event={"ID":"b65e2660-2543-4137-b394-e0a2b19a17c8","Type":"ContainerStarted","Data":"37b97a3e107b094ef83a2a457c3618b4c3d008fc819b91f3cf39deef08c90dcc"} Jan 27 09:06:19 crc kubenswrapper[4985]: I0127 09:06:19.813954 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-8thh5" event={"ID":"c154b891-72be-4911-93e9-3abe7346c05a","Type":"ContainerStarted","Data":"9c5c74042d0a7de010b8b4ccf1d18b454a45ebd7900bf879e52834cf82c561a5"} Jan 27 09:06:19 crc kubenswrapper[4985]: I0127 09:06:19.872942 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7cbd496d8b-66k4j"] Jan 27 09:06:20 crc kubenswrapper[4985]: I0127 09:06:20.036705 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-fgkxr" Jan 27 09:06:20 crc kubenswrapper[4985]: I0127 09:06:20.321077 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-fgkxr"] Jan 27 09:06:20 crc kubenswrapper[4985]: W0127 09:06:20.330748 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeffff8aa_ef99_4976_b4fc_da8a1d1ab03f.slice/crio-b79c6e392d7290087d3fd125c44c8c661675d2d435d1394444a3493888832775 WatchSource:0}: Error finding container b79c6e392d7290087d3fd125c44c8c661675d2d435d1394444a3493888832775: Status 404 returned error can't find the container with id b79c6e392d7290087d3fd125c44c8c661675d2d435d1394444a3493888832775 Jan 27 09:06:20 crc kubenswrapper[4985]: I0127 09:06:20.822568 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-fgkxr" event={"ID":"effff8aa-ef99-4976-b4fc-da8a1d1ab03f","Type":"ContainerStarted","Data":"b79c6e392d7290087d3fd125c44c8c661675d2d435d1394444a3493888832775"} Jan 27 09:06:20 crc kubenswrapper[4985]: I0127 09:06:20.825759 4985 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-console/console-7cbd496d8b-66k4j" event={"ID":"7da9f2ca-5e84-43a2-bd7b-0df7b7e56375","Type":"ContainerStarted","Data":"c6a879fd3bf98964db00569c0fea6c2e005a0c4014bdb1b2256d46228953ed7d"} Jan 27 09:06:20 crc kubenswrapper[4985]: I0127 09:06:20.825807 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7cbd496d8b-66k4j" event={"ID":"7da9f2ca-5e84-43a2-bd7b-0df7b7e56375","Type":"ContainerStarted","Data":"9ea60e1fa8f281b2c8b5ea10303c1e39d7c25f4595edcbe07a4e5c66c2e7707e"} Jan 27 09:06:20 crc kubenswrapper[4985]: I0127 09:06:20.849279 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7cbd496d8b-66k4j" podStartSLOduration=1.849244235 podStartE2EDuration="1.849244235s" podCreationTimestamp="2026-01-27 09:06:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:06:20.843704523 +0000 UTC m=+765.134799364" watchObservedRunningTime="2026-01-27 09:06:20.849244235 +0000 UTC m=+765.140339076" Jan 27 09:06:22 crc kubenswrapper[4985]: I0127 09:06:22.093914 4985 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 27 09:06:22 crc kubenswrapper[4985]: I0127 09:06:22.837173 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-8thh5" event={"ID":"c154b891-72be-4911-93e9-3abe7346c05a","Type":"ContainerStarted","Data":"1cf539c2d474d220c3bdf91ba6e7fa27b0242c30a008a6850c402e170b9cbc94"} Jan 27 09:06:22 crc kubenswrapper[4985]: I0127 09:06:22.837619 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-8thh5" Jan 27 09:06:22 crc kubenswrapper[4985]: I0127 09:06:22.838455 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rs7zj" 
event={"ID":"19081401-9a2f-458b-a902-d15f4f915e1c","Type":"ContainerStarted","Data":"2ee3ed87169038257a6ca2adb2b94bf872ca695dcb9d6ded33b4af3b1e641560"} Jan 27 09:06:22 crc kubenswrapper[4985]: I0127 09:06:22.839197 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rs7zj" Jan 27 09:06:22 crc kubenswrapper[4985]: I0127 09:06:22.841033 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-kzgx2" event={"ID":"b65e2660-2543-4137-b394-e0a2b19a17c8","Type":"ContainerStarted","Data":"bf2fa451c9795193ebb190b307a5ca18bfe6a2e49f4ac157d09085e026ddce50"} Jan 27 09:06:22 crc kubenswrapper[4985]: I0127 09:06:22.851547 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-8thh5" podStartSLOduration=2.28498518 podStartE2EDuration="4.851531487s" podCreationTimestamp="2026-01-27 09:06:18 +0000 UTC" firstStartedPulling="2026-01-27 09:06:19.375384978 +0000 UTC m=+763.666479819" lastFinishedPulling="2026-01-27 09:06:21.941931285 +0000 UTC m=+766.233026126" observedRunningTime="2026-01-27 09:06:22.850793207 +0000 UTC m=+767.141888058" watchObservedRunningTime="2026-01-27 09:06:22.851531487 +0000 UTC m=+767.142626328" Jan 27 09:06:22 crc kubenswrapper[4985]: I0127 09:06:22.868958 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rs7zj" podStartSLOduration=2.54402375 podStartE2EDuration="4.868939854s" podCreationTimestamp="2026-01-27 09:06:18 +0000 UTC" firstStartedPulling="2026-01-27 09:06:19.641120622 +0000 UTC m=+763.932215463" lastFinishedPulling="2026-01-27 09:06:21.966036716 +0000 UTC m=+766.257131567" observedRunningTime="2026-01-27 09:06:22.865940982 +0000 UTC m=+767.157035823" watchObservedRunningTime="2026-01-27 09:06:22.868939854 +0000 UTC m=+767.160034695" Jan 27 09:06:23 crc kubenswrapper[4985]: I0127 09:06:23.850224 4985 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-fgkxr" event={"ID":"effff8aa-ef99-4976-b4fc-da8a1d1ab03f","Type":"ContainerStarted","Data":"166288ca2ba7d905d1f8c910a263b4bb4542f771183b882342b5b7de3f4d1bec"} Jan 27 09:06:23 crc kubenswrapper[4985]: I0127 09:06:23.868256 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-fgkxr" podStartSLOduration=1.865140311 podStartE2EDuration="4.868234024s" podCreationTimestamp="2026-01-27 09:06:19 +0000 UTC" firstStartedPulling="2026-01-27 09:06:20.337080737 +0000 UTC m=+764.628175578" lastFinishedPulling="2026-01-27 09:06:23.34017446 +0000 UTC m=+767.631269291" observedRunningTime="2026-01-27 09:06:23.86807088 +0000 UTC m=+768.159165741" watchObservedRunningTime="2026-01-27 09:06:23.868234024 +0000 UTC m=+768.159328855" Jan 27 09:06:25 crc kubenswrapper[4985]: I0127 09:06:25.866246 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-kzgx2" event={"ID":"b65e2660-2543-4137-b394-e0a2b19a17c8","Type":"ContainerStarted","Data":"6ec12198c7155c54702510ac29115f1f51f7ca2e814d35d97d1d0d3395935cbc"} Jan 27 09:06:25 crc kubenswrapper[4985]: I0127 09:06:25.889739 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-kzgx2" podStartSLOduration=2.364557329 podStartE2EDuration="7.889711281s" podCreationTimestamp="2026-01-27 09:06:18 +0000 UTC" firstStartedPulling="2026-01-27 09:06:19.536092382 +0000 UTC m=+763.827187223" lastFinishedPulling="2026-01-27 09:06:25.061246334 +0000 UTC m=+769.352341175" observedRunningTime="2026-01-27 09:06:25.885151367 +0000 UTC m=+770.176246198" watchObservedRunningTime="2026-01-27 09:06:25.889711281 +0000 UTC m=+770.180806122" Jan 27 09:06:29 crc kubenswrapper[4985]: I0127 09:06:29.326893 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-nmstate/nmstate-handler-8thh5" Jan 27 09:06:29 crc kubenswrapper[4985]: I0127 09:06:29.668189 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7cbd496d8b-66k4j" Jan 27 09:06:29 crc kubenswrapper[4985]: I0127 09:06:29.668791 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7cbd496d8b-66k4j" Jan 27 09:06:29 crc kubenswrapper[4985]: I0127 09:06:29.678015 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7cbd496d8b-66k4j" Jan 27 09:06:29 crc kubenswrapper[4985]: I0127 09:06:29.899849 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7cbd496d8b-66k4j" Jan 27 09:06:29 crc kubenswrapper[4985]: I0127 09:06:29.957859 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-q7dv9"] Jan 27 09:06:39 crc kubenswrapper[4985]: I0127 09:06:39.276557 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rs7zj" Jan 27 09:06:40 crc kubenswrapper[4985]: I0127 09:06:40.279157 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-l9kwv"] Jan 27 09:06:40 crc kubenswrapper[4985]: I0127 09:06:40.281715 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l9kwv" Jan 27 09:06:40 crc kubenswrapper[4985]: I0127 09:06:40.289626 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-l9kwv"] Jan 27 09:06:40 crc kubenswrapper[4985]: I0127 09:06:40.396224 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tf779\" (UniqueName: \"kubernetes.io/projected/f9db6fe8-4ea9-4ba1-ac52-a1c6fcd476de-kube-api-access-tf779\") pod \"redhat-marketplace-l9kwv\" (UID: \"f9db6fe8-4ea9-4ba1-ac52-a1c6fcd476de\") " pod="openshift-marketplace/redhat-marketplace-l9kwv" Jan 27 09:06:40 crc kubenswrapper[4985]: I0127 09:06:40.396370 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9db6fe8-4ea9-4ba1-ac52-a1c6fcd476de-utilities\") pod \"redhat-marketplace-l9kwv\" (UID: \"f9db6fe8-4ea9-4ba1-ac52-a1c6fcd476de\") " pod="openshift-marketplace/redhat-marketplace-l9kwv" Jan 27 09:06:40 crc kubenswrapper[4985]: I0127 09:06:40.396405 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9db6fe8-4ea9-4ba1-ac52-a1c6fcd476de-catalog-content\") pod \"redhat-marketplace-l9kwv\" (UID: \"f9db6fe8-4ea9-4ba1-ac52-a1c6fcd476de\") " pod="openshift-marketplace/redhat-marketplace-l9kwv" Jan 27 09:06:40 crc kubenswrapper[4985]: I0127 09:06:40.497333 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9db6fe8-4ea9-4ba1-ac52-a1c6fcd476de-utilities\") pod \"redhat-marketplace-l9kwv\" (UID: \"f9db6fe8-4ea9-4ba1-ac52-a1c6fcd476de\") " pod="openshift-marketplace/redhat-marketplace-l9kwv" Jan 27 09:06:40 crc kubenswrapper[4985]: I0127 09:06:40.497386 4985 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9db6fe8-4ea9-4ba1-ac52-a1c6fcd476de-catalog-content\") pod \"redhat-marketplace-l9kwv\" (UID: \"f9db6fe8-4ea9-4ba1-ac52-a1c6fcd476de\") " pod="openshift-marketplace/redhat-marketplace-l9kwv" Jan 27 09:06:40 crc kubenswrapper[4985]: I0127 09:06:40.497437 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tf779\" (UniqueName: \"kubernetes.io/projected/f9db6fe8-4ea9-4ba1-ac52-a1c6fcd476de-kube-api-access-tf779\") pod \"redhat-marketplace-l9kwv\" (UID: \"f9db6fe8-4ea9-4ba1-ac52-a1c6fcd476de\") " pod="openshift-marketplace/redhat-marketplace-l9kwv" Jan 27 09:06:40 crc kubenswrapper[4985]: I0127 09:06:40.497871 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9db6fe8-4ea9-4ba1-ac52-a1c6fcd476de-utilities\") pod \"redhat-marketplace-l9kwv\" (UID: \"f9db6fe8-4ea9-4ba1-ac52-a1c6fcd476de\") " pod="openshift-marketplace/redhat-marketplace-l9kwv" Jan 27 09:06:40 crc kubenswrapper[4985]: I0127 09:06:40.497999 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9db6fe8-4ea9-4ba1-ac52-a1c6fcd476de-catalog-content\") pod \"redhat-marketplace-l9kwv\" (UID: \"f9db6fe8-4ea9-4ba1-ac52-a1c6fcd476de\") " pod="openshift-marketplace/redhat-marketplace-l9kwv" Jan 27 09:06:40 crc kubenswrapper[4985]: I0127 09:06:40.519877 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tf779\" (UniqueName: \"kubernetes.io/projected/f9db6fe8-4ea9-4ba1-ac52-a1c6fcd476de-kube-api-access-tf779\") pod \"redhat-marketplace-l9kwv\" (UID: \"f9db6fe8-4ea9-4ba1-ac52-a1c6fcd476de\") " pod="openshift-marketplace/redhat-marketplace-l9kwv" Jan 27 09:06:40 crc kubenswrapper[4985]: I0127 09:06:40.602671 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l9kwv" Jan 27 09:06:40 crc kubenswrapper[4985]: I0127 09:06:40.990830 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-l9kwv"] Jan 27 09:06:41 crc kubenswrapper[4985]: I0127 09:06:41.249493 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l9kwv" event={"ID":"f9db6fe8-4ea9-4ba1-ac52-a1c6fcd476de","Type":"ContainerStarted","Data":"7ef669465cb667dc11190dcfedcfcb561afea9dd55d4d0885a3cede5726c92a5"} Jan 27 09:06:41 crc kubenswrapper[4985]: I0127 09:06:41.828160 4985 patch_prober.go:28] interesting pod/machine-config-daemon-lp9n5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 09:06:41 crc kubenswrapper[4985]: I0127 09:06:41.828220 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 09:06:42 crc kubenswrapper[4985]: I0127 09:06:42.255793 4985 generic.go:334] "Generic (PLEG): container finished" podID="f9db6fe8-4ea9-4ba1-ac52-a1c6fcd476de" containerID="aa6609b389f7cf0489ad109d882d0e568cdd840f615b7b294d8bdaf0bec5607b" exitCode=0 Jan 27 09:06:42 crc kubenswrapper[4985]: I0127 09:06:42.255839 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l9kwv" event={"ID":"f9db6fe8-4ea9-4ba1-ac52-a1c6fcd476de","Type":"ContainerDied","Data":"aa6609b389f7cf0489ad109d882d0e568cdd840f615b7b294d8bdaf0bec5607b"} Jan 27 09:06:44 crc kubenswrapper[4985]: I0127 09:06:44.271221 4985 generic.go:334] "Generic (PLEG): 
container finished" podID="f9db6fe8-4ea9-4ba1-ac52-a1c6fcd476de" containerID="bb2740c57cd4e20bd1c0eb952beaf8968cb9182d967394f8babd293afef9ab53" exitCode=0 Jan 27 09:06:44 crc kubenswrapper[4985]: I0127 09:06:44.271277 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l9kwv" event={"ID":"f9db6fe8-4ea9-4ba1-ac52-a1c6fcd476de","Type":"ContainerDied","Data":"bb2740c57cd4e20bd1c0eb952beaf8968cb9182d967394f8babd293afef9ab53"} Jan 27 09:06:45 crc kubenswrapper[4985]: I0127 09:06:45.278859 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l9kwv" event={"ID":"f9db6fe8-4ea9-4ba1-ac52-a1c6fcd476de","Type":"ContainerStarted","Data":"085f3a6a03fcaa75d39ebfa93f140b82bea5255fcd4d2187f52314d3bccb9139"} Jan 27 09:06:45 crc kubenswrapper[4985]: I0127 09:06:45.303943 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-l9kwv" podStartSLOduration=2.515149302 podStartE2EDuration="5.303900031s" podCreationTimestamp="2026-01-27 09:06:40 +0000 UTC" firstStartedPulling="2026-01-27 09:06:42.257780488 +0000 UTC m=+786.548875329" lastFinishedPulling="2026-01-27 09:06:45.046531217 +0000 UTC m=+789.337626058" observedRunningTime="2026-01-27 09:06:45.299363626 +0000 UTC m=+789.590458487" watchObservedRunningTime="2026-01-27 09:06:45.303900031 +0000 UTC m=+789.594994872" Jan 27 09:06:50 crc kubenswrapper[4985]: I0127 09:06:50.603737 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-l9kwv" Jan 27 09:06:50 crc kubenswrapper[4985]: I0127 09:06:50.604666 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-l9kwv" Jan 27 09:06:50 crc kubenswrapper[4985]: I0127 09:06:50.657986 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-l9kwv" Jan 
27 09:06:51 crc kubenswrapper[4985]: I0127 09:06:51.381321 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-l9kwv" Jan 27 09:06:51 crc kubenswrapper[4985]: I0127 09:06:51.434865 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-l9kwv"] Jan 27 09:06:52 crc kubenswrapper[4985]: I0127 09:06:52.758690 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnng2r"] Jan 27 09:06:52 crc kubenswrapper[4985]: I0127 09:06:52.765926 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnng2r" Jan 27 09:06:52 crc kubenswrapper[4985]: I0127 09:06:52.770253 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 27 09:06:52 crc kubenswrapper[4985]: I0127 09:06:52.773180 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnng2r"] Jan 27 09:06:52 crc kubenswrapper[4985]: I0127 09:06:52.888285 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtq7n\" (UniqueName: \"kubernetes.io/projected/81071259-96da-4e05-a63b-b0e5544489ec-kube-api-access-gtq7n\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnng2r\" (UID: \"81071259-96da-4e05-a63b-b0e5544489ec\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnng2r" Jan 27 09:06:52 crc kubenswrapper[4985]: I0127 09:06:52.888377 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/81071259-96da-4e05-a63b-b0e5544489ec-util\") pod 
\"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnng2r\" (UID: \"81071259-96da-4e05-a63b-b0e5544489ec\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnng2r" Jan 27 09:06:52 crc kubenswrapper[4985]: I0127 09:06:52.888434 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/81071259-96da-4e05-a63b-b0e5544489ec-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnng2r\" (UID: \"81071259-96da-4e05-a63b-b0e5544489ec\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnng2r" Jan 27 09:06:52 crc kubenswrapper[4985]: I0127 09:06:52.989884 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtq7n\" (UniqueName: \"kubernetes.io/projected/81071259-96da-4e05-a63b-b0e5544489ec-kube-api-access-gtq7n\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnng2r\" (UID: \"81071259-96da-4e05-a63b-b0e5544489ec\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnng2r" Jan 27 09:06:52 crc kubenswrapper[4985]: I0127 09:06:52.989972 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/81071259-96da-4e05-a63b-b0e5544489ec-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnng2r\" (UID: \"81071259-96da-4e05-a63b-b0e5544489ec\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnng2r" Jan 27 09:06:52 crc kubenswrapper[4985]: I0127 09:06:52.990024 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/81071259-96da-4e05-a63b-b0e5544489ec-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnng2r\" (UID: \"81071259-96da-4e05-a63b-b0e5544489ec\") " 
pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnng2r" Jan 27 09:06:52 crc kubenswrapper[4985]: I0127 09:06:52.990770 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/81071259-96da-4e05-a63b-b0e5544489ec-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnng2r\" (UID: \"81071259-96da-4e05-a63b-b0e5544489ec\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnng2r" Jan 27 09:06:52 crc kubenswrapper[4985]: I0127 09:06:52.990856 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/81071259-96da-4e05-a63b-b0e5544489ec-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnng2r\" (UID: \"81071259-96da-4e05-a63b-b0e5544489ec\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnng2r" Jan 27 09:06:53 crc kubenswrapper[4985]: I0127 09:06:53.013760 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtq7n\" (UniqueName: \"kubernetes.io/projected/81071259-96da-4e05-a63b-b0e5544489ec-kube-api-access-gtq7n\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnng2r\" (UID: \"81071259-96da-4e05-a63b-b0e5544489ec\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnng2r" Jan 27 09:06:53 crc kubenswrapper[4985]: I0127 09:06:53.092853 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnng2r" Jan 27 09:06:53 crc kubenswrapper[4985]: I0127 09:06:53.329480 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnng2r"] Jan 27 09:06:53 crc kubenswrapper[4985]: I0127 09:06:53.334776 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-l9kwv" podUID="f9db6fe8-4ea9-4ba1-ac52-a1c6fcd476de" containerName="registry-server" containerID="cri-o://085f3a6a03fcaa75d39ebfa93f140b82bea5255fcd4d2187f52314d3bccb9139" gracePeriod=2 Jan 27 09:06:53 crc kubenswrapper[4985]: I0127 09:06:53.650078 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l9kwv" Jan 27 09:06:53 crc kubenswrapper[4985]: I0127 09:06:53.803913 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9db6fe8-4ea9-4ba1-ac52-a1c6fcd476de-utilities\") pod \"f9db6fe8-4ea9-4ba1-ac52-a1c6fcd476de\" (UID: \"f9db6fe8-4ea9-4ba1-ac52-a1c6fcd476de\") " Jan 27 09:06:53 crc kubenswrapper[4985]: I0127 09:06:53.804043 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tf779\" (UniqueName: \"kubernetes.io/projected/f9db6fe8-4ea9-4ba1-ac52-a1c6fcd476de-kube-api-access-tf779\") pod \"f9db6fe8-4ea9-4ba1-ac52-a1c6fcd476de\" (UID: \"f9db6fe8-4ea9-4ba1-ac52-a1c6fcd476de\") " Jan 27 09:06:53 crc kubenswrapper[4985]: I0127 09:06:53.804090 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9db6fe8-4ea9-4ba1-ac52-a1c6fcd476de-catalog-content\") pod \"f9db6fe8-4ea9-4ba1-ac52-a1c6fcd476de\" (UID: \"f9db6fe8-4ea9-4ba1-ac52-a1c6fcd476de\") " Jan 27 09:06:53 crc 
kubenswrapper[4985]: I0127 09:06:53.805764 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9db6fe8-4ea9-4ba1-ac52-a1c6fcd476de-utilities" (OuterVolumeSpecName: "utilities") pod "f9db6fe8-4ea9-4ba1-ac52-a1c6fcd476de" (UID: "f9db6fe8-4ea9-4ba1-ac52-a1c6fcd476de"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:06:53 crc kubenswrapper[4985]: I0127 09:06:53.811007 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9db6fe8-4ea9-4ba1-ac52-a1c6fcd476de-kube-api-access-tf779" (OuterVolumeSpecName: "kube-api-access-tf779") pod "f9db6fe8-4ea9-4ba1-ac52-a1c6fcd476de" (UID: "f9db6fe8-4ea9-4ba1-ac52-a1c6fcd476de"). InnerVolumeSpecName "kube-api-access-tf779". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:06:53 crc kubenswrapper[4985]: I0127 09:06:53.836905 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9db6fe8-4ea9-4ba1-ac52-a1c6fcd476de-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f9db6fe8-4ea9-4ba1-ac52-a1c6fcd476de" (UID: "f9db6fe8-4ea9-4ba1-ac52-a1c6fcd476de"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:06:53 crc kubenswrapper[4985]: I0127 09:06:53.906104 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9db6fe8-4ea9-4ba1-ac52-a1c6fcd476de-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 09:06:53 crc kubenswrapper[4985]: I0127 09:06:53.906147 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tf779\" (UniqueName: \"kubernetes.io/projected/f9db6fe8-4ea9-4ba1-ac52-a1c6fcd476de-kube-api-access-tf779\") on node \"crc\" DevicePath \"\"" Jan 27 09:06:53 crc kubenswrapper[4985]: I0127 09:06:53.906159 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9db6fe8-4ea9-4ba1-ac52-a1c6fcd476de-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 09:06:54 crc kubenswrapper[4985]: I0127 09:06:54.351116 4985 generic.go:334] "Generic (PLEG): container finished" podID="f9db6fe8-4ea9-4ba1-ac52-a1c6fcd476de" containerID="085f3a6a03fcaa75d39ebfa93f140b82bea5255fcd4d2187f52314d3bccb9139" exitCode=0 Jan 27 09:06:54 crc kubenswrapper[4985]: I0127 09:06:54.351190 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l9kwv" event={"ID":"f9db6fe8-4ea9-4ba1-ac52-a1c6fcd476de","Type":"ContainerDied","Data":"085f3a6a03fcaa75d39ebfa93f140b82bea5255fcd4d2187f52314d3bccb9139"} Jan 27 09:06:54 crc kubenswrapper[4985]: I0127 09:06:54.351258 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l9kwv" event={"ID":"f9db6fe8-4ea9-4ba1-ac52-a1c6fcd476de","Type":"ContainerDied","Data":"7ef669465cb667dc11190dcfedcfcb561afea9dd55d4d0885a3cede5726c92a5"} Jan 27 09:06:54 crc kubenswrapper[4985]: I0127 09:06:54.351296 4985 scope.go:117] "RemoveContainer" containerID="085f3a6a03fcaa75d39ebfa93f140b82bea5255fcd4d2187f52314d3bccb9139" Jan 27 09:06:54 crc kubenswrapper[4985]: I0127 
09:06:54.351299 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l9kwv" Jan 27 09:06:54 crc kubenswrapper[4985]: I0127 09:06:54.355361 4985 generic.go:334] "Generic (PLEG): container finished" podID="81071259-96da-4e05-a63b-b0e5544489ec" containerID="e42e18da8387e5e0776283b1707db02161fe4894bb624672622a360f29966f44" exitCode=0 Jan 27 09:06:54 crc kubenswrapper[4985]: I0127 09:06:54.355423 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnng2r" event={"ID":"81071259-96da-4e05-a63b-b0e5544489ec","Type":"ContainerDied","Data":"e42e18da8387e5e0776283b1707db02161fe4894bb624672622a360f29966f44"} Jan 27 09:06:54 crc kubenswrapper[4985]: I0127 09:06:54.355465 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnng2r" event={"ID":"81071259-96da-4e05-a63b-b0e5544489ec","Type":"ContainerStarted","Data":"5185c718d83ad9d95bb6cefb92b028c8d0cae30eba2af29b9d4bccbde53dbad5"} Jan 27 09:06:54 crc kubenswrapper[4985]: I0127 09:06:54.375404 4985 scope.go:117] "RemoveContainer" containerID="bb2740c57cd4e20bd1c0eb952beaf8968cb9182d967394f8babd293afef9ab53" Jan 27 09:06:54 crc kubenswrapper[4985]: I0127 09:06:54.396133 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-l9kwv"] Jan 27 09:06:54 crc kubenswrapper[4985]: I0127 09:06:54.408777 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-l9kwv"] Jan 27 09:06:54 crc kubenswrapper[4985]: I0127 09:06:54.410385 4985 scope.go:117] "RemoveContainer" containerID="aa6609b389f7cf0489ad109d882d0e568cdd840f615b7b294d8bdaf0bec5607b" Jan 27 09:06:54 crc kubenswrapper[4985]: I0127 09:06:54.428089 4985 scope.go:117] "RemoveContainer" 
containerID="085f3a6a03fcaa75d39ebfa93f140b82bea5255fcd4d2187f52314d3bccb9139" Jan 27 09:06:54 crc kubenswrapper[4985]: E0127 09:06:54.428696 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"085f3a6a03fcaa75d39ebfa93f140b82bea5255fcd4d2187f52314d3bccb9139\": container with ID starting with 085f3a6a03fcaa75d39ebfa93f140b82bea5255fcd4d2187f52314d3bccb9139 not found: ID does not exist" containerID="085f3a6a03fcaa75d39ebfa93f140b82bea5255fcd4d2187f52314d3bccb9139" Jan 27 09:06:54 crc kubenswrapper[4985]: I0127 09:06:54.428730 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"085f3a6a03fcaa75d39ebfa93f140b82bea5255fcd4d2187f52314d3bccb9139"} err="failed to get container status \"085f3a6a03fcaa75d39ebfa93f140b82bea5255fcd4d2187f52314d3bccb9139\": rpc error: code = NotFound desc = could not find container \"085f3a6a03fcaa75d39ebfa93f140b82bea5255fcd4d2187f52314d3bccb9139\": container with ID starting with 085f3a6a03fcaa75d39ebfa93f140b82bea5255fcd4d2187f52314d3bccb9139 not found: ID does not exist" Jan 27 09:06:54 crc kubenswrapper[4985]: I0127 09:06:54.428757 4985 scope.go:117] "RemoveContainer" containerID="bb2740c57cd4e20bd1c0eb952beaf8968cb9182d967394f8babd293afef9ab53" Jan 27 09:06:54 crc kubenswrapper[4985]: E0127 09:06:54.429183 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb2740c57cd4e20bd1c0eb952beaf8968cb9182d967394f8babd293afef9ab53\": container with ID starting with bb2740c57cd4e20bd1c0eb952beaf8968cb9182d967394f8babd293afef9ab53 not found: ID does not exist" containerID="bb2740c57cd4e20bd1c0eb952beaf8968cb9182d967394f8babd293afef9ab53" Jan 27 09:06:54 crc kubenswrapper[4985]: I0127 09:06:54.429214 4985 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"bb2740c57cd4e20bd1c0eb952beaf8968cb9182d967394f8babd293afef9ab53"} err="failed to get container status \"bb2740c57cd4e20bd1c0eb952beaf8968cb9182d967394f8babd293afef9ab53\": rpc error: code = NotFound desc = could not find container \"bb2740c57cd4e20bd1c0eb952beaf8968cb9182d967394f8babd293afef9ab53\": container with ID starting with bb2740c57cd4e20bd1c0eb952beaf8968cb9182d967394f8babd293afef9ab53 not found: ID does not exist" Jan 27 09:06:54 crc kubenswrapper[4985]: I0127 09:06:54.429236 4985 scope.go:117] "RemoveContainer" containerID="aa6609b389f7cf0489ad109d882d0e568cdd840f615b7b294d8bdaf0bec5607b" Jan 27 09:06:54 crc kubenswrapper[4985]: E0127 09:06:54.429492 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa6609b389f7cf0489ad109d882d0e568cdd840f615b7b294d8bdaf0bec5607b\": container with ID starting with aa6609b389f7cf0489ad109d882d0e568cdd840f615b7b294d8bdaf0bec5607b not found: ID does not exist" containerID="aa6609b389f7cf0489ad109d882d0e568cdd840f615b7b294d8bdaf0bec5607b" Jan 27 09:06:54 crc kubenswrapper[4985]: I0127 09:06:54.429552 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa6609b389f7cf0489ad109d882d0e568cdd840f615b7b294d8bdaf0bec5607b"} err="failed to get container status \"aa6609b389f7cf0489ad109d882d0e568cdd840f615b7b294d8bdaf0bec5607b\": rpc error: code = NotFound desc = could not find container \"aa6609b389f7cf0489ad109d882d0e568cdd840f615b7b294d8bdaf0bec5607b\": container with ID starting with aa6609b389f7cf0489ad109d882d0e568cdd840f615b7b294d8bdaf0bec5607b not found: ID does not exist" Jan 27 09:06:54 crc kubenswrapper[4985]: I0127 09:06:54.459973 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9db6fe8-4ea9-4ba1-ac52-a1c6fcd476de" path="/var/lib/kubelet/pods/f9db6fe8-4ea9-4ba1-ac52-a1c6fcd476de/volumes" Jan 27 09:06:54 crc kubenswrapper[4985]: I0127 
09:06:54.998738 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-q7dv9" podUID="5bd4e7de-4244-4c33-90eb-799159106b7b" containerName="console" containerID="cri-o://44db6cd45c04a3f8ef36cd7980b452fe87ba2462eb54dc804a87733a68c32c3f" gracePeriod=15 Jan 27 09:06:55 crc kubenswrapper[4985]: I0127 09:06:55.366761 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-q7dv9_5bd4e7de-4244-4c33-90eb-799159106b7b/console/0.log" Jan 27 09:06:55 crc kubenswrapper[4985]: I0127 09:06:55.367182 4985 generic.go:334] "Generic (PLEG): container finished" podID="5bd4e7de-4244-4c33-90eb-799159106b7b" containerID="44db6cd45c04a3f8ef36cd7980b452fe87ba2462eb54dc804a87733a68c32c3f" exitCode=2 Jan 27 09:06:55 crc kubenswrapper[4985]: I0127 09:06:55.367216 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-q7dv9" event={"ID":"5bd4e7de-4244-4c33-90eb-799159106b7b","Type":"ContainerDied","Data":"44db6cd45c04a3f8ef36cd7980b452fe87ba2462eb54dc804a87733a68c32c3f"} Jan 27 09:06:55 crc kubenswrapper[4985]: I0127 09:06:55.367247 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-q7dv9" event={"ID":"5bd4e7de-4244-4c33-90eb-799159106b7b","Type":"ContainerDied","Data":"7547242e47a9941b7cb9d4cd3bd26d8a4544f5de690d239d819064d87ef5eada"} Jan 27 09:06:55 crc kubenswrapper[4985]: I0127 09:06:55.367259 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7547242e47a9941b7cb9d4cd3bd26d8a4544f5de690d239d819064d87ef5eada" Jan 27 09:06:55 crc kubenswrapper[4985]: I0127 09:06:55.395777 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-q7dv9_5bd4e7de-4244-4c33-90eb-799159106b7b/console/0.log" Jan 27 09:06:55 crc kubenswrapper[4985]: I0127 09:06:55.395903 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-q7dv9" Jan 27 09:06:55 crc kubenswrapper[4985]: I0127 09:06:55.531406 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5bd4e7de-4244-4c33-90eb-799159106b7b-console-serving-cert\") pod \"5bd4e7de-4244-4c33-90eb-799159106b7b\" (UID: \"5bd4e7de-4244-4c33-90eb-799159106b7b\") " Jan 27 09:06:55 crc kubenswrapper[4985]: I0127 09:06:55.531504 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5bd4e7de-4244-4c33-90eb-799159106b7b-oauth-serving-cert\") pod \"5bd4e7de-4244-4c33-90eb-799159106b7b\" (UID: \"5bd4e7de-4244-4c33-90eb-799159106b7b\") " Jan 27 09:06:55 crc kubenswrapper[4985]: I0127 09:06:55.531589 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5bd4e7de-4244-4c33-90eb-799159106b7b-console-config\") pod \"5bd4e7de-4244-4c33-90eb-799159106b7b\" (UID: \"5bd4e7de-4244-4c33-90eb-799159106b7b\") " Jan 27 09:06:55 crc kubenswrapper[4985]: I0127 09:06:55.531651 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bd4e7de-4244-4c33-90eb-799159106b7b-trusted-ca-bundle\") pod \"5bd4e7de-4244-4c33-90eb-799159106b7b\" (UID: \"5bd4e7de-4244-4c33-90eb-799159106b7b\") " Jan 27 09:06:55 crc kubenswrapper[4985]: I0127 09:06:55.531752 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmv58\" (UniqueName: \"kubernetes.io/projected/5bd4e7de-4244-4c33-90eb-799159106b7b-kube-api-access-kmv58\") pod \"5bd4e7de-4244-4c33-90eb-799159106b7b\" (UID: \"5bd4e7de-4244-4c33-90eb-799159106b7b\") " Jan 27 09:06:55 crc kubenswrapper[4985]: I0127 09:06:55.531796 4985 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5bd4e7de-4244-4c33-90eb-799159106b7b-console-oauth-config\") pod \"5bd4e7de-4244-4c33-90eb-799159106b7b\" (UID: \"5bd4e7de-4244-4c33-90eb-799159106b7b\") " Jan 27 09:06:55 crc kubenswrapper[4985]: I0127 09:06:55.531853 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5bd4e7de-4244-4c33-90eb-799159106b7b-service-ca\") pod \"5bd4e7de-4244-4c33-90eb-799159106b7b\" (UID: \"5bd4e7de-4244-4c33-90eb-799159106b7b\") " Jan 27 09:06:55 crc kubenswrapper[4985]: I0127 09:06:55.532928 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bd4e7de-4244-4c33-90eb-799159106b7b-console-config" (OuterVolumeSpecName: "console-config") pod "5bd4e7de-4244-4c33-90eb-799159106b7b" (UID: "5bd4e7de-4244-4c33-90eb-799159106b7b"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:06:55 crc kubenswrapper[4985]: I0127 09:06:55.533130 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bd4e7de-4244-4c33-90eb-799159106b7b-service-ca" (OuterVolumeSpecName: "service-ca") pod "5bd4e7de-4244-4c33-90eb-799159106b7b" (UID: "5bd4e7de-4244-4c33-90eb-799159106b7b"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:06:55 crc kubenswrapper[4985]: I0127 09:06:55.533131 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bd4e7de-4244-4c33-90eb-799159106b7b-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "5bd4e7de-4244-4c33-90eb-799159106b7b" (UID: "5bd4e7de-4244-4c33-90eb-799159106b7b"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:06:55 crc kubenswrapper[4985]: I0127 09:06:55.533381 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bd4e7de-4244-4c33-90eb-799159106b7b-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "5bd4e7de-4244-4c33-90eb-799159106b7b" (UID: "5bd4e7de-4244-4c33-90eb-799159106b7b"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:06:55 crc kubenswrapper[4985]: I0127 09:06:55.538713 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bd4e7de-4244-4c33-90eb-799159106b7b-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "5bd4e7de-4244-4c33-90eb-799159106b7b" (UID: "5bd4e7de-4244-4c33-90eb-799159106b7b"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:06:55 crc kubenswrapper[4985]: I0127 09:06:55.538849 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bd4e7de-4244-4c33-90eb-799159106b7b-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "5bd4e7de-4244-4c33-90eb-799159106b7b" (UID: "5bd4e7de-4244-4c33-90eb-799159106b7b"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:06:55 crc kubenswrapper[4985]: I0127 09:06:55.539829 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bd4e7de-4244-4c33-90eb-799159106b7b-kube-api-access-kmv58" (OuterVolumeSpecName: "kube-api-access-kmv58") pod "5bd4e7de-4244-4c33-90eb-799159106b7b" (UID: "5bd4e7de-4244-4c33-90eb-799159106b7b"). InnerVolumeSpecName "kube-api-access-kmv58". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:06:55 crc kubenswrapper[4985]: I0127 09:06:55.633985 4985 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5bd4e7de-4244-4c33-90eb-799159106b7b-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 09:06:55 crc kubenswrapper[4985]: I0127 09:06:55.634054 4985 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5bd4e7de-4244-4c33-90eb-799159106b7b-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 09:06:55 crc kubenswrapper[4985]: I0127 09:06:55.634068 4985 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5bd4e7de-4244-4c33-90eb-799159106b7b-console-config\") on node \"crc\" DevicePath \"\"" Jan 27 09:06:55 crc kubenswrapper[4985]: I0127 09:06:55.634081 4985 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bd4e7de-4244-4c33-90eb-799159106b7b-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:06:55 crc kubenswrapper[4985]: I0127 09:06:55.634093 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmv58\" (UniqueName: \"kubernetes.io/projected/5bd4e7de-4244-4c33-90eb-799159106b7b-kube-api-access-kmv58\") on node \"crc\" DevicePath \"\"" Jan 27 09:06:55 crc kubenswrapper[4985]: I0127 09:06:55.634108 4985 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5bd4e7de-4244-4c33-90eb-799159106b7b-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 27 09:06:55 crc kubenswrapper[4985]: I0127 09:06:55.634122 4985 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5bd4e7de-4244-4c33-90eb-799159106b7b-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 09:06:55 crc 
kubenswrapper[4985]: I0127 09:06:55.710786 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8h4vl"] Jan 27 09:06:55 crc kubenswrapper[4985]: E0127 09:06:55.711234 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9db6fe8-4ea9-4ba1-ac52-a1c6fcd476de" containerName="extract-utilities" Jan 27 09:06:55 crc kubenswrapper[4985]: I0127 09:06:55.711257 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9db6fe8-4ea9-4ba1-ac52-a1c6fcd476de" containerName="extract-utilities" Jan 27 09:06:55 crc kubenswrapper[4985]: E0127 09:06:55.711275 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9db6fe8-4ea9-4ba1-ac52-a1c6fcd476de" containerName="extract-content" Jan 27 09:06:55 crc kubenswrapper[4985]: I0127 09:06:55.711282 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9db6fe8-4ea9-4ba1-ac52-a1c6fcd476de" containerName="extract-content" Jan 27 09:06:55 crc kubenswrapper[4985]: E0127 09:06:55.711290 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9db6fe8-4ea9-4ba1-ac52-a1c6fcd476de" containerName="registry-server" Jan 27 09:06:55 crc kubenswrapper[4985]: I0127 09:06:55.711297 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9db6fe8-4ea9-4ba1-ac52-a1c6fcd476de" containerName="registry-server" Jan 27 09:06:55 crc kubenswrapper[4985]: E0127 09:06:55.711313 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bd4e7de-4244-4c33-90eb-799159106b7b" containerName="console" Jan 27 09:06:55 crc kubenswrapper[4985]: I0127 09:06:55.711319 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bd4e7de-4244-4c33-90eb-799159106b7b" containerName="console" Jan 27 09:06:55 crc kubenswrapper[4985]: I0127 09:06:55.711457 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9db6fe8-4ea9-4ba1-ac52-a1c6fcd476de" containerName="registry-server" Jan 27 09:06:55 crc kubenswrapper[4985]: I0127 09:06:55.711471 4985 
memory_manager.go:354] "RemoveStaleState removing state" podUID="5bd4e7de-4244-4c33-90eb-799159106b7b" containerName="console" Jan 27 09:06:55 crc kubenswrapper[4985]: I0127 09:06:55.712666 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8h4vl" Jan 27 09:06:55 crc kubenswrapper[4985]: I0127 09:06:55.716382 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8h4vl"] Jan 27 09:06:55 crc kubenswrapper[4985]: I0127 09:06:55.837203 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3610e93-6459-4ab2-835c-720ab8086f2d-utilities\") pod \"redhat-operators-8h4vl\" (UID: \"d3610e93-6459-4ab2-835c-720ab8086f2d\") " pod="openshift-marketplace/redhat-operators-8h4vl" Jan 27 09:06:55 crc kubenswrapper[4985]: I0127 09:06:55.837308 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ln4tk\" (UniqueName: \"kubernetes.io/projected/d3610e93-6459-4ab2-835c-720ab8086f2d-kube-api-access-ln4tk\") pod \"redhat-operators-8h4vl\" (UID: \"d3610e93-6459-4ab2-835c-720ab8086f2d\") " pod="openshift-marketplace/redhat-operators-8h4vl" Jan 27 09:06:55 crc kubenswrapper[4985]: I0127 09:06:55.837629 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3610e93-6459-4ab2-835c-720ab8086f2d-catalog-content\") pod \"redhat-operators-8h4vl\" (UID: \"d3610e93-6459-4ab2-835c-720ab8086f2d\") " pod="openshift-marketplace/redhat-operators-8h4vl" Jan 27 09:06:55 crc kubenswrapper[4985]: I0127 09:06:55.938641 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3610e93-6459-4ab2-835c-720ab8086f2d-catalog-content\") pod 
\"redhat-operators-8h4vl\" (UID: \"d3610e93-6459-4ab2-835c-720ab8086f2d\") " pod="openshift-marketplace/redhat-operators-8h4vl" Jan 27 09:06:55 crc kubenswrapper[4985]: I0127 09:06:55.938710 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3610e93-6459-4ab2-835c-720ab8086f2d-utilities\") pod \"redhat-operators-8h4vl\" (UID: \"d3610e93-6459-4ab2-835c-720ab8086f2d\") " pod="openshift-marketplace/redhat-operators-8h4vl" Jan 27 09:06:55 crc kubenswrapper[4985]: I0127 09:06:55.938749 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ln4tk\" (UniqueName: \"kubernetes.io/projected/d3610e93-6459-4ab2-835c-720ab8086f2d-kube-api-access-ln4tk\") pod \"redhat-operators-8h4vl\" (UID: \"d3610e93-6459-4ab2-835c-720ab8086f2d\") " pod="openshift-marketplace/redhat-operators-8h4vl" Jan 27 09:06:55 crc kubenswrapper[4985]: I0127 09:06:55.940219 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3610e93-6459-4ab2-835c-720ab8086f2d-utilities\") pod \"redhat-operators-8h4vl\" (UID: \"d3610e93-6459-4ab2-835c-720ab8086f2d\") " pod="openshift-marketplace/redhat-operators-8h4vl" Jan 27 09:06:55 crc kubenswrapper[4985]: I0127 09:06:55.940217 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3610e93-6459-4ab2-835c-720ab8086f2d-catalog-content\") pod \"redhat-operators-8h4vl\" (UID: \"d3610e93-6459-4ab2-835c-720ab8086f2d\") " pod="openshift-marketplace/redhat-operators-8h4vl" Jan 27 09:06:55 crc kubenswrapper[4985]: I0127 09:06:55.956504 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ln4tk\" (UniqueName: \"kubernetes.io/projected/d3610e93-6459-4ab2-835c-720ab8086f2d-kube-api-access-ln4tk\") pod \"redhat-operators-8h4vl\" (UID: 
\"d3610e93-6459-4ab2-835c-720ab8086f2d\") " pod="openshift-marketplace/redhat-operators-8h4vl" Jan 27 09:06:56 crc kubenswrapper[4985]: I0127 09:06:56.042486 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8h4vl" Jan 27 09:06:56 crc kubenswrapper[4985]: E0127 09:06:56.044523 4985 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod81071259_96da_4e05_a63b_b0e5544489ec.slice/crio-conmon-6cb64ddc5a924ec79c63fa4d6a264f384f67d21b80d617e6da20f7d7359b8722.scope\": RecentStats: unable to find data in memory cache]" Jan 27 09:06:56 crc kubenswrapper[4985]: I0127 09:06:56.382726 4985 generic.go:334] "Generic (PLEG): container finished" podID="81071259-96da-4e05-a63b-b0e5544489ec" containerID="6cb64ddc5a924ec79c63fa4d6a264f384f67d21b80d617e6da20f7d7359b8722" exitCode=0 Jan 27 09:06:56 crc kubenswrapper[4985]: I0127 09:06:56.382852 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnng2r" event={"ID":"81071259-96da-4e05-a63b-b0e5544489ec","Type":"ContainerDied","Data":"6cb64ddc5a924ec79c63fa4d6a264f384f67d21b80d617e6da20f7d7359b8722"} Jan 27 09:06:56 crc kubenswrapper[4985]: I0127 09:06:56.383488 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-q7dv9" Jan 27 09:06:56 crc kubenswrapper[4985]: I0127 09:06:56.430616 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-q7dv9"] Jan 27 09:06:56 crc kubenswrapper[4985]: I0127 09:06:56.446190 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-q7dv9"] Jan 27 09:06:56 crc kubenswrapper[4985]: I0127 09:06:56.463575 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5bd4e7de-4244-4c33-90eb-799159106b7b" path="/var/lib/kubelet/pods/5bd4e7de-4244-4c33-90eb-799159106b7b/volumes" Jan 27 09:06:56 crc kubenswrapper[4985]: I0127 09:06:56.487903 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8h4vl"] Jan 27 09:06:56 crc kubenswrapper[4985]: W0127 09:06:56.494766 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3610e93_6459_4ab2_835c_720ab8086f2d.slice/crio-5362aa61f9f734df778d3af006b7f284a3c510e87ec60e00f448fc1e1cd58402 WatchSource:0}: Error finding container 5362aa61f9f734df778d3af006b7f284a3c510e87ec60e00f448fc1e1cd58402: Status 404 returned error can't find the container with id 5362aa61f9f734df778d3af006b7f284a3c510e87ec60e00f448fc1e1cd58402 Jan 27 09:06:57 crc kubenswrapper[4985]: I0127 09:06:57.393885 4985 generic.go:334] "Generic (PLEG): container finished" podID="81071259-96da-4e05-a63b-b0e5544489ec" containerID="f0457c6731e58a53af5879c9aef335663964443a5448ed9e8ee7f31cda752b90" exitCode=0 Jan 27 09:06:57 crc kubenswrapper[4985]: I0127 09:06:57.394002 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnng2r" event={"ID":"81071259-96da-4e05-a63b-b0e5544489ec","Type":"ContainerDied","Data":"f0457c6731e58a53af5879c9aef335663964443a5448ed9e8ee7f31cda752b90"} Jan 27 09:06:57 
crc kubenswrapper[4985]: I0127 09:06:57.398304 4985 generic.go:334] "Generic (PLEG): container finished" podID="d3610e93-6459-4ab2-835c-720ab8086f2d" containerID="03e34e85f56fa950317ed6a864d55f772f4a9f37baf49650146e28d50bd51862" exitCode=0 Jan 27 09:06:57 crc kubenswrapper[4985]: I0127 09:06:57.398339 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8h4vl" event={"ID":"d3610e93-6459-4ab2-835c-720ab8086f2d","Type":"ContainerDied","Data":"03e34e85f56fa950317ed6a864d55f772f4a9f37baf49650146e28d50bd51862"} Jan 27 09:06:57 crc kubenswrapper[4985]: I0127 09:06:57.398374 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8h4vl" event={"ID":"d3610e93-6459-4ab2-835c-720ab8086f2d","Type":"ContainerStarted","Data":"5362aa61f9f734df778d3af006b7f284a3c510e87ec60e00f448fc1e1cd58402"} Jan 27 09:06:58 crc kubenswrapper[4985]: I0127 09:06:58.694449 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnng2r" Jan 27 09:06:58 crc kubenswrapper[4985]: I0127 09:06:58.787749 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gtq7n\" (UniqueName: \"kubernetes.io/projected/81071259-96da-4e05-a63b-b0e5544489ec-kube-api-access-gtq7n\") pod \"81071259-96da-4e05-a63b-b0e5544489ec\" (UID: \"81071259-96da-4e05-a63b-b0e5544489ec\") " Jan 27 09:06:58 crc kubenswrapper[4985]: I0127 09:06:58.788181 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/81071259-96da-4e05-a63b-b0e5544489ec-util\") pod \"81071259-96da-4e05-a63b-b0e5544489ec\" (UID: \"81071259-96da-4e05-a63b-b0e5544489ec\") " Jan 27 09:06:58 crc kubenswrapper[4985]: I0127 09:06:58.788243 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/81071259-96da-4e05-a63b-b0e5544489ec-bundle\") pod \"81071259-96da-4e05-a63b-b0e5544489ec\" (UID: \"81071259-96da-4e05-a63b-b0e5544489ec\") " Jan 27 09:06:58 crc kubenswrapper[4985]: I0127 09:06:58.789856 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81071259-96da-4e05-a63b-b0e5544489ec-bundle" (OuterVolumeSpecName: "bundle") pod "81071259-96da-4e05-a63b-b0e5544489ec" (UID: "81071259-96da-4e05-a63b-b0e5544489ec"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:06:58 crc kubenswrapper[4985]: I0127 09:06:58.797803 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81071259-96da-4e05-a63b-b0e5544489ec-kube-api-access-gtq7n" (OuterVolumeSpecName: "kube-api-access-gtq7n") pod "81071259-96da-4e05-a63b-b0e5544489ec" (UID: "81071259-96da-4e05-a63b-b0e5544489ec"). InnerVolumeSpecName "kube-api-access-gtq7n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:06:58 crc kubenswrapper[4985]: I0127 09:06:58.890775 4985 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/81071259-96da-4e05-a63b-b0e5544489ec-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:06:58 crc kubenswrapper[4985]: I0127 09:06:58.890829 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gtq7n\" (UniqueName: \"kubernetes.io/projected/81071259-96da-4e05-a63b-b0e5544489ec-kube-api-access-gtq7n\") on node \"crc\" DevicePath \"\"" Jan 27 09:06:59 crc kubenswrapper[4985]: I0127 09:06:59.104497 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81071259-96da-4e05-a63b-b0e5544489ec-util" (OuterVolumeSpecName: "util") pod "81071259-96da-4e05-a63b-b0e5544489ec" (UID: "81071259-96da-4e05-a63b-b0e5544489ec"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:06:59 crc kubenswrapper[4985]: I0127 09:06:59.194854 4985 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/81071259-96da-4e05-a63b-b0e5544489ec-util\") on node \"crc\" DevicePath \"\"" Jan 27 09:06:59 crc kubenswrapper[4985]: I0127 09:06:59.416878 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnng2r" event={"ID":"81071259-96da-4e05-a63b-b0e5544489ec","Type":"ContainerDied","Data":"5185c718d83ad9d95bb6cefb92b028c8d0cae30eba2af29b9d4bccbde53dbad5"} Jan 27 09:06:59 crc kubenswrapper[4985]: I0127 09:06:59.416926 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5185c718d83ad9d95bb6cefb92b028c8d0cae30eba2af29b9d4bccbde53dbad5" Jan 27 09:06:59 crc kubenswrapper[4985]: I0127 09:06:59.417088 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnng2r" Jan 27 09:06:59 crc kubenswrapper[4985]: I0127 09:06:59.418927 4985 generic.go:334] "Generic (PLEG): container finished" podID="d3610e93-6459-4ab2-835c-720ab8086f2d" containerID="2ff1260b72d18a7a7aea324d1d052ab32e756f56843e104a34ee348ff1c61f86" exitCode=0 Jan 27 09:06:59 crc kubenswrapper[4985]: I0127 09:06:59.419031 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8h4vl" event={"ID":"d3610e93-6459-4ab2-835c-720ab8086f2d","Type":"ContainerDied","Data":"2ff1260b72d18a7a7aea324d1d052ab32e756f56843e104a34ee348ff1c61f86"} Jan 27 09:07:00 crc kubenswrapper[4985]: I0127 09:07:00.426363 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8h4vl" 
event={"ID":"d3610e93-6459-4ab2-835c-720ab8086f2d","Type":"ContainerStarted","Data":"53b2237dd9feb55b11b6de2d73cb409971b184cb7746196c73da16ecd78ce5a3"} Jan 27 09:07:00 crc kubenswrapper[4985]: I0127 09:07:00.450084 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8h4vl" podStartSLOduration=2.975611962 podStartE2EDuration="5.450065996s" podCreationTimestamp="2026-01-27 09:06:55 +0000 UTC" firstStartedPulling="2026-01-27 09:06:57.402844614 +0000 UTC m=+801.693939475" lastFinishedPulling="2026-01-27 09:06:59.877298668 +0000 UTC m=+804.168393509" observedRunningTime="2026-01-27 09:07:00.447222688 +0000 UTC m=+804.738317539" watchObservedRunningTime="2026-01-27 09:07:00.450065996 +0000 UTC m=+804.741160837" Jan 27 09:07:06 crc kubenswrapper[4985]: I0127 09:07:06.042618 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8h4vl" Jan 27 09:07:06 crc kubenswrapper[4985]: I0127 09:07:06.043097 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8h4vl" Jan 27 09:07:07 crc kubenswrapper[4985]: I0127 09:07:07.147457 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8h4vl" podUID="d3610e93-6459-4ab2-835c-720ab8086f2d" containerName="registry-server" probeResult="failure" output=< Jan 27 09:07:07 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 27 09:07:07 crc kubenswrapper[4985]: > Jan 27 09:07:07 crc kubenswrapper[4985]: I0127 09:07:07.380364 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-85f9fcd7-r846b"] Jan 27 09:07:07 crc kubenswrapper[4985]: E0127 09:07:07.380698 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81071259-96da-4e05-a63b-b0e5544489ec" containerName="extract" Jan 27 09:07:07 crc kubenswrapper[4985]: 
I0127 09:07:07.380713 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="81071259-96da-4e05-a63b-b0e5544489ec" containerName="extract" Jan 27 09:07:07 crc kubenswrapper[4985]: E0127 09:07:07.380734 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81071259-96da-4e05-a63b-b0e5544489ec" containerName="pull" Jan 27 09:07:07 crc kubenswrapper[4985]: I0127 09:07:07.380740 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="81071259-96da-4e05-a63b-b0e5544489ec" containerName="pull" Jan 27 09:07:07 crc kubenswrapper[4985]: E0127 09:07:07.380750 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81071259-96da-4e05-a63b-b0e5544489ec" containerName="util" Jan 27 09:07:07 crc kubenswrapper[4985]: I0127 09:07:07.380757 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="81071259-96da-4e05-a63b-b0e5544489ec" containerName="util" Jan 27 09:07:07 crc kubenswrapper[4985]: I0127 09:07:07.380867 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="81071259-96da-4e05-a63b-b0e5544489ec" containerName="extract" Jan 27 09:07:07 crc kubenswrapper[4985]: I0127 09:07:07.381393 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-85f9fcd7-r846b" Jan 27 09:07:07 crc kubenswrapper[4985]: I0127 09:07:07.385972 4985 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 27 09:07:07 crc kubenswrapper[4985]: I0127 09:07:07.386111 4985 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-zn8vl" Jan 27 09:07:07 crc kubenswrapper[4985]: I0127 09:07:07.386732 4985 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 27 09:07:07 crc kubenswrapper[4985]: I0127 09:07:07.388101 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 27 09:07:07 crc kubenswrapper[4985]: I0127 09:07:07.389206 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 27 09:07:07 crc kubenswrapper[4985]: I0127 09:07:07.406000 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-85f9fcd7-r846b"] Jan 27 09:07:07 crc kubenswrapper[4985]: I0127 09:07:07.521587 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d031825c-2ec3-42ab-825a-25a071b0c80b-apiservice-cert\") pod \"metallb-operator-controller-manager-85f9fcd7-r846b\" (UID: \"d031825c-2ec3-42ab-825a-25a071b0c80b\") " pod="metallb-system/metallb-operator-controller-manager-85f9fcd7-r846b" Jan 27 09:07:07 crc kubenswrapper[4985]: I0127 09:07:07.521741 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d031825c-2ec3-42ab-825a-25a071b0c80b-webhook-cert\") pod \"metallb-operator-controller-manager-85f9fcd7-r846b\" (UID: 
\"d031825c-2ec3-42ab-825a-25a071b0c80b\") " pod="metallb-system/metallb-operator-controller-manager-85f9fcd7-r846b" Jan 27 09:07:07 crc kubenswrapper[4985]: I0127 09:07:07.521800 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzc86\" (UniqueName: \"kubernetes.io/projected/d031825c-2ec3-42ab-825a-25a071b0c80b-kube-api-access-pzc86\") pod \"metallb-operator-controller-manager-85f9fcd7-r846b\" (UID: \"d031825c-2ec3-42ab-825a-25a071b0c80b\") " pod="metallb-system/metallb-operator-controller-manager-85f9fcd7-r846b" Jan 27 09:07:07 crc kubenswrapper[4985]: I0127 09:07:07.623242 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d031825c-2ec3-42ab-825a-25a071b0c80b-webhook-cert\") pod \"metallb-operator-controller-manager-85f9fcd7-r846b\" (UID: \"d031825c-2ec3-42ab-825a-25a071b0c80b\") " pod="metallb-system/metallb-operator-controller-manager-85f9fcd7-r846b" Jan 27 09:07:07 crc kubenswrapper[4985]: I0127 09:07:07.623322 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzc86\" (UniqueName: \"kubernetes.io/projected/d031825c-2ec3-42ab-825a-25a071b0c80b-kube-api-access-pzc86\") pod \"metallb-operator-controller-manager-85f9fcd7-r846b\" (UID: \"d031825c-2ec3-42ab-825a-25a071b0c80b\") " pod="metallb-system/metallb-operator-controller-manager-85f9fcd7-r846b" Jan 27 09:07:07 crc kubenswrapper[4985]: I0127 09:07:07.623384 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d031825c-2ec3-42ab-825a-25a071b0c80b-apiservice-cert\") pod \"metallb-operator-controller-manager-85f9fcd7-r846b\" (UID: \"d031825c-2ec3-42ab-825a-25a071b0c80b\") " pod="metallb-system/metallb-operator-controller-manager-85f9fcd7-r846b" Jan 27 09:07:07 crc kubenswrapper[4985]: I0127 09:07:07.632406 4985 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d031825c-2ec3-42ab-825a-25a071b0c80b-webhook-cert\") pod \"metallb-operator-controller-manager-85f9fcd7-r846b\" (UID: \"d031825c-2ec3-42ab-825a-25a071b0c80b\") " pod="metallb-system/metallb-operator-controller-manager-85f9fcd7-r846b" Jan 27 09:07:07 crc kubenswrapper[4985]: I0127 09:07:07.638842 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d031825c-2ec3-42ab-825a-25a071b0c80b-apiservice-cert\") pod \"metallb-operator-controller-manager-85f9fcd7-r846b\" (UID: \"d031825c-2ec3-42ab-825a-25a071b0c80b\") " pod="metallb-system/metallb-operator-controller-manager-85f9fcd7-r846b" Jan 27 09:07:07 crc kubenswrapper[4985]: I0127 09:07:07.644363 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzc86\" (UniqueName: \"kubernetes.io/projected/d031825c-2ec3-42ab-825a-25a071b0c80b-kube-api-access-pzc86\") pod \"metallb-operator-controller-manager-85f9fcd7-r846b\" (UID: \"d031825c-2ec3-42ab-825a-25a071b0c80b\") " pod="metallb-system/metallb-operator-controller-manager-85f9fcd7-r846b" Jan 27 09:07:07 crc kubenswrapper[4985]: I0127 09:07:07.696983 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-85f9fcd7-r846b" Jan 27 09:07:07 crc kubenswrapper[4985]: I0127 09:07:07.780930 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-867db48886-7gj8n"] Jan 27 09:07:07 crc kubenswrapper[4985]: I0127 09:07:07.782770 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-867db48886-7gj8n" Jan 27 09:07:07 crc kubenswrapper[4985]: I0127 09:07:07.785403 4985 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 27 09:07:07 crc kubenswrapper[4985]: I0127 09:07:07.786076 4985 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 27 09:07:07 crc kubenswrapper[4985]: I0127 09:07:07.786406 4985 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-2wm2q" Jan 27 09:07:07 crc kubenswrapper[4985]: I0127 09:07:07.798289 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-867db48886-7gj8n"] Jan 27 09:07:07 crc kubenswrapper[4985]: I0127 09:07:07.936159 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c45cc1f7-8ab2-44c6-82de-bb59d24163b7-webhook-cert\") pod \"metallb-operator-webhook-server-867db48886-7gj8n\" (UID: \"c45cc1f7-8ab2-44c6-82de-bb59d24163b7\") " pod="metallb-system/metallb-operator-webhook-server-867db48886-7gj8n" Jan 27 09:07:07 crc kubenswrapper[4985]: I0127 09:07:07.936811 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c45cc1f7-8ab2-44c6-82de-bb59d24163b7-apiservice-cert\") pod \"metallb-operator-webhook-server-867db48886-7gj8n\" (UID: \"c45cc1f7-8ab2-44c6-82de-bb59d24163b7\") " pod="metallb-system/metallb-operator-webhook-server-867db48886-7gj8n" Jan 27 09:07:07 crc kubenswrapper[4985]: I0127 09:07:07.936873 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zd9x4\" (UniqueName: 
\"kubernetes.io/projected/c45cc1f7-8ab2-44c6-82de-bb59d24163b7-kube-api-access-zd9x4\") pod \"metallb-operator-webhook-server-867db48886-7gj8n\" (UID: \"c45cc1f7-8ab2-44c6-82de-bb59d24163b7\") " pod="metallb-system/metallb-operator-webhook-server-867db48886-7gj8n"
Jan 27 09:07:08 crc kubenswrapper[4985]: I0127 09:07:08.031158 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-85f9fcd7-r846b"]
Jan 27 09:07:08 crc kubenswrapper[4985]: I0127 09:07:08.037970 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zd9x4\" (UniqueName: \"kubernetes.io/projected/c45cc1f7-8ab2-44c6-82de-bb59d24163b7-kube-api-access-zd9x4\") pod \"metallb-operator-webhook-server-867db48886-7gj8n\" (UID: \"c45cc1f7-8ab2-44c6-82de-bb59d24163b7\") " pod="metallb-system/metallb-operator-webhook-server-867db48886-7gj8n"
Jan 27 09:07:08 crc kubenswrapper[4985]: I0127 09:07:08.038023 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c45cc1f7-8ab2-44c6-82de-bb59d24163b7-webhook-cert\") pod \"metallb-operator-webhook-server-867db48886-7gj8n\" (UID: \"c45cc1f7-8ab2-44c6-82de-bb59d24163b7\") " pod="metallb-system/metallb-operator-webhook-server-867db48886-7gj8n"
Jan 27 09:07:08 crc kubenswrapper[4985]: I0127 09:07:08.038072 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c45cc1f7-8ab2-44c6-82de-bb59d24163b7-apiservice-cert\") pod \"metallb-operator-webhook-server-867db48886-7gj8n\" (UID: \"c45cc1f7-8ab2-44c6-82de-bb59d24163b7\") " pod="metallb-system/metallb-operator-webhook-server-867db48886-7gj8n"
Jan 27 09:07:08 crc kubenswrapper[4985]: I0127 09:07:08.045003 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c45cc1f7-8ab2-44c6-82de-bb59d24163b7-apiservice-cert\") pod \"metallb-operator-webhook-server-867db48886-7gj8n\" (UID: \"c45cc1f7-8ab2-44c6-82de-bb59d24163b7\") " pod="metallb-system/metallb-operator-webhook-server-867db48886-7gj8n"
Jan 27 09:07:08 crc kubenswrapper[4985]: I0127 09:07:08.048478 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c45cc1f7-8ab2-44c6-82de-bb59d24163b7-webhook-cert\") pod \"metallb-operator-webhook-server-867db48886-7gj8n\" (UID: \"c45cc1f7-8ab2-44c6-82de-bb59d24163b7\") " pod="metallb-system/metallb-operator-webhook-server-867db48886-7gj8n"
Jan 27 09:07:08 crc kubenswrapper[4985]: I0127 09:07:08.061365 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zd9x4\" (UniqueName: \"kubernetes.io/projected/c45cc1f7-8ab2-44c6-82de-bb59d24163b7-kube-api-access-zd9x4\") pod \"metallb-operator-webhook-server-867db48886-7gj8n\" (UID: \"c45cc1f7-8ab2-44c6-82de-bb59d24163b7\") " pod="metallb-system/metallb-operator-webhook-server-867db48886-7gj8n"
Jan 27 09:07:08 crc kubenswrapper[4985]: I0127 09:07:08.100178 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-867db48886-7gj8n"
Jan 27 09:07:08 crc kubenswrapper[4985]: I0127 09:07:08.362647 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-867db48886-7gj8n"]
Jan 27 09:07:08 crc kubenswrapper[4985]: W0127 09:07:08.367064 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc45cc1f7_8ab2_44c6_82de_bb59d24163b7.slice/crio-6783a68d11fb9fdb3b4f17cc01d115f543f269fa7bc6d85499b5abed076b7f82 WatchSource:0}: Error finding container 6783a68d11fb9fdb3b4f17cc01d115f543f269fa7bc6d85499b5abed076b7f82: Status 404 returned error can't find the container with id 6783a68d11fb9fdb3b4f17cc01d115f543f269fa7bc6d85499b5abed076b7f82
Jan 27 09:07:08 crc kubenswrapper[4985]: I0127 09:07:08.480649 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-867db48886-7gj8n" event={"ID":"c45cc1f7-8ab2-44c6-82de-bb59d24163b7","Type":"ContainerStarted","Data":"6783a68d11fb9fdb3b4f17cc01d115f543f269fa7bc6d85499b5abed076b7f82"}
Jan 27 09:07:08 crc kubenswrapper[4985]: I0127 09:07:08.481732 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-85f9fcd7-r846b" event={"ID":"d031825c-2ec3-42ab-825a-25a071b0c80b","Type":"ContainerStarted","Data":"c13676dc515b3c3e819e86c11c1ac5dbf0c3588c3ebc3e8b99786051ecd6ad0a"}
Jan 27 09:07:11 crc kubenswrapper[4985]: I0127 09:07:11.827901 4985 patch_prober.go:28] interesting pod/machine-config-daemon-lp9n5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 09:07:11 crc kubenswrapper[4985]: I0127 09:07:11.828221 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 09:07:15 crc kubenswrapper[4985]: I0127 09:07:15.532457 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-867db48886-7gj8n" event={"ID":"c45cc1f7-8ab2-44c6-82de-bb59d24163b7","Type":"ContainerStarted","Data":"f5503ceb1d132f464806e1178851368ad776ec31d6ac5063feda063091f3a9d2"}
Jan 27 09:07:15 crc kubenswrapper[4985]: I0127 09:07:15.533521 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-867db48886-7gj8n"
Jan 27 09:07:15 crc kubenswrapper[4985]: I0127 09:07:15.535183 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-85f9fcd7-r846b" event={"ID":"d031825c-2ec3-42ab-825a-25a071b0c80b","Type":"ContainerStarted","Data":"3230bcbd4d4fa6e1eb1ad1a469a058aec3bf02a09d9310d07f8eba78eef9da4b"}
Jan 27 09:07:15 crc kubenswrapper[4985]: I0127 09:07:15.535400 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-85f9fcd7-r846b"
Jan 27 09:07:15 crc kubenswrapper[4985]: I0127 09:07:15.579238 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-867db48886-7gj8n" podStartSLOduration=2.178883289 podStartE2EDuration="8.579217397s" podCreationTimestamp="2026-01-27 09:07:07 +0000 UTC" firstStartedPulling="2026-01-27 09:07:08.370750408 +0000 UTC m=+812.661845249" lastFinishedPulling="2026-01-27 09:07:14.771084516 +0000 UTC m=+819.062179357" observedRunningTime="2026-01-27 09:07:15.556396092 +0000 UTC m=+819.847490953" watchObservedRunningTime="2026-01-27 09:07:15.579217397 +0000 UTC m=+819.870312238"
Jan 27 09:07:15 crc kubenswrapper[4985]: I0127 09:07:15.579824 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-85f9fcd7-r846b" podStartSLOduration=1.867680908 podStartE2EDuration="8.579819684s" podCreationTimestamp="2026-01-27 09:07:07 +0000 UTC" firstStartedPulling="2026-01-27 09:07:08.045652427 +0000 UTC m=+812.336747268" lastFinishedPulling="2026-01-27 09:07:14.757791203 +0000 UTC m=+819.048886044" observedRunningTime="2026-01-27 09:07:15.578899619 +0000 UTC m=+819.869994500" watchObservedRunningTime="2026-01-27 09:07:15.579819684 +0000 UTC m=+819.870914525"
Jan 27 09:07:16 crc kubenswrapper[4985]: I0127 09:07:16.091365 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8h4vl"
Jan 27 09:07:16 crc kubenswrapper[4985]: I0127 09:07:16.132402 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8h4vl"
Jan 27 09:07:16 crc kubenswrapper[4985]: I0127 09:07:16.327620 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8h4vl"]
Jan 27 09:07:17 crc kubenswrapper[4985]: I0127 09:07:17.546371 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8h4vl" podUID="d3610e93-6459-4ab2-835c-720ab8086f2d" containerName="registry-server" containerID="cri-o://53b2237dd9feb55b11b6de2d73cb409971b184cb7746196c73da16ecd78ce5a3" gracePeriod=2
Jan 27 09:07:17 crc kubenswrapper[4985]: I0127 09:07:17.937552 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8h4vl"
Jan 27 09:07:18 crc kubenswrapper[4985]: I0127 09:07:18.097434 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ln4tk\" (UniqueName: \"kubernetes.io/projected/d3610e93-6459-4ab2-835c-720ab8086f2d-kube-api-access-ln4tk\") pod \"d3610e93-6459-4ab2-835c-720ab8086f2d\" (UID: \"d3610e93-6459-4ab2-835c-720ab8086f2d\") "
Jan 27 09:07:18 crc kubenswrapper[4985]: I0127 09:07:18.097515 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3610e93-6459-4ab2-835c-720ab8086f2d-utilities\") pod \"d3610e93-6459-4ab2-835c-720ab8086f2d\" (UID: \"d3610e93-6459-4ab2-835c-720ab8086f2d\") "
Jan 27 09:07:18 crc kubenswrapper[4985]: I0127 09:07:18.097554 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3610e93-6459-4ab2-835c-720ab8086f2d-catalog-content\") pod \"d3610e93-6459-4ab2-835c-720ab8086f2d\" (UID: \"d3610e93-6459-4ab2-835c-720ab8086f2d\") "
Jan 27 09:07:18 crc kubenswrapper[4985]: I0127 09:07:18.099245 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3610e93-6459-4ab2-835c-720ab8086f2d-utilities" (OuterVolumeSpecName: "utilities") pod "d3610e93-6459-4ab2-835c-720ab8086f2d" (UID: "d3610e93-6459-4ab2-835c-720ab8086f2d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 09:07:18 crc kubenswrapper[4985]: I0127 09:07:18.115007 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3610e93-6459-4ab2-835c-720ab8086f2d-kube-api-access-ln4tk" (OuterVolumeSpecName: "kube-api-access-ln4tk") pod "d3610e93-6459-4ab2-835c-720ab8086f2d" (UID: "d3610e93-6459-4ab2-835c-720ab8086f2d"). InnerVolumeSpecName "kube-api-access-ln4tk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 09:07:18 crc kubenswrapper[4985]: I0127 09:07:18.199041 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ln4tk\" (UniqueName: \"kubernetes.io/projected/d3610e93-6459-4ab2-835c-720ab8086f2d-kube-api-access-ln4tk\") on node \"crc\" DevicePath \"\""
Jan 27 09:07:18 crc kubenswrapper[4985]: I0127 09:07:18.199070 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3610e93-6459-4ab2-835c-720ab8086f2d-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 09:07:18 crc kubenswrapper[4985]: I0127 09:07:18.211809 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3610e93-6459-4ab2-835c-720ab8086f2d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d3610e93-6459-4ab2-835c-720ab8086f2d" (UID: "d3610e93-6459-4ab2-835c-720ab8086f2d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 09:07:18 crc kubenswrapper[4985]: I0127 09:07:18.299791 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3610e93-6459-4ab2-835c-720ab8086f2d-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 09:07:18 crc kubenswrapper[4985]: I0127 09:07:18.554066 4985 generic.go:334] "Generic (PLEG): container finished" podID="d3610e93-6459-4ab2-835c-720ab8086f2d" containerID="53b2237dd9feb55b11b6de2d73cb409971b184cb7746196c73da16ecd78ce5a3" exitCode=0
Jan 27 09:07:18 crc kubenswrapper[4985]: I0127 09:07:18.554116 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8h4vl" event={"ID":"d3610e93-6459-4ab2-835c-720ab8086f2d","Type":"ContainerDied","Data":"53b2237dd9feb55b11b6de2d73cb409971b184cb7746196c73da16ecd78ce5a3"}
Jan 27 09:07:18 crc kubenswrapper[4985]: I0127 09:07:18.554141 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8h4vl" event={"ID":"d3610e93-6459-4ab2-835c-720ab8086f2d","Type":"ContainerDied","Data":"5362aa61f9f734df778d3af006b7f284a3c510e87ec60e00f448fc1e1cd58402"}
Jan 27 09:07:18 crc kubenswrapper[4985]: I0127 09:07:18.554158 4985 scope.go:117] "RemoveContainer" containerID="53b2237dd9feb55b11b6de2d73cb409971b184cb7746196c73da16ecd78ce5a3"
Jan 27 09:07:18 crc kubenswrapper[4985]: I0127 09:07:18.554260 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8h4vl"
Jan 27 09:07:18 crc kubenswrapper[4985]: I0127 09:07:18.575186 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8h4vl"]
Jan 27 09:07:18 crc kubenswrapper[4985]: I0127 09:07:18.579345 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8h4vl"]
Jan 27 09:07:18 crc kubenswrapper[4985]: I0127 09:07:18.586732 4985 scope.go:117] "RemoveContainer" containerID="2ff1260b72d18a7a7aea324d1d052ab32e756f56843e104a34ee348ff1c61f86"
Jan 27 09:07:18 crc kubenswrapper[4985]: I0127 09:07:18.623022 4985 scope.go:117] "RemoveContainer" containerID="03e34e85f56fa950317ed6a864d55f772f4a9f37baf49650146e28d50bd51862"
Jan 27 09:07:18 crc kubenswrapper[4985]: I0127 09:07:18.651391 4985 scope.go:117] "RemoveContainer" containerID="53b2237dd9feb55b11b6de2d73cb409971b184cb7746196c73da16ecd78ce5a3"
Jan 27 09:07:18 crc kubenswrapper[4985]: E0127 09:07:18.651750 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53b2237dd9feb55b11b6de2d73cb409971b184cb7746196c73da16ecd78ce5a3\": container with ID starting with 53b2237dd9feb55b11b6de2d73cb409971b184cb7746196c73da16ecd78ce5a3 not found: ID does not exist" containerID="53b2237dd9feb55b11b6de2d73cb409971b184cb7746196c73da16ecd78ce5a3"
Jan 27 09:07:18 crc kubenswrapper[4985]: I0127 09:07:18.651780 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53b2237dd9feb55b11b6de2d73cb409971b184cb7746196c73da16ecd78ce5a3"} err="failed to get container status \"53b2237dd9feb55b11b6de2d73cb409971b184cb7746196c73da16ecd78ce5a3\": rpc error: code = NotFound desc = could not find container \"53b2237dd9feb55b11b6de2d73cb409971b184cb7746196c73da16ecd78ce5a3\": container with ID starting with 53b2237dd9feb55b11b6de2d73cb409971b184cb7746196c73da16ecd78ce5a3 not found: ID does not exist"
Jan 27 09:07:18 crc kubenswrapper[4985]: I0127 09:07:18.651802 4985 scope.go:117] "RemoveContainer" containerID="2ff1260b72d18a7a7aea324d1d052ab32e756f56843e104a34ee348ff1c61f86"
Jan 27 09:07:18 crc kubenswrapper[4985]: E0127 09:07:18.652245 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ff1260b72d18a7a7aea324d1d052ab32e756f56843e104a34ee348ff1c61f86\": container with ID starting with 2ff1260b72d18a7a7aea324d1d052ab32e756f56843e104a34ee348ff1c61f86 not found: ID does not exist" containerID="2ff1260b72d18a7a7aea324d1d052ab32e756f56843e104a34ee348ff1c61f86"
Jan 27 09:07:18 crc kubenswrapper[4985]: I0127 09:07:18.652267 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ff1260b72d18a7a7aea324d1d052ab32e756f56843e104a34ee348ff1c61f86"} err="failed to get container status \"2ff1260b72d18a7a7aea324d1d052ab32e756f56843e104a34ee348ff1c61f86\": rpc error: code = NotFound desc = could not find container \"2ff1260b72d18a7a7aea324d1d052ab32e756f56843e104a34ee348ff1c61f86\": container with ID starting with 2ff1260b72d18a7a7aea324d1d052ab32e756f56843e104a34ee348ff1c61f86 not found: ID does not exist"
Jan 27 09:07:18 crc kubenswrapper[4985]: I0127 09:07:18.652279 4985 scope.go:117] "RemoveContainer" containerID="03e34e85f56fa950317ed6a864d55f772f4a9f37baf49650146e28d50bd51862"
Jan 27 09:07:18 crc kubenswrapper[4985]: E0127 09:07:18.652496 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03e34e85f56fa950317ed6a864d55f772f4a9f37baf49650146e28d50bd51862\": container with ID starting with 03e34e85f56fa950317ed6a864d55f772f4a9f37baf49650146e28d50bd51862 not found: ID does not exist" containerID="03e34e85f56fa950317ed6a864d55f772f4a9f37baf49650146e28d50bd51862"
Jan 27 09:07:18 crc kubenswrapper[4985]: I0127 09:07:18.652517 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03e34e85f56fa950317ed6a864d55f772f4a9f37baf49650146e28d50bd51862"} err="failed to get container status \"03e34e85f56fa950317ed6a864d55f772f4a9f37baf49650146e28d50bd51862\": rpc error: code = NotFound desc = could not find container \"03e34e85f56fa950317ed6a864d55f772f4a9f37baf49650146e28d50bd51862\": container with ID starting with 03e34e85f56fa950317ed6a864d55f772f4a9f37baf49650146e28d50bd51862 not found: ID does not exist"
Jan 27 09:07:20 crc kubenswrapper[4985]: I0127 09:07:20.466825 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3610e93-6459-4ab2-835c-720ab8086f2d" path="/var/lib/kubelet/pods/d3610e93-6459-4ab2-835c-720ab8086f2d/volumes"
Jan 27 09:07:28 crc kubenswrapper[4985]: I0127 09:07:28.106180 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-867db48886-7gj8n"
Jan 27 09:07:36 crc kubenswrapper[4985]: I0127 09:07:36.796768 4985 scope.go:117] "RemoveContainer" containerID="44db6cd45c04a3f8ef36cd7980b452fe87ba2462eb54dc804a87733a68c32c3f"
Jan 27 09:07:41 crc kubenswrapper[4985]: I0127 09:07:41.828627 4985 patch_prober.go:28] interesting pod/machine-config-daemon-lp9n5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 09:07:41 crc kubenswrapper[4985]: I0127 09:07:41.829243 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 09:07:41 crc kubenswrapper[4985]: I0127 09:07:41.829295 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5"
Jan 27 09:07:41 crc kubenswrapper[4985]: I0127 09:07:41.830010 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0e4881b17c436c59c3960f9c1b311810a8744ae3641df94bf63c98dbfa41b302"} pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 27 09:07:41 crc kubenswrapper[4985]: I0127 09:07:41.830082 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" containerName="machine-config-daemon" containerID="cri-o://0e4881b17c436c59c3960f9c1b311810a8744ae3641df94bf63c98dbfa41b302" gracePeriod=600
Jan 27 09:07:42 crc kubenswrapper[4985]: I0127 09:07:42.698105 4985 generic.go:334] "Generic (PLEG): container finished" podID="c066dd2f-48d4-4f4f-935d-0e772678e610" containerID="0e4881b17c436c59c3960f9c1b311810a8744ae3641df94bf63c98dbfa41b302" exitCode=0
Jan 27 09:07:42 crc kubenswrapper[4985]: I0127 09:07:42.698459 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" event={"ID":"c066dd2f-48d4-4f4f-935d-0e772678e610","Type":"ContainerDied","Data":"0e4881b17c436c59c3960f9c1b311810a8744ae3641df94bf63c98dbfa41b302"}
Jan 27 09:07:42 crc kubenswrapper[4985]: I0127 09:07:42.698492 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" event={"ID":"c066dd2f-48d4-4f4f-935d-0e772678e610","Type":"ContainerStarted","Data":"b9c506eebfd71669bdc5889fb3856b5801f49a73fb4a1c7c6112e1365072bb8b"}
Jan 27 09:07:42 crc kubenswrapper[4985]: I0127 09:07:42.698530 4985 scope.go:117] "RemoveContainer" containerID="9feadc3b02691252615c9433b9fe2d9d45af231376e52663f8b1f7a17b547166"
Jan 27 09:07:47 crc kubenswrapper[4985]: I0127 09:07:47.699288 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-85f9fcd7-r846b"
Jan 27 09:07:48 crc kubenswrapper[4985]: I0127 09:07:48.419163 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-qnpz5"]
Jan 27 09:07:48 crc kubenswrapper[4985]: E0127 09:07:48.419653 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3610e93-6459-4ab2-835c-720ab8086f2d" containerName="extract-content"
Jan 27 09:07:48 crc kubenswrapper[4985]: I0127 09:07:48.419664 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3610e93-6459-4ab2-835c-720ab8086f2d" containerName="extract-content"
Jan 27 09:07:48 crc kubenswrapper[4985]: E0127 09:07:48.419678 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3610e93-6459-4ab2-835c-720ab8086f2d" containerName="extract-utilities"
Jan 27 09:07:48 crc kubenswrapper[4985]: I0127 09:07:48.419684 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3610e93-6459-4ab2-835c-720ab8086f2d" containerName="extract-utilities"
Jan 27 09:07:48 crc kubenswrapper[4985]: E0127 09:07:48.419700 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3610e93-6459-4ab2-835c-720ab8086f2d" containerName="registry-server"
Jan 27 09:07:48 crc kubenswrapper[4985]: I0127 09:07:48.419705 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3610e93-6459-4ab2-835c-720ab8086f2d" containerName="registry-server"
Jan 27 09:07:48 crc kubenswrapper[4985]: I0127 09:07:48.419810 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3610e93-6459-4ab2-835c-720ab8086f2d" containerName="registry-server"
Jan 27 09:07:48 crc kubenswrapper[4985]: I0127 09:07:48.420176 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-qnpz5"
Jan 27 09:07:48 crc kubenswrapper[4985]: I0127 09:07:48.427793 4985 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert"
Jan 27 09:07:48 crc kubenswrapper[4985]: I0127 09:07:48.427854 4985 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-2bj95"
Jan 27 09:07:48 crc kubenswrapper[4985]: I0127 09:07:48.432764 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-qhg7r"]
Jan 27 09:07:48 crc kubenswrapper[4985]: I0127 09:07:48.435286 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-qhg7r"
Jan 27 09:07:48 crc kubenswrapper[4985]: I0127 09:07:48.437630 4985 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret"
Jan 27 09:07:48 crc kubenswrapper[4985]: I0127 09:07:48.437740 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup"
Jan 27 09:07:48 crc kubenswrapper[4985]: I0127 09:07:48.444390 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-qnpz5"]
Jan 27 09:07:48 crc kubenswrapper[4985]: I0127 09:07:48.516893 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/d546a725-d293-47b7-a9c6-92988ba0060d-frr-sockets\") pod \"frr-k8s-qhg7r\" (UID: \"d546a725-d293-47b7-a9c6-92988ba0060d\") " pod="metallb-system/frr-k8s-qhg7r"
Jan 27 09:07:48 crc kubenswrapper[4985]: I0127 09:07:48.516947 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/d546a725-d293-47b7-a9c6-92988ba0060d-reloader\") pod \"frr-k8s-qhg7r\" (UID: \"d546a725-d293-47b7-a9c6-92988ba0060d\") " pod="metallb-system/frr-k8s-qhg7r"
Jan 27 09:07:48 crc kubenswrapper[4985]: I0127 09:07:48.517039 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/d546a725-d293-47b7-a9c6-92988ba0060d-frr-startup\") pod \"frr-k8s-qhg7r\" (UID: \"d546a725-d293-47b7-a9c6-92988ba0060d\") " pod="metallb-system/frr-k8s-qhg7r"
Jan 27 09:07:48 crc kubenswrapper[4985]: I0127 09:07:48.517073 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75wpb\" (UniqueName: \"kubernetes.io/projected/f9a30d10-2c17-412f-bb84-dd9bc6bf4487-kube-api-access-75wpb\") pod \"frr-k8s-webhook-server-7df86c4f6c-qnpz5\" (UID: \"f9a30d10-2c17-412f-bb84-dd9bc6bf4487\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-qnpz5"
Jan 27 09:07:48 crc kubenswrapper[4985]: I0127 09:07:48.517112 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f9a30d10-2c17-412f-bb84-dd9bc6bf4487-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-qnpz5\" (UID: \"f9a30d10-2c17-412f-bb84-dd9bc6bf4487\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-qnpz5"
Jan 27 09:07:48 crc kubenswrapper[4985]: I0127 09:07:48.517129 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/d546a725-d293-47b7-a9c6-92988ba0060d-frr-conf\") pod \"frr-k8s-qhg7r\" (UID: \"d546a725-d293-47b7-a9c6-92988ba0060d\") " pod="metallb-system/frr-k8s-qhg7r"
Jan 27 09:07:48 crc kubenswrapper[4985]: I0127 09:07:48.517149 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvdzx\" (UniqueName: \"kubernetes.io/projected/d546a725-d293-47b7-a9c6-92988ba0060d-kube-api-access-lvdzx\") pod \"frr-k8s-qhg7r\" (UID: \"d546a725-d293-47b7-a9c6-92988ba0060d\") " pod="metallb-system/frr-k8s-qhg7r"
Jan 27 09:07:48 crc kubenswrapper[4985]: I0127 09:07:48.517177 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d546a725-d293-47b7-a9c6-92988ba0060d-metrics-certs\") pod \"frr-k8s-qhg7r\" (UID: \"d546a725-d293-47b7-a9c6-92988ba0060d\") " pod="metallb-system/frr-k8s-qhg7r"
Jan 27 09:07:48 crc kubenswrapper[4985]: I0127 09:07:48.517204 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/d546a725-d293-47b7-a9c6-92988ba0060d-metrics\") pod \"frr-k8s-qhg7r\" (UID: \"d546a725-d293-47b7-a9c6-92988ba0060d\") " pod="metallb-system/frr-k8s-qhg7r"
Jan 27 09:07:48 crc kubenswrapper[4985]: I0127 09:07:48.523785 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-nh6c4"]
Jan 27 09:07:48 crc kubenswrapper[4985]: I0127 09:07:48.526683 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-nh6c4"
Jan 27 09:07:48 crc kubenswrapper[4985]: I0127 09:07:48.528325 4985 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-pw966"
Jan 27 09:07:48 crc kubenswrapper[4985]: I0127 09:07:48.530357 4985 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret"
Jan 27 09:07:48 crc kubenswrapper[4985]: I0127 09:07:48.530601 4985 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist"
Jan 27 09:07:48 crc kubenswrapper[4985]: I0127 09:07:48.530776 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2"
Jan 27 09:07:48 crc kubenswrapper[4985]: I0127 09:07:48.533170 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-k8hcc"]
Jan 27 09:07:48 crc kubenswrapper[4985]: I0127 09:07:48.540019 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-k8hcc"
Jan 27 09:07:48 crc kubenswrapper[4985]: I0127 09:07:48.541546 4985 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret"
Jan 27 09:07:48 crc kubenswrapper[4985]: I0127 09:07:48.558975 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-k8hcc"]
Jan 27 09:07:48 crc kubenswrapper[4985]: I0127 09:07:48.618298 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4bc4a510-3768-4111-8646-d3d4a0d6a70e-memberlist\") pod \"speaker-nh6c4\" (UID: \"4bc4a510-3768-4111-8646-d3d4a0d6a70e\") " pod="metallb-system/speaker-nh6c4"
Jan 27 09:07:48 crc kubenswrapper[4985]: I0127 09:07:48.618350 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/4bc4a510-3768-4111-8646-d3d4a0d6a70e-metallb-excludel2\") pod \"speaker-nh6c4\" (UID: \"4bc4a510-3768-4111-8646-d3d4a0d6a70e\") " pod="metallb-system/speaker-nh6c4"
Jan 27 09:07:48 crc kubenswrapper[4985]: I0127 09:07:48.618412 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2cf49ce6-0799-4fb6-bf66-ff00cdf92c44-metrics-certs\") pod \"controller-6968d8fdc4-k8hcc\" (UID: \"2cf49ce6-0799-4fb6-bf66-ff00cdf92c44\") " pod="metallb-system/controller-6968d8fdc4-k8hcc"
Jan 27 09:07:48 crc kubenswrapper[4985]: I0127 09:07:48.618437 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lszhh\" (UniqueName: \"kubernetes.io/projected/2cf49ce6-0799-4fb6-bf66-ff00cdf92c44-kube-api-access-lszhh\") pod \"controller-6968d8fdc4-k8hcc\" (UID: \"2cf49ce6-0799-4fb6-bf66-ff00cdf92c44\") " pod="metallb-system/controller-6968d8fdc4-k8hcc"
Jan 27 09:07:48 crc kubenswrapper[4985]: I0127 09:07:48.618468 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/d546a725-d293-47b7-a9c6-92988ba0060d-frr-startup\") pod \"frr-k8s-qhg7r\" (UID: \"d546a725-d293-47b7-a9c6-92988ba0060d\") " pod="metallb-system/frr-k8s-qhg7r"
Jan 27 09:07:48 crc kubenswrapper[4985]: I0127 09:07:48.618493 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2cf49ce6-0799-4fb6-bf66-ff00cdf92c44-cert\") pod \"controller-6968d8fdc4-k8hcc\" (UID: \"2cf49ce6-0799-4fb6-bf66-ff00cdf92c44\") " pod="metallb-system/controller-6968d8fdc4-k8hcc"
Jan 27 09:07:48 crc kubenswrapper[4985]: I0127 09:07:48.618543 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75wpb\" (UniqueName: \"kubernetes.io/projected/f9a30d10-2c17-412f-bb84-dd9bc6bf4487-kube-api-access-75wpb\") pod \"frr-k8s-webhook-server-7df86c4f6c-qnpz5\" (UID: \"f9a30d10-2c17-412f-bb84-dd9bc6bf4487\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-qnpz5"
Jan 27 09:07:48 crc kubenswrapper[4985]: I0127 09:07:48.618580 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f9a30d10-2c17-412f-bb84-dd9bc6bf4487-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-qnpz5\" (UID: \"f9a30d10-2c17-412f-bb84-dd9bc6bf4487\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-qnpz5"
Jan 27 09:07:48 crc kubenswrapper[4985]: I0127 09:07:48.618599 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/d546a725-d293-47b7-a9c6-92988ba0060d-frr-conf\") pod \"frr-k8s-qhg7r\" (UID: \"d546a725-d293-47b7-a9c6-92988ba0060d\") " pod="metallb-system/frr-k8s-qhg7r"
Jan 27 09:07:48 crc kubenswrapper[4985]: I0127 09:07:48.618619 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvdzx\" (UniqueName: \"kubernetes.io/projected/d546a725-d293-47b7-a9c6-92988ba0060d-kube-api-access-lvdzx\") pod \"frr-k8s-qhg7r\" (UID: \"d546a725-d293-47b7-a9c6-92988ba0060d\") " pod="metallb-system/frr-k8s-qhg7r"
Jan 27 09:07:48 crc kubenswrapper[4985]: I0127 09:07:48.618644 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d546a725-d293-47b7-a9c6-92988ba0060d-metrics-certs\") pod \"frr-k8s-qhg7r\" (UID: \"d546a725-d293-47b7-a9c6-92988ba0060d\") " pod="metallb-system/frr-k8s-qhg7r"
Jan 27 09:07:48 crc kubenswrapper[4985]: I0127 09:07:48.618671 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/d546a725-d293-47b7-a9c6-92988ba0060d-metrics\") pod \"frr-k8s-qhg7r\" (UID: \"d546a725-d293-47b7-a9c6-92988ba0060d\") " pod="metallb-system/frr-k8s-qhg7r"
Jan 27 09:07:48 crc kubenswrapper[4985]: I0127 09:07:48.618693 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4bc4a510-3768-4111-8646-d3d4a0d6a70e-metrics-certs\") pod \"speaker-nh6c4\" (UID: \"4bc4a510-3768-4111-8646-d3d4a0d6a70e\") " pod="metallb-system/speaker-nh6c4"
Jan 27 09:07:48 crc kubenswrapper[4985]: I0127 09:07:48.618721 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/d546a725-d293-47b7-a9c6-92988ba0060d-frr-sockets\") pod \"frr-k8s-qhg7r\" (UID: \"d546a725-d293-47b7-a9c6-92988ba0060d\") " pod="metallb-system/frr-k8s-qhg7r"
Jan 27 09:07:48 crc kubenswrapper[4985]: I0127 09:07:48.618744 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/d546a725-d293-47b7-a9c6-92988ba0060d-reloader\") pod \"frr-k8s-qhg7r\" (UID: \"d546a725-d293-47b7-a9c6-92988ba0060d\") " pod="metallb-system/frr-k8s-qhg7r"
Jan 27 09:07:48 crc kubenswrapper[4985]: I0127 09:07:48.618778 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42jd7\" (UniqueName: \"kubernetes.io/projected/4bc4a510-3768-4111-8646-d3d4a0d6a70e-kube-api-access-42jd7\") pod \"speaker-nh6c4\" (UID: \"4bc4a510-3768-4111-8646-d3d4a0d6a70e\") " pod="metallb-system/speaker-nh6c4"
Jan 27 09:07:48 crc kubenswrapper[4985]: E0127 09:07:48.618779 4985 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found
Jan 27 09:07:48 crc kubenswrapper[4985]: E0127 09:07:48.618928 4985 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found
Jan 27 09:07:48 crc kubenswrapper[4985]: E0127 09:07:48.619006 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d546a725-d293-47b7-a9c6-92988ba0060d-metrics-certs podName:d546a725-d293-47b7-a9c6-92988ba0060d nodeName:}" failed. No retries permitted until 2026-01-27 09:07:49.11885118 +0000 UTC m=+853.409946031 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d546a725-d293-47b7-a9c6-92988ba0060d-metrics-certs") pod "frr-k8s-qhg7r" (UID: "d546a725-d293-47b7-a9c6-92988ba0060d") : secret "frr-k8s-certs-secret" not found
Jan 27 09:07:48 crc kubenswrapper[4985]: E0127 09:07:48.619024 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f9a30d10-2c17-412f-bb84-dd9bc6bf4487-cert podName:f9a30d10-2c17-412f-bb84-dd9bc6bf4487 nodeName:}" failed. No retries permitted until 2026-01-27 09:07:49.119014994 +0000 UTC m=+853.410109835 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f9a30d10-2c17-412f-bb84-dd9bc6bf4487-cert") pod "frr-k8s-webhook-server-7df86c4f6c-qnpz5" (UID: "f9a30d10-2c17-412f-bb84-dd9bc6bf4487") : secret "frr-k8s-webhook-server-cert" not found
Jan 27 09:07:48 crc kubenswrapper[4985]: I0127 09:07:48.619217 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/d546a725-d293-47b7-a9c6-92988ba0060d-frr-conf\") pod \"frr-k8s-qhg7r\" (UID: \"d546a725-d293-47b7-a9c6-92988ba0060d\") " pod="metallb-system/frr-k8s-qhg7r"
Jan 27 09:07:48 crc kubenswrapper[4985]: I0127 09:07:48.619289 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/d546a725-d293-47b7-a9c6-92988ba0060d-metrics\") pod \"frr-k8s-qhg7r\" (UID: \"d546a725-d293-47b7-a9c6-92988ba0060d\") " pod="metallb-system/frr-k8s-qhg7r"
Jan 27 09:07:48 crc kubenswrapper[4985]: I0127 09:07:48.619300 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/d546a725-d293-47b7-a9c6-92988ba0060d-reloader\") pod \"frr-k8s-qhg7r\" (UID: \"d546a725-d293-47b7-a9c6-92988ba0060d\") " pod="metallb-system/frr-k8s-qhg7r"
Jan 27 09:07:48 crc kubenswrapper[4985]: I0127 09:07:48.619335 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/d546a725-d293-47b7-a9c6-92988ba0060d-frr-sockets\") pod \"frr-k8s-qhg7r\" (UID: \"d546a725-d293-47b7-a9c6-92988ba0060d\") " pod="metallb-system/frr-k8s-qhg7r"
Jan 27 09:07:48 crc kubenswrapper[4985]: I0127 09:07:48.620085 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/d546a725-d293-47b7-a9c6-92988ba0060d-frr-startup\") pod \"frr-k8s-qhg7r\" (UID: \"d546a725-d293-47b7-a9c6-92988ba0060d\") " pod="metallb-system/frr-k8s-qhg7r"
Jan 27 09:07:48 crc kubenswrapper[4985]: I0127 09:07:48.638442 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75wpb\" (UniqueName: \"kubernetes.io/projected/f9a30d10-2c17-412f-bb84-dd9bc6bf4487-kube-api-access-75wpb\") pod \"frr-k8s-webhook-server-7df86c4f6c-qnpz5\" (UID: \"f9a30d10-2c17-412f-bb84-dd9bc6bf4487\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-qnpz5" Jan 27 09:07:48 crc kubenswrapper[4985]: I0127 09:07:48.641637 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvdzx\" (UniqueName: \"kubernetes.io/projected/d546a725-d293-47b7-a9c6-92988ba0060d-kube-api-access-lvdzx\") pod \"frr-k8s-qhg7r\" (UID: \"d546a725-d293-47b7-a9c6-92988ba0060d\") " pod="metallb-system/frr-k8s-qhg7r" Jan 27 09:07:48 crc kubenswrapper[4985]: I0127 09:07:48.720032 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4bc4a510-3768-4111-8646-d3d4a0d6a70e-metrics-certs\") pod \"speaker-nh6c4\" (UID: \"4bc4a510-3768-4111-8646-d3d4a0d6a70e\") " pod="metallb-system/speaker-nh6c4" Jan 27 09:07:48 crc kubenswrapper[4985]: I0127 09:07:48.720108 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42jd7\" (UniqueName: \"kubernetes.io/projected/4bc4a510-3768-4111-8646-d3d4a0d6a70e-kube-api-access-42jd7\") pod \"speaker-nh6c4\" (UID: \"4bc4a510-3768-4111-8646-d3d4a0d6a70e\") " pod="metallb-system/speaker-nh6c4" Jan 27 09:07:48 crc kubenswrapper[4985]: I0127 09:07:48.720143 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4bc4a510-3768-4111-8646-d3d4a0d6a70e-memberlist\") pod \"speaker-nh6c4\" (UID: \"4bc4a510-3768-4111-8646-d3d4a0d6a70e\") " pod="metallb-system/speaker-nh6c4" Jan 27 09:07:48 crc kubenswrapper[4985]: I0127 09:07:48.720167 4985 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/4bc4a510-3768-4111-8646-d3d4a0d6a70e-metallb-excludel2\") pod \"speaker-nh6c4\" (UID: \"4bc4a510-3768-4111-8646-d3d4a0d6a70e\") " pod="metallb-system/speaker-nh6c4" Jan 27 09:07:48 crc kubenswrapper[4985]: I0127 09:07:48.720214 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2cf49ce6-0799-4fb6-bf66-ff00cdf92c44-metrics-certs\") pod \"controller-6968d8fdc4-k8hcc\" (UID: \"2cf49ce6-0799-4fb6-bf66-ff00cdf92c44\") " pod="metallb-system/controller-6968d8fdc4-k8hcc" Jan 27 09:07:48 crc kubenswrapper[4985]: E0127 09:07:48.720977 4985 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 27 09:07:48 crc kubenswrapper[4985]: I0127 09:07:48.721055 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/4bc4a510-3768-4111-8646-d3d4a0d6a70e-metallb-excludel2\") pod \"speaker-nh6c4\" (UID: \"4bc4a510-3768-4111-8646-d3d4a0d6a70e\") " pod="metallb-system/speaker-nh6c4" Jan 27 09:07:48 crc kubenswrapper[4985]: E0127 09:07:48.721067 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4bc4a510-3768-4111-8646-d3d4a0d6a70e-memberlist podName:4bc4a510-3768-4111-8646-d3d4a0d6a70e nodeName:}" failed. No retries permitted until 2026-01-27 09:07:49.22104455 +0000 UTC m=+853.512139391 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/4bc4a510-3768-4111-8646-d3d4a0d6a70e-memberlist") pod "speaker-nh6c4" (UID: "4bc4a510-3768-4111-8646-d3d4a0d6a70e") : secret "metallb-memberlist" not found Jan 27 09:07:48 crc kubenswrapper[4985]: I0127 09:07:48.721148 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lszhh\" (UniqueName: \"kubernetes.io/projected/2cf49ce6-0799-4fb6-bf66-ff00cdf92c44-kube-api-access-lszhh\") pod \"controller-6968d8fdc4-k8hcc\" (UID: \"2cf49ce6-0799-4fb6-bf66-ff00cdf92c44\") " pod="metallb-system/controller-6968d8fdc4-k8hcc" Jan 27 09:07:48 crc kubenswrapper[4985]: I0127 09:07:48.721172 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2cf49ce6-0799-4fb6-bf66-ff00cdf92c44-cert\") pod \"controller-6968d8fdc4-k8hcc\" (UID: \"2cf49ce6-0799-4fb6-bf66-ff00cdf92c44\") " pod="metallb-system/controller-6968d8fdc4-k8hcc" Jan 27 09:07:48 crc kubenswrapper[4985]: I0127 09:07:48.723096 4985 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 27 09:07:48 crc kubenswrapper[4985]: I0127 09:07:48.723638 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4bc4a510-3768-4111-8646-d3d4a0d6a70e-metrics-certs\") pod \"speaker-nh6c4\" (UID: \"4bc4a510-3768-4111-8646-d3d4a0d6a70e\") " pod="metallb-system/speaker-nh6c4" Jan 27 09:07:48 crc kubenswrapper[4985]: I0127 09:07:48.735743 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2cf49ce6-0799-4fb6-bf66-ff00cdf92c44-cert\") pod \"controller-6968d8fdc4-k8hcc\" (UID: \"2cf49ce6-0799-4fb6-bf66-ff00cdf92c44\") " pod="metallb-system/controller-6968d8fdc4-k8hcc" Jan 27 09:07:48 crc kubenswrapper[4985]: I0127 09:07:48.736305 4985 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2cf49ce6-0799-4fb6-bf66-ff00cdf92c44-metrics-certs\") pod \"controller-6968d8fdc4-k8hcc\" (UID: \"2cf49ce6-0799-4fb6-bf66-ff00cdf92c44\") " pod="metallb-system/controller-6968d8fdc4-k8hcc" Jan 27 09:07:48 crc kubenswrapper[4985]: I0127 09:07:48.738879 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42jd7\" (UniqueName: \"kubernetes.io/projected/4bc4a510-3768-4111-8646-d3d4a0d6a70e-kube-api-access-42jd7\") pod \"speaker-nh6c4\" (UID: \"4bc4a510-3768-4111-8646-d3d4a0d6a70e\") " pod="metallb-system/speaker-nh6c4" Jan 27 09:07:48 crc kubenswrapper[4985]: I0127 09:07:48.747566 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lszhh\" (UniqueName: \"kubernetes.io/projected/2cf49ce6-0799-4fb6-bf66-ff00cdf92c44-kube-api-access-lszhh\") pod \"controller-6968d8fdc4-k8hcc\" (UID: \"2cf49ce6-0799-4fb6-bf66-ff00cdf92c44\") " pod="metallb-system/controller-6968d8fdc4-k8hcc" Jan 27 09:07:48 crc kubenswrapper[4985]: I0127 09:07:48.861798 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-k8hcc" Jan 27 09:07:49 crc kubenswrapper[4985]: I0127 09:07:49.126273 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f9a30d10-2c17-412f-bb84-dd9bc6bf4487-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-qnpz5\" (UID: \"f9a30d10-2c17-412f-bb84-dd9bc6bf4487\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-qnpz5" Jan 27 09:07:49 crc kubenswrapper[4985]: I0127 09:07:49.126656 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d546a725-d293-47b7-a9c6-92988ba0060d-metrics-certs\") pod \"frr-k8s-qhg7r\" (UID: \"d546a725-d293-47b7-a9c6-92988ba0060d\") " pod="metallb-system/frr-k8s-qhg7r" Jan 27 09:07:49 crc kubenswrapper[4985]: I0127 09:07:49.130645 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d546a725-d293-47b7-a9c6-92988ba0060d-metrics-certs\") pod \"frr-k8s-qhg7r\" (UID: \"d546a725-d293-47b7-a9c6-92988ba0060d\") " pod="metallb-system/frr-k8s-qhg7r" Jan 27 09:07:49 crc kubenswrapper[4985]: I0127 09:07:49.130691 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f9a30d10-2c17-412f-bb84-dd9bc6bf4487-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-qnpz5\" (UID: \"f9a30d10-2c17-412f-bb84-dd9bc6bf4487\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-qnpz5" Jan 27 09:07:49 crc kubenswrapper[4985]: I0127 09:07:49.227736 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4bc4a510-3768-4111-8646-d3d4a0d6a70e-memberlist\") pod \"speaker-nh6c4\" (UID: \"4bc4a510-3768-4111-8646-d3d4a0d6a70e\") " pod="metallb-system/speaker-nh6c4" Jan 27 09:07:49 crc kubenswrapper[4985]: E0127 09:07:49.227898 4985 secret.go:188] 
Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 27 09:07:49 crc kubenswrapper[4985]: E0127 09:07:49.227962 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4bc4a510-3768-4111-8646-d3d4a0d6a70e-memberlist podName:4bc4a510-3768-4111-8646-d3d4a0d6a70e nodeName:}" failed. No retries permitted until 2026-01-27 09:07:50.227944681 +0000 UTC m=+854.519039522 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/4bc4a510-3768-4111-8646-d3d4a0d6a70e-memberlist") pod "speaker-nh6c4" (UID: "4bc4a510-3768-4111-8646-d3d4a0d6a70e") : secret "metallb-memberlist" not found Jan 27 09:07:49 crc kubenswrapper[4985]: I0127 09:07:49.277946 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-k8hcc"] Jan 27 09:07:49 crc kubenswrapper[4985]: I0127 09:07:49.339707 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-qnpz5" Jan 27 09:07:49 crc kubenswrapper[4985]: I0127 09:07:49.359813 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-qhg7r" Jan 27 09:07:49 crc kubenswrapper[4985]: I0127 09:07:49.578327 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-qnpz5"] Jan 27 09:07:49 crc kubenswrapper[4985]: W0127 09:07:49.581995 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9a30d10_2c17_412f_bb84_dd9bc6bf4487.slice/crio-7b05be3dcc2a3f782ba5f955f9ec5f0b2d0c2c56bc1f85d40ccec461b3fb6f90 WatchSource:0}: Error finding container 7b05be3dcc2a3f782ba5f955f9ec5f0b2d0c2c56bc1f85d40ccec461b3fb6f90: Status 404 returned error can't find the container with id 7b05be3dcc2a3f782ba5f955f9ec5f0b2d0c2c56bc1f85d40ccec461b3fb6f90 Jan 27 09:07:49 crc kubenswrapper[4985]: I0127 09:07:49.736441 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qhg7r" event={"ID":"d546a725-d293-47b7-a9c6-92988ba0060d","Type":"ContainerStarted","Data":"84ac1ee37f8c58e0b33a23f8facd6d1a86a5432dbc87e598f8c7bd1ddcf3a9de"} Jan 27 09:07:49 crc kubenswrapper[4985]: I0127 09:07:49.737836 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-qnpz5" event={"ID":"f9a30d10-2c17-412f-bb84-dd9bc6bf4487","Type":"ContainerStarted","Data":"7b05be3dcc2a3f782ba5f955f9ec5f0b2d0c2c56bc1f85d40ccec461b3fb6f90"} Jan 27 09:07:49 crc kubenswrapper[4985]: I0127 09:07:49.740683 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-k8hcc" event={"ID":"2cf49ce6-0799-4fb6-bf66-ff00cdf92c44","Type":"ContainerStarted","Data":"1e58a0523c3c98571410d94d1dbdd7b1fdeb68b91da2383ad7ba8ba41d4e3f16"} Jan 27 09:07:49 crc kubenswrapper[4985]: I0127 09:07:49.740754 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-k8hcc" 
event={"ID":"2cf49ce6-0799-4fb6-bf66-ff00cdf92c44","Type":"ContainerStarted","Data":"78ff9b789956a89813722c28e5a28a0d83fca5764079b35ce18a16ebd62da9db"} Jan 27 09:07:49 crc kubenswrapper[4985]: I0127 09:07:49.740778 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-k8hcc" event={"ID":"2cf49ce6-0799-4fb6-bf66-ff00cdf92c44","Type":"ContainerStarted","Data":"0136617ce0792ad3d36e215fa39437750c9facdaafeaae664ae1b8eeb9a916d6"} Jan 27 09:07:49 crc kubenswrapper[4985]: I0127 09:07:49.740822 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-k8hcc" Jan 27 09:07:49 crc kubenswrapper[4985]: I0127 09:07:49.763281 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-k8hcc" podStartSLOduration=1.763225453 podStartE2EDuration="1.763225453s" podCreationTimestamp="2026-01-27 09:07:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:07:49.756836908 +0000 UTC m=+854.047931749" watchObservedRunningTime="2026-01-27 09:07:49.763225453 +0000 UTC m=+854.054320294" Jan 27 09:07:50 crc kubenswrapper[4985]: I0127 09:07:50.243091 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4bc4a510-3768-4111-8646-d3d4a0d6a70e-memberlist\") pod \"speaker-nh6c4\" (UID: \"4bc4a510-3768-4111-8646-d3d4a0d6a70e\") " pod="metallb-system/speaker-nh6c4" Jan 27 09:07:50 crc kubenswrapper[4985]: I0127 09:07:50.252708 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4bc4a510-3768-4111-8646-d3d4a0d6a70e-memberlist\") pod \"speaker-nh6c4\" (UID: \"4bc4a510-3768-4111-8646-d3d4a0d6a70e\") " pod="metallb-system/speaker-nh6c4" Jan 27 09:07:50 crc kubenswrapper[4985]: I0127 09:07:50.352637 4985 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-nh6c4" Jan 27 09:07:50 crc kubenswrapper[4985]: W0127 09:07:50.388424 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4bc4a510_3768_4111_8646_d3d4a0d6a70e.slice/crio-c2a9020116979ad9e9589aa4788734fdc12f8fa82105dd093705204fcc4fff94 WatchSource:0}: Error finding container c2a9020116979ad9e9589aa4788734fdc12f8fa82105dd093705204fcc4fff94: Status 404 returned error can't find the container with id c2a9020116979ad9e9589aa4788734fdc12f8fa82105dd093705204fcc4fff94 Jan 27 09:07:50 crc kubenswrapper[4985]: I0127 09:07:50.756582 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-nh6c4" event={"ID":"4bc4a510-3768-4111-8646-d3d4a0d6a70e","Type":"ContainerStarted","Data":"cdaaa3a5285b39cceee2e73ca6494d96df37cd97157aa7bb67d9b90bc4bbd44e"} Jan 27 09:07:50 crc kubenswrapper[4985]: I0127 09:07:50.756633 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-nh6c4" event={"ID":"4bc4a510-3768-4111-8646-d3d4a0d6a70e","Type":"ContainerStarted","Data":"c2a9020116979ad9e9589aa4788734fdc12f8fa82105dd093705204fcc4fff94"} Jan 27 09:07:51 crc kubenswrapper[4985]: I0127 09:07:51.766993 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-nh6c4" event={"ID":"4bc4a510-3768-4111-8646-d3d4a0d6a70e","Type":"ContainerStarted","Data":"51ed7eb57e01b1727f923555a3d7c6447871fe44baa6a22024fac6d9cf7907b2"} Jan 27 09:07:51 crc kubenswrapper[4985]: I0127 09:07:51.767768 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-nh6c4" Jan 27 09:07:51 crc kubenswrapper[4985]: I0127 09:07:51.791894 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-nh6c4" podStartSLOduration=3.791864967 podStartE2EDuration="3.791864967s" podCreationTimestamp="2026-01-27 
09:07:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:07:51.790918412 +0000 UTC m=+856.082013273" watchObservedRunningTime="2026-01-27 09:07:51.791864967 +0000 UTC m=+856.082959808" Jan 27 09:07:57 crc kubenswrapper[4985]: I0127 09:07:57.804684 4985 generic.go:334] "Generic (PLEG): container finished" podID="d546a725-d293-47b7-a9c6-92988ba0060d" containerID="91269b1381b9a9c831c360b67bd27a64e7b5dd990427f97ced16aa447398b165" exitCode=0 Jan 27 09:07:57 crc kubenswrapper[4985]: I0127 09:07:57.804806 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qhg7r" event={"ID":"d546a725-d293-47b7-a9c6-92988ba0060d","Type":"ContainerDied","Data":"91269b1381b9a9c831c360b67bd27a64e7b5dd990427f97ced16aa447398b165"} Jan 27 09:07:57 crc kubenswrapper[4985]: I0127 09:07:57.807880 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-qnpz5" event={"ID":"f9a30d10-2c17-412f-bb84-dd9bc6bf4487","Type":"ContainerStarted","Data":"6697f9e22febf81915cb9dbf2efc67831c892c4d2500a5514402332dab82372e"} Jan 27 09:07:57 crc kubenswrapper[4985]: I0127 09:07:57.808408 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-qnpz5" Jan 27 09:07:57 crc kubenswrapper[4985]: I0127 09:07:57.844817 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-qnpz5" podStartSLOduration=2.274989248 podStartE2EDuration="9.844798943s" podCreationTimestamp="2026-01-27 09:07:48 +0000 UTC" firstStartedPulling="2026-01-27 09:07:49.58711315 +0000 UTC m=+853.878207991" lastFinishedPulling="2026-01-27 09:07:57.156922835 +0000 UTC m=+861.448017686" observedRunningTime="2026-01-27 09:07:57.844547256 +0000 UTC m=+862.135642107" watchObservedRunningTime="2026-01-27 09:07:57.844798943 +0000 UTC 
m=+862.135893784" Jan 27 09:07:58 crc kubenswrapper[4985]: I0127 09:07:58.815070 4985 generic.go:334] "Generic (PLEG): container finished" podID="d546a725-d293-47b7-a9c6-92988ba0060d" containerID="32c927c9160c5b72c682cd9302d817584319d868eb26a66d2fb8c9094f85f524" exitCode=0 Jan 27 09:07:58 crc kubenswrapper[4985]: I0127 09:07:58.815111 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qhg7r" event={"ID":"d546a725-d293-47b7-a9c6-92988ba0060d","Type":"ContainerDied","Data":"32c927c9160c5b72c682cd9302d817584319d868eb26a66d2fb8c9094f85f524"} Jan 27 09:07:59 crc kubenswrapper[4985]: I0127 09:07:59.823774 4985 generic.go:334] "Generic (PLEG): container finished" podID="d546a725-d293-47b7-a9c6-92988ba0060d" containerID="6d30a3ceec692e571dd41eafbe4ac82534afa4977a655683456d5109928041ed" exitCode=0 Jan 27 09:07:59 crc kubenswrapper[4985]: I0127 09:07:59.823878 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qhg7r" event={"ID":"d546a725-d293-47b7-a9c6-92988ba0060d","Type":"ContainerDied","Data":"6d30a3ceec692e571dd41eafbe4ac82534afa4977a655683456d5109928041ed"} Jan 27 09:08:00 crc kubenswrapper[4985]: I0127 09:08:00.358003 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-nh6c4" Jan 27 09:08:00 crc kubenswrapper[4985]: I0127 09:08:00.834644 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qhg7r" event={"ID":"d546a725-d293-47b7-a9c6-92988ba0060d","Type":"ContainerStarted","Data":"98ac184c13f9d97f66f2a00c7c9ed516155fb5659d33b386109975060c39fa7b"} Jan 27 09:08:00 crc kubenswrapper[4985]: I0127 09:08:00.834704 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qhg7r" event={"ID":"d546a725-d293-47b7-a9c6-92988ba0060d","Type":"ContainerStarted","Data":"8885ba31c653ff960f521fa639f05848fbf09dcf2f1928ebf5951cef424a3b64"} Jan 27 09:08:00 crc kubenswrapper[4985]: I0127 09:08:00.834720 4985 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qhg7r" event={"ID":"d546a725-d293-47b7-a9c6-92988ba0060d","Type":"ContainerStarted","Data":"f9bffd23a50a175d6efbccdf313d88a0b9e3737a329c6c1b26e4670f8ce11b97"} Jan 27 09:08:00 crc kubenswrapper[4985]: I0127 09:08:00.834731 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qhg7r" event={"ID":"d546a725-d293-47b7-a9c6-92988ba0060d","Type":"ContainerStarted","Data":"55e3643d93234f53d05d3cde863af049f1a1a53947e4b9eec654f23903b39119"} Jan 27 09:08:00 crc kubenswrapper[4985]: I0127 09:08:00.834741 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qhg7r" event={"ID":"d546a725-d293-47b7-a9c6-92988ba0060d","Type":"ContainerStarted","Data":"18a750ca048b43312c4e5ce765131c413fb54f46ebf1c521aa9e282a130edf7d"} Jan 27 09:08:01 crc kubenswrapper[4985]: I0127 09:08:01.844559 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qhg7r" event={"ID":"d546a725-d293-47b7-a9c6-92988ba0060d","Type":"ContainerStarted","Data":"daf35630aa50f148f9311113998d92538757c788715397dc572212e9f86647a5"} Jan 27 09:08:01 crc kubenswrapper[4985]: I0127 09:08:01.844768 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-qhg7r" Jan 27 09:08:01 crc kubenswrapper[4985]: I0127 09:08:01.871693 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-qhg7r" podStartSLOduration=6.223861577 podStartE2EDuration="13.871674496s" podCreationTimestamp="2026-01-27 09:07:48 +0000 UTC" firstStartedPulling="2026-01-27 09:07:49.512821857 +0000 UTC m=+853.803916698" lastFinishedPulling="2026-01-27 09:07:57.160634766 +0000 UTC m=+861.451729617" observedRunningTime="2026-01-27 09:08:01.871530752 +0000 UTC m=+866.162625613" watchObservedRunningTime="2026-01-27 09:08:01.871674496 +0000 UTC m=+866.162769337" Jan 27 09:08:03 crc kubenswrapper[4985]: I0127 09:08:03.367288 4985 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-nrcbw"] Jan 27 09:08:03 crc kubenswrapper[4985]: I0127 09:08:03.368552 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-nrcbw" Jan 27 09:08:03 crc kubenswrapper[4985]: I0127 09:08:03.385088 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-nrcbw"] Jan 27 09:08:03 crc kubenswrapper[4985]: I0127 09:08:03.386328 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-bc9lb" Jan 27 09:08:03 crc kubenswrapper[4985]: I0127 09:08:03.389196 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 27 09:08:03 crc kubenswrapper[4985]: I0127 09:08:03.389452 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 27 09:08:03 crc kubenswrapper[4985]: I0127 09:08:03.443074 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdxvp\" (UniqueName: \"kubernetes.io/projected/97431985-22f9-41bf-b1e9-c4b5b6facd71-kube-api-access-bdxvp\") pod \"openstack-operator-index-nrcbw\" (UID: \"97431985-22f9-41bf-b1e9-c4b5b6facd71\") " pod="openstack-operators/openstack-operator-index-nrcbw" Jan 27 09:08:03 crc kubenswrapper[4985]: I0127 09:08:03.544267 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdxvp\" (UniqueName: \"kubernetes.io/projected/97431985-22f9-41bf-b1e9-c4b5b6facd71-kube-api-access-bdxvp\") pod \"openstack-operator-index-nrcbw\" (UID: \"97431985-22f9-41bf-b1e9-c4b5b6facd71\") " pod="openstack-operators/openstack-operator-index-nrcbw" Jan 27 09:08:03 crc kubenswrapper[4985]: I0127 09:08:03.590281 4985 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-bdxvp\" (UniqueName: \"kubernetes.io/projected/97431985-22f9-41bf-b1e9-c4b5b6facd71-kube-api-access-bdxvp\") pod \"openstack-operator-index-nrcbw\" (UID: \"97431985-22f9-41bf-b1e9-c4b5b6facd71\") " pod="openstack-operators/openstack-operator-index-nrcbw" Jan 27 09:08:03 crc kubenswrapper[4985]: I0127 09:08:03.688632 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-nrcbw" Jan 27 09:08:04 crc kubenswrapper[4985]: I0127 09:08:04.153916 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-nrcbw"] Jan 27 09:08:04 crc kubenswrapper[4985]: W0127 09:08:04.159481 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod97431985_22f9_41bf_b1e9_c4b5b6facd71.slice/crio-95f87c88ca909683b6f969fad00d9b56a703d18f6aa44599b3cc7a2b806493a2 WatchSource:0}: Error finding container 95f87c88ca909683b6f969fad00d9b56a703d18f6aa44599b3cc7a2b806493a2: Status 404 returned error can't find the container with id 95f87c88ca909683b6f969fad00d9b56a703d18f6aa44599b3cc7a2b806493a2 Jan 27 09:08:04 crc kubenswrapper[4985]: I0127 09:08:04.360488 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-qhg7r" Jan 27 09:08:04 crc kubenswrapper[4985]: I0127 09:08:04.396065 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-qhg7r" Jan 27 09:08:04 crc kubenswrapper[4985]: I0127 09:08:04.864277 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-nrcbw" event={"ID":"97431985-22f9-41bf-b1e9-c4b5b6facd71","Type":"ContainerStarted","Data":"95f87c88ca909683b6f969fad00d9b56a703d18f6aa44599b3cc7a2b806493a2"} Jan 27 09:08:05 crc kubenswrapper[4985]: I0127 09:08:05.871601 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/openstack-operator-index-nrcbw" event={"ID":"97431985-22f9-41bf-b1e9-c4b5b6facd71","Type":"ContainerStarted","Data":"252078825f33292f580e9d41784f2298049e342ba56d2ed1cd3307fda1dea4e8"} Jan 27 09:08:05 crc kubenswrapper[4985]: I0127 09:08:05.892384 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-nrcbw" podStartSLOduration=2.062258127 podStartE2EDuration="2.892357817s" podCreationTimestamp="2026-01-27 09:08:03 +0000 UTC" firstStartedPulling="2026-01-27 09:08:04.161499003 +0000 UTC m=+868.452593844" lastFinishedPulling="2026-01-27 09:08:04.991598693 +0000 UTC m=+869.282693534" observedRunningTime="2026-01-27 09:08:05.889732235 +0000 UTC m=+870.180827076" watchObservedRunningTime="2026-01-27 09:08:05.892357817 +0000 UTC m=+870.183452668" Jan 27 09:08:07 crc kubenswrapper[4985]: I0127 09:08:07.325368 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-nrcbw"] Jan 27 09:08:07 crc kubenswrapper[4985]: I0127 09:08:07.887702 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-nrcbw" podUID="97431985-22f9-41bf-b1e9-c4b5b6facd71" containerName="registry-server" containerID="cri-o://252078825f33292f580e9d41784f2298049e342ba56d2ed1cd3307fda1dea4e8" gracePeriod=2 Jan 27 09:08:07 crc kubenswrapper[4985]: I0127 09:08:07.939856 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-6jwj6"] Jan 27 09:08:07 crc kubenswrapper[4985]: I0127 09:08:07.943056 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-6jwj6" Jan 27 09:08:07 crc kubenswrapper[4985]: I0127 09:08:07.952288 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-6jwj6"] Jan 27 09:08:08 crc kubenswrapper[4985]: I0127 09:08:08.018307 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5lhc\" (UniqueName: \"kubernetes.io/projected/d457eb3e-f84d-4308-b57f-82ac43a05335-kube-api-access-b5lhc\") pod \"openstack-operator-index-6jwj6\" (UID: \"d457eb3e-f84d-4308-b57f-82ac43a05335\") " pod="openstack-operators/openstack-operator-index-6jwj6" Jan 27 09:08:08 crc kubenswrapper[4985]: I0127 09:08:08.120438 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5lhc\" (UniqueName: \"kubernetes.io/projected/d457eb3e-f84d-4308-b57f-82ac43a05335-kube-api-access-b5lhc\") pod \"openstack-operator-index-6jwj6\" (UID: \"d457eb3e-f84d-4308-b57f-82ac43a05335\") " pod="openstack-operators/openstack-operator-index-6jwj6" Jan 27 09:08:08 crc kubenswrapper[4985]: I0127 09:08:08.141877 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5lhc\" (UniqueName: \"kubernetes.io/projected/d457eb3e-f84d-4308-b57f-82ac43a05335-kube-api-access-b5lhc\") pod \"openstack-operator-index-6jwj6\" (UID: \"d457eb3e-f84d-4308-b57f-82ac43a05335\") " pod="openstack-operators/openstack-operator-index-6jwj6" Jan 27 09:08:08 crc kubenswrapper[4985]: I0127 09:08:08.287972 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-6jwj6" Jan 27 09:08:08 crc kubenswrapper[4985]: I0127 09:08:08.294425 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-nrcbw" Jan 27 09:08:08 crc kubenswrapper[4985]: I0127 09:08:08.324085 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bdxvp\" (UniqueName: \"kubernetes.io/projected/97431985-22f9-41bf-b1e9-c4b5b6facd71-kube-api-access-bdxvp\") pod \"97431985-22f9-41bf-b1e9-c4b5b6facd71\" (UID: \"97431985-22f9-41bf-b1e9-c4b5b6facd71\") " Jan 27 09:08:08 crc kubenswrapper[4985]: I0127 09:08:08.334970 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97431985-22f9-41bf-b1e9-c4b5b6facd71-kube-api-access-bdxvp" (OuterVolumeSpecName: "kube-api-access-bdxvp") pod "97431985-22f9-41bf-b1e9-c4b5b6facd71" (UID: "97431985-22f9-41bf-b1e9-c4b5b6facd71"). InnerVolumeSpecName "kube-api-access-bdxvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:08:08 crc kubenswrapper[4985]: I0127 09:08:08.426034 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bdxvp\" (UniqueName: \"kubernetes.io/projected/97431985-22f9-41bf-b1e9-c4b5b6facd71-kube-api-access-bdxvp\") on node \"crc\" DevicePath \"\"" Jan 27 09:08:08 crc kubenswrapper[4985]: I0127 09:08:08.743219 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-6jwj6"] Jan 27 09:08:08 crc kubenswrapper[4985]: W0127 09:08:08.751368 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd457eb3e_f84d_4308_b57f_82ac43a05335.slice/crio-52c229f173919cb1fe812e3a1e3159ad108081ca0e9352a17e1748d6d92cf0fe WatchSource:0}: Error finding container 52c229f173919cb1fe812e3a1e3159ad108081ca0e9352a17e1748d6d92cf0fe: Status 404 returned error can't find the container with id 52c229f173919cb1fe812e3a1e3159ad108081ca0e9352a17e1748d6d92cf0fe Jan 27 09:08:08 crc kubenswrapper[4985]: I0127 09:08:08.867189 4985 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-k8hcc" Jan 27 09:08:08 crc kubenswrapper[4985]: I0127 09:08:08.904208 4985 generic.go:334] "Generic (PLEG): container finished" podID="97431985-22f9-41bf-b1e9-c4b5b6facd71" containerID="252078825f33292f580e9d41784f2298049e342ba56d2ed1cd3307fda1dea4e8" exitCode=0 Jan 27 09:08:08 crc kubenswrapper[4985]: I0127 09:08:08.904353 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-nrcbw" Jan 27 09:08:08 crc kubenswrapper[4985]: I0127 09:08:08.904416 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-nrcbw" event={"ID":"97431985-22f9-41bf-b1e9-c4b5b6facd71","Type":"ContainerDied","Data":"252078825f33292f580e9d41784f2298049e342ba56d2ed1cd3307fda1dea4e8"} Jan 27 09:08:08 crc kubenswrapper[4985]: I0127 09:08:08.904491 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-nrcbw" event={"ID":"97431985-22f9-41bf-b1e9-c4b5b6facd71","Type":"ContainerDied","Data":"95f87c88ca909683b6f969fad00d9b56a703d18f6aa44599b3cc7a2b806493a2"} Jan 27 09:08:08 crc kubenswrapper[4985]: I0127 09:08:08.904550 4985 scope.go:117] "RemoveContainer" containerID="252078825f33292f580e9d41784f2298049e342ba56d2ed1cd3307fda1dea4e8" Jan 27 09:08:08 crc kubenswrapper[4985]: I0127 09:08:08.908896 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-6jwj6" event={"ID":"d457eb3e-f84d-4308-b57f-82ac43a05335","Type":"ContainerStarted","Data":"52c229f173919cb1fe812e3a1e3159ad108081ca0e9352a17e1748d6d92cf0fe"} Jan 27 09:08:08 crc kubenswrapper[4985]: I0127 09:08:08.931218 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-nrcbw"] Jan 27 09:08:08 crc kubenswrapper[4985]: I0127 09:08:08.936064 4985 scope.go:117] "RemoveContainer" 
containerID="252078825f33292f580e9d41784f2298049e342ba56d2ed1cd3307fda1dea4e8" Jan 27 09:08:08 crc kubenswrapper[4985]: E0127 09:08:08.936682 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"252078825f33292f580e9d41784f2298049e342ba56d2ed1cd3307fda1dea4e8\": container with ID starting with 252078825f33292f580e9d41784f2298049e342ba56d2ed1cd3307fda1dea4e8 not found: ID does not exist" containerID="252078825f33292f580e9d41784f2298049e342ba56d2ed1cd3307fda1dea4e8" Jan 27 09:08:08 crc kubenswrapper[4985]: I0127 09:08:08.936689 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-nrcbw"] Jan 27 09:08:08 crc kubenswrapper[4985]: I0127 09:08:08.936735 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"252078825f33292f580e9d41784f2298049e342ba56d2ed1cd3307fda1dea4e8"} err="failed to get container status \"252078825f33292f580e9d41784f2298049e342ba56d2ed1cd3307fda1dea4e8\": rpc error: code = NotFound desc = could not find container \"252078825f33292f580e9d41784f2298049e342ba56d2ed1cd3307fda1dea4e8\": container with ID starting with 252078825f33292f580e9d41784f2298049e342ba56d2ed1cd3307fda1dea4e8 not found: ID does not exist" Jan 27 09:08:09 crc kubenswrapper[4985]: I0127 09:08:09.346006 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-qnpz5" Jan 27 09:08:09 crc kubenswrapper[4985]: I0127 09:08:09.364239 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-qhg7r" Jan 27 09:08:09 crc kubenswrapper[4985]: I0127 09:08:09.921935 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-6jwj6" 
event={"ID":"d457eb3e-f84d-4308-b57f-82ac43a05335","Type":"ContainerStarted","Data":"ba3dbe719503d9f403cfa5f149f39c1539674e530fa2703196830ce262a59257"} Jan 27 09:08:09 crc kubenswrapper[4985]: I0127 09:08:09.946942 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-6jwj6" podStartSLOduration=2.5643546390000003 podStartE2EDuration="2.94691384s" podCreationTimestamp="2026-01-27 09:08:07 +0000 UTC" firstStartedPulling="2026-01-27 09:08:08.757542019 +0000 UTC m=+873.048636860" lastFinishedPulling="2026-01-27 09:08:09.14010121 +0000 UTC m=+873.431196061" observedRunningTime="2026-01-27 09:08:09.944127943 +0000 UTC m=+874.235222824" watchObservedRunningTime="2026-01-27 09:08:09.94691384 +0000 UTC m=+874.238008691" Jan 27 09:08:10 crc kubenswrapper[4985]: I0127 09:08:10.481277 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97431985-22f9-41bf-b1e9-c4b5b6facd71" path="/var/lib/kubelet/pods/97431985-22f9-41bf-b1e9-c4b5b6facd71/volumes" Jan 27 09:08:13 crc kubenswrapper[4985]: I0127 09:08:13.934696 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hvstq"] Jan 27 09:08:13 crc kubenswrapper[4985]: E0127 09:08:13.935012 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97431985-22f9-41bf-b1e9-c4b5b6facd71" containerName="registry-server" Jan 27 09:08:13 crc kubenswrapper[4985]: I0127 09:08:13.935027 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="97431985-22f9-41bf-b1e9-c4b5b6facd71" containerName="registry-server" Jan 27 09:08:13 crc kubenswrapper[4985]: I0127 09:08:13.935181 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="97431985-22f9-41bf-b1e9-c4b5b6facd71" containerName="registry-server" Jan 27 09:08:13 crc kubenswrapper[4985]: I0127 09:08:13.936352 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hvstq" Jan 27 09:08:14 crc kubenswrapper[4985]: I0127 09:08:14.002562 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hvstq"] Jan 27 09:08:14 crc kubenswrapper[4985]: I0127 09:08:14.028457 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1e1eaca-d76b-492d-9385-845bb33d54a4-catalog-content\") pod \"certified-operators-hvstq\" (UID: \"c1e1eaca-d76b-492d-9385-845bb33d54a4\") " pod="openshift-marketplace/certified-operators-hvstq" Jan 27 09:08:14 crc kubenswrapper[4985]: I0127 09:08:14.028556 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5j5rx\" (UniqueName: \"kubernetes.io/projected/c1e1eaca-d76b-492d-9385-845bb33d54a4-kube-api-access-5j5rx\") pod \"certified-operators-hvstq\" (UID: \"c1e1eaca-d76b-492d-9385-845bb33d54a4\") " pod="openshift-marketplace/certified-operators-hvstq" Jan 27 09:08:14 crc kubenswrapper[4985]: I0127 09:08:14.028597 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1e1eaca-d76b-492d-9385-845bb33d54a4-utilities\") pod \"certified-operators-hvstq\" (UID: \"c1e1eaca-d76b-492d-9385-845bb33d54a4\") " pod="openshift-marketplace/certified-operators-hvstq" Jan 27 09:08:14 crc kubenswrapper[4985]: I0127 09:08:14.130178 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5j5rx\" (UniqueName: \"kubernetes.io/projected/c1e1eaca-d76b-492d-9385-845bb33d54a4-kube-api-access-5j5rx\") pod \"certified-operators-hvstq\" (UID: \"c1e1eaca-d76b-492d-9385-845bb33d54a4\") " pod="openshift-marketplace/certified-operators-hvstq" Jan 27 09:08:14 crc kubenswrapper[4985]: I0127 09:08:14.130269 4985 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1e1eaca-d76b-492d-9385-845bb33d54a4-utilities\") pod \"certified-operators-hvstq\" (UID: \"c1e1eaca-d76b-492d-9385-845bb33d54a4\") " pod="openshift-marketplace/certified-operators-hvstq" Jan 27 09:08:14 crc kubenswrapper[4985]: I0127 09:08:14.130346 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1e1eaca-d76b-492d-9385-845bb33d54a4-catalog-content\") pod \"certified-operators-hvstq\" (UID: \"c1e1eaca-d76b-492d-9385-845bb33d54a4\") " pod="openshift-marketplace/certified-operators-hvstq" Jan 27 09:08:14 crc kubenswrapper[4985]: I0127 09:08:14.131848 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1e1eaca-d76b-492d-9385-845bb33d54a4-utilities\") pod \"certified-operators-hvstq\" (UID: \"c1e1eaca-d76b-492d-9385-845bb33d54a4\") " pod="openshift-marketplace/certified-operators-hvstq" Jan 27 09:08:14 crc kubenswrapper[4985]: I0127 09:08:14.131881 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1e1eaca-d76b-492d-9385-845bb33d54a4-catalog-content\") pod \"certified-operators-hvstq\" (UID: \"c1e1eaca-d76b-492d-9385-845bb33d54a4\") " pod="openshift-marketplace/certified-operators-hvstq" Jan 27 09:08:14 crc kubenswrapper[4985]: I0127 09:08:14.150634 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5j5rx\" (UniqueName: \"kubernetes.io/projected/c1e1eaca-d76b-492d-9385-845bb33d54a4-kube-api-access-5j5rx\") pod \"certified-operators-hvstq\" (UID: \"c1e1eaca-d76b-492d-9385-845bb33d54a4\") " pod="openshift-marketplace/certified-operators-hvstq" Jan 27 09:08:14 crc kubenswrapper[4985]: I0127 09:08:14.265079 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hvstq" Jan 27 09:08:14 crc kubenswrapper[4985]: I0127 09:08:14.719241 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hvstq"] Jan 27 09:08:14 crc kubenswrapper[4985]: W0127 09:08:14.722710 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc1e1eaca_d76b_492d_9385_845bb33d54a4.slice/crio-46e78f7495275f07311d5a1a54ef67a711781da34a208628f6d7ed83534d1ab6 WatchSource:0}: Error finding container 46e78f7495275f07311d5a1a54ef67a711781da34a208628f6d7ed83534d1ab6: Status 404 returned error can't find the container with id 46e78f7495275f07311d5a1a54ef67a711781da34a208628f6d7ed83534d1ab6 Jan 27 09:08:14 crc kubenswrapper[4985]: I0127 09:08:14.965779 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hvstq" event={"ID":"c1e1eaca-d76b-492d-9385-845bb33d54a4","Type":"ContainerStarted","Data":"46e78f7495275f07311d5a1a54ef67a711781da34a208628f6d7ed83534d1ab6"} Jan 27 09:08:15 crc kubenswrapper[4985]: I0127 09:08:15.976996 4985 generic.go:334] "Generic (PLEG): container finished" podID="c1e1eaca-d76b-492d-9385-845bb33d54a4" containerID="be508436e184d9baf782f720501347238c2ab01ae30caac91e3f73a4c2f82132" exitCode=0 Jan 27 09:08:15 crc kubenswrapper[4985]: I0127 09:08:15.977119 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hvstq" event={"ID":"c1e1eaca-d76b-492d-9385-845bb33d54a4","Type":"ContainerDied","Data":"be508436e184d9baf782f720501347238c2ab01ae30caac91e3f73a4c2f82132"} Jan 27 09:08:16 crc kubenswrapper[4985]: I0127 09:08:16.989604 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hvstq" 
event={"ID":"c1e1eaca-d76b-492d-9385-845bb33d54a4","Type":"ContainerStarted","Data":"cdcd433a7f0c828bd096eddd7021984cf6f6b383c1374d46fcd3de7d449b5160"} Jan 27 09:08:18 crc kubenswrapper[4985]: I0127 09:08:18.004174 4985 generic.go:334] "Generic (PLEG): container finished" podID="c1e1eaca-d76b-492d-9385-845bb33d54a4" containerID="cdcd433a7f0c828bd096eddd7021984cf6f6b383c1374d46fcd3de7d449b5160" exitCode=0 Jan 27 09:08:18 crc kubenswrapper[4985]: I0127 09:08:18.004753 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hvstq" event={"ID":"c1e1eaca-d76b-492d-9385-845bb33d54a4","Type":"ContainerDied","Data":"cdcd433a7f0c828bd096eddd7021984cf6f6b383c1374d46fcd3de7d449b5160"} Jan 27 09:08:18 crc kubenswrapper[4985]: I0127 09:08:18.289135 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-6jwj6" Jan 27 09:08:18 crc kubenswrapper[4985]: I0127 09:08:18.289308 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-6jwj6" Jan 27 09:08:18 crc kubenswrapper[4985]: I0127 09:08:18.324921 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-6jwj6" Jan 27 09:08:19 crc kubenswrapper[4985]: I0127 09:08:19.014573 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hvstq" event={"ID":"c1e1eaca-d76b-492d-9385-845bb33d54a4","Type":"ContainerStarted","Data":"32281a22e5a43920d2316c9f8fb0f782c06028979ee115e963ef3904be6e86ae"} Jan 27 09:08:19 crc kubenswrapper[4985]: I0127 09:08:19.040044 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hvstq" podStartSLOduration=3.414096866 podStartE2EDuration="6.040012658s" podCreationTimestamp="2026-01-27 09:08:13 +0000 UTC" firstStartedPulling="2026-01-27 09:08:15.981698954 +0000 
UTC m=+880.272793835" lastFinishedPulling="2026-01-27 09:08:18.607614786 +0000 UTC m=+882.898709627" observedRunningTime="2026-01-27 09:08:19.033744966 +0000 UTC m=+883.324839817" watchObservedRunningTime="2026-01-27 09:08:19.040012658 +0000 UTC m=+883.331107509" Jan 27 09:08:19 crc kubenswrapper[4985]: I0127 09:08:19.051809 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-6jwj6" Jan 27 09:08:24 crc kubenswrapper[4985]: I0127 09:08:24.266460 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hvstq" Jan 27 09:08:24 crc kubenswrapper[4985]: I0127 09:08:24.267049 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hvstq" Jan 27 09:08:24 crc kubenswrapper[4985]: I0127 09:08:24.314374 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hvstq" Jan 27 09:08:25 crc kubenswrapper[4985]: I0127 09:08:25.101398 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hvstq" Jan 27 09:08:25 crc kubenswrapper[4985]: I0127 09:08:25.525446 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hvstq"] Jan 27 09:08:26 crc kubenswrapper[4985]: I0127 09:08:26.583888 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cfg66r6"] Jan 27 09:08:26 crc kubenswrapper[4985]: I0127 09:08:26.586331 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cfg66r6" Jan 27 09:08:26 crc kubenswrapper[4985]: I0127 09:08:26.595161 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cfg66r6"] Jan 27 09:08:26 crc kubenswrapper[4985]: I0127 09:08:26.631420 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-m84bq" Jan 27 09:08:26 crc kubenswrapper[4985]: I0127 09:08:26.645969 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/125d6a2f-ae94-4748-8c5d-b3788983b9c7-bundle\") pod \"b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cfg66r6\" (UID: \"125d6a2f-ae94-4748-8c5d-b3788983b9c7\") " pod="openstack-operators/b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cfg66r6" Jan 27 09:08:26 crc kubenswrapper[4985]: I0127 09:08:26.646215 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/125d6a2f-ae94-4748-8c5d-b3788983b9c7-util\") pod \"b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cfg66r6\" (UID: \"125d6a2f-ae94-4748-8c5d-b3788983b9c7\") " pod="openstack-operators/b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cfg66r6" Jan 27 09:08:26 crc kubenswrapper[4985]: I0127 09:08:26.646317 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rmhh\" (UniqueName: \"kubernetes.io/projected/125d6a2f-ae94-4748-8c5d-b3788983b9c7-kube-api-access-8rmhh\") pod \"b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cfg66r6\" (UID: \"125d6a2f-ae94-4748-8c5d-b3788983b9c7\") " pod="openstack-operators/b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cfg66r6" Jan 27 09:08:26 crc kubenswrapper[4985]: I0127 
09:08:26.747146 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/125d6a2f-ae94-4748-8c5d-b3788983b9c7-util\") pod \"b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cfg66r6\" (UID: \"125d6a2f-ae94-4748-8c5d-b3788983b9c7\") " pod="openstack-operators/b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cfg66r6" Jan 27 09:08:26 crc kubenswrapper[4985]: I0127 09:08:26.747209 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rmhh\" (UniqueName: \"kubernetes.io/projected/125d6a2f-ae94-4748-8c5d-b3788983b9c7-kube-api-access-8rmhh\") pod \"b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cfg66r6\" (UID: \"125d6a2f-ae94-4748-8c5d-b3788983b9c7\") " pod="openstack-operators/b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cfg66r6" Jan 27 09:08:26 crc kubenswrapper[4985]: I0127 09:08:26.747253 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/125d6a2f-ae94-4748-8c5d-b3788983b9c7-bundle\") pod \"b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cfg66r6\" (UID: \"125d6a2f-ae94-4748-8c5d-b3788983b9c7\") " pod="openstack-operators/b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cfg66r6" Jan 27 09:08:26 crc kubenswrapper[4985]: I0127 09:08:26.747789 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/125d6a2f-ae94-4748-8c5d-b3788983b9c7-bundle\") pod \"b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cfg66r6\" (UID: \"125d6a2f-ae94-4748-8c5d-b3788983b9c7\") " pod="openstack-operators/b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cfg66r6" Jan 27 09:08:26 crc kubenswrapper[4985]: I0127 09:08:26.748148 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/125d6a2f-ae94-4748-8c5d-b3788983b9c7-util\") pod \"b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cfg66r6\" (UID: \"125d6a2f-ae94-4748-8c5d-b3788983b9c7\") " pod="openstack-operators/b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cfg66r6" Jan 27 09:08:26 crc kubenswrapper[4985]: I0127 09:08:26.767351 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rmhh\" (UniqueName: \"kubernetes.io/projected/125d6a2f-ae94-4748-8c5d-b3788983b9c7-kube-api-access-8rmhh\") pod \"b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cfg66r6\" (UID: \"125d6a2f-ae94-4748-8c5d-b3788983b9c7\") " pod="openstack-operators/b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cfg66r6" Jan 27 09:08:26 crc kubenswrapper[4985]: I0127 09:08:26.943850 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cfg66r6" Jan 27 09:08:27 crc kubenswrapper[4985]: I0127 09:08:27.074085 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hvstq" podUID="c1e1eaca-d76b-492d-9385-845bb33d54a4" containerName="registry-server" containerID="cri-o://32281a22e5a43920d2316c9f8fb0f782c06028979ee115e963ef3904be6e86ae" gracePeriod=2 Jan 27 09:08:27 crc kubenswrapper[4985]: I0127 09:08:27.370350 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cfg66r6"] Jan 27 09:08:27 crc kubenswrapper[4985]: I0127 09:08:27.424097 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hvstq" Jan 27 09:08:27 crc kubenswrapper[4985]: I0127 09:08:27.463318 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1e1eaca-d76b-492d-9385-845bb33d54a4-catalog-content\") pod \"c1e1eaca-d76b-492d-9385-845bb33d54a4\" (UID: \"c1e1eaca-d76b-492d-9385-845bb33d54a4\") " Jan 27 09:08:27 crc kubenswrapper[4985]: I0127 09:08:27.463413 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1e1eaca-d76b-492d-9385-845bb33d54a4-utilities\") pod \"c1e1eaca-d76b-492d-9385-845bb33d54a4\" (UID: \"c1e1eaca-d76b-492d-9385-845bb33d54a4\") " Jan 27 09:08:27 crc kubenswrapper[4985]: I0127 09:08:27.463552 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5j5rx\" (UniqueName: \"kubernetes.io/projected/c1e1eaca-d76b-492d-9385-845bb33d54a4-kube-api-access-5j5rx\") pod \"c1e1eaca-d76b-492d-9385-845bb33d54a4\" (UID: \"c1e1eaca-d76b-492d-9385-845bb33d54a4\") " Jan 27 09:08:27 crc kubenswrapper[4985]: I0127 09:08:27.464860 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1e1eaca-d76b-492d-9385-845bb33d54a4-utilities" (OuterVolumeSpecName: "utilities") pod "c1e1eaca-d76b-492d-9385-845bb33d54a4" (UID: "c1e1eaca-d76b-492d-9385-845bb33d54a4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:08:27 crc kubenswrapper[4985]: I0127 09:08:27.471771 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1e1eaca-d76b-492d-9385-845bb33d54a4-kube-api-access-5j5rx" (OuterVolumeSpecName: "kube-api-access-5j5rx") pod "c1e1eaca-d76b-492d-9385-845bb33d54a4" (UID: "c1e1eaca-d76b-492d-9385-845bb33d54a4"). InnerVolumeSpecName "kube-api-access-5j5rx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:08:27 crc kubenswrapper[4985]: I0127 09:08:27.530932 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1e1eaca-d76b-492d-9385-845bb33d54a4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c1e1eaca-d76b-492d-9385-845bb33d54a4" (UID: "c1e1eaca-d76b-492d-9385-845bb33d54a4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:08:27 crc kubenswrapper[4985]: I0127 09:08:27.564994 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5j5rx\" (UniqueName: \"kubernetes.io/projected/c1e1eaca-d76b-492d-9385-845bb33d54a4-kube-api-access-5j5rx\") on node \"crc\" DevicePath \"\"" Jan 27 09:08:27 crc kubenswrapper[4985]: I0127 09:08:27.565027 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1e1eaca-d76b-492d-9385-845bb33d54a4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 09:08:27 crc kubenswrapper[4985]: I0127 09:08:27.565038 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1e1eaca-d76b-492d-9385-845bb33d54a4-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 09:08:28 crc kubenswrapper[4985]: I0127 09:08:28.082867 4985 generic.go:334] "Generic (PLEG): container finished" podID="125d6a2f-ae94-4748-8c5d-b3788983b9c7" containerID="268e0ecb5d23be02a69720e473125a51058f65689f7433f9ad47acd437bec5b5" exitCode=0 Jan 27 09:08:28 crc kubenswrapper[4985]: I0127 09:08:28.082951 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cfg66r6" event={"ID":"125d6a2f-ae94-4748-8c5d-b3788983b9c7","Type":"ContainerDied","Data":"268e0ecb5d23be02a69720e473125a51058f65689f7433f9ad47acd437bec5b5"} Jan 27 09:08:28 crc kubenswrapper[4985]: I0127 09:08:28.083322 4985 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cfg66r6" event={"ID":"125d6a2f-ae94-4748-8c5d-b3788983b9c7","Type":"ContainerStarted","Data":"7f80821ad77102bb5833e9df7c910c351537cd453d0dca0c18eee1ba5f0e214b"} Jan 27 09:08:28 crc kubenswrapper[4985]: I0127 09:08:28.086079 4985 generic.go:334] "Generic (PLEG): container finished" podID="c1e1eaca-d76b-492d-9385-845bb33d54a4" containerID="32281a22e5a43920d2316c9f8fb0f782c06028979ee115e963ef3904be6e86ae" exitCode=0 Jan 27 09:08:28 crc kubenswrapper[4985]: I0127 09:08:28.086161 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hvstq" event={"ID":"c1e1eaca-d76b-492d-9385-845bb33d54a4","Type":"ContainerDied","Data":"32281a22e5a43920d2316c9f8fb0f782c06028979ee115e963ef3904be6e86ae"} Jan 27 09:08:28 crc kubenswrapper[4985]: I0127 09:08:28.086188 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hvstq" Jan 27 09:08:28 crc kubenswrapper[4985]: I0127 09:08:28.086216 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hvstq" event={"ID":"c1e1eaca-d76b-492d-9385-845bb33d54a4","Type":"ContainerDied","Data":"46e78f7495275f07311d5a1a54ef67a711781da34a208628f6d7ed83534d1ab6"} Jan 27 09:08:28 crc kubenswrapper[4985]: I0127 09:08:28.086248 4985 scope.go:117] "RemoveContainer" containerID="32281a22e5a43920d2316c9f8fb0f782c06028979ee115e963ef3904be6e86ae" Jan 27 09:08:28 crc kubenswrapper[4985]: I0127 09:08:28.106159 4985 scope.go:117] "RemoveContainer" containerID="cdcd433a7f0c828bd096eddd7021984cf6f6b383c1374d46fcd3de7d449b5160" Jan 27 09:08:28 crc kubenswrapper[4985]: I0127 09:08:28.132308 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hvstq"] Jan 27 09:08:28 crc kubenswrapper[4985]: I0127 09:08:28.142443 4985 
scope.go:117] "RemoveContainer" containerID="be508436e184d9baf782f720501347238c2ab01ae30caac91e3f73a4c2f82132" Jan 27 09:08:28 crc kubenswrapper[4985]: I0127 09:08:28.143748 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hvstq"] Jan 27 09:08:28 crc kubenswrapper[4985]: I0127 09:08:28.155637 4985 scope.go:117] "RemoveContainer" containerID="32281a22e5a43920d2316c9f8fb0f782c06028979ee115e963ef3904be6e86ae" Jan 27 09:08:28 crc kubenswrapper[4985]: E0127 09:08:28.156492 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32281a22e5a43920d2316c9f8fb0f782c06028979ee115e963ef3904be6e86ae\": container with ID starting with 32281a22e5a43920d2316c9f8fb0f782c06028979ee115e963ef3904be6e86ae not found: ID does not exist" containerID="32281a22e5a43920d2316c9f8fb0f782c06028979ee115e963ef3904be6e86ae" Jan 27 09:08:28 crc kubenswrapper[4985]: I0127 09:08:28.156556 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32281a22e5a43920d2316c9f8fb0f782c06028979ee115e963ef3904be6e86ae"} err="failed to get container status \"32281a22e5a43920d2316c9f8fb0f782c06028979ee115e963ef3904be6e86ae\": rpc error: code = NotFound desc = could not find container \"32281a22e5a43920d2316c9f8fb0f782c06028979ee115e963ef3904be6e86ae\": container with ID starting with 32281a22e5a43920d2316c9f8fb0f782c06028979ee115e963ef3904be6e86ae not found: ID does not exist" Jan 27 09:08:28 crc kubenswrapper[4985]: I0127 09:08:28.156588 4985 scope.go:117] "RemoveContainer" containerID="cdcd433a7f0c828bd096eddd7021984cf6f6b383c1374d46fcd3de7d449b5160" Jan 27 09:08:28 crc kubenswrapper[4985]: E0127 09:08:28.156905 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cdcd433a7f0c828bd096eddd7021984cf6f6b383c1374d46fcd3de7d449b5160\": container with ID starting with 
cdcd433a7f0c828bd096eddd7021984cf6f6b383c1374d46fcd3de7d449b5160 not found: ID does not exist" containerID="cdcd433a7f0c828bd096eddd7021984cf6f6b383c1374d46fcd3de7d449b5160" Jan 27 09:08:28 crc kubenswrapper[4985]: I0127 09:08:28.156951 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cdcd433a7f0c828bd096eddd7021984cf6f6b383c1374d46fcd3de7d449b5160"} err="failed to get container status \"cdcd433a7f0c828bd096eddd7021984cf6f6b383c1374d46fcd3de7d449b5160\": rpc error: code = NotFound desc = could not find container \"cdcd433a7f0c828bd096eddd7021984cf6f6b383c1374d46fcd3de7d449b5160\": container with ID starting with cdcd433a7f0c828bd096eddd7021984cf6f6b383c1374d46fcd3de7d449b5160 not found: ID does not exist" Jan 27 09:08:28 crc kubenswrapper[4985]: I0127 09:08:28.156979 4985 scope.go:117] "RemoveContainer" containerID="be508436e184d9baf782f720501347238c2ab01ae30caac91e3f73a4c2f82132" Jan 27 09:08:28 crc kubenswrapper[4985]: E0127 09:08:28.157304 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be508436e184d9baf782f720501347238c2ab01ae30caac91e3f73a4c2f82132\": container with ID starting with be508436e184d9baf782f720501347238c2ab01ae30caac91e3f73a4c2f82132 not found: ID does not exist" containerID="be508436e184d9baf782f720501347238c2ab01ae30caac91e3f73a4c2f82132" Jan 27 09:08:28 crc kubenswrapper[4985]: I0127 09:08:28.157340 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be508436e184d9baf782f720501347238c2ab01ae30caac91e3f73a4c2f82132"} err="failed to get container status \"be508436e184d9baf782f720501347238c2ab01ae30caac91e3f73a4c2f82132\": rpc error: code = NotFound desc = could not find container \"be508436e184d9baf782f720501347238c2ab01ae30caac91e3f73a4c2f82132\": container with ID starting with be508436e184d9baf782f720501347238c2ab01ae30caac91e3f73a4c2f82132 not found: ID does not 
exist" Jan 27 09:08:28 crc kubenswrapper[4985]: I0127 09:08:28.459297 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1e1eaca-d76b-492d-9385-845bb33d54a4" path="/var/lib/kubelet/pods/c1e1eaca-d76b-492d-9385-845bb33d54a4/volumes" Jan 27 09:08:29 crc kubenswrapper[4985]: I0127 09:08:29.106446 4985 generic.go:334] "Generic (PLEG): container finished" podID="125d6a2f-ae94-4748-8c5d-b3788983b9c7" containerID="d9a3c398c8c95c36848dd7d67a363d1a2c7bb3fe78724b97ea249c77d71952b2" exitCode=0 Jan 27 09:08:29 crc kubenswrapper[4985]: I0127 09:08:29.106561 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cfg66r6" event={"ID":"125d6a2f-ae94-4748-8c5d-b3788983b9c7","Type":"ContainerDied","Data":"d9a3c398c8c95c36848dd7d67a363d1a2c7bb3fe78724b97ea249c77d71952b2"} Jan 27 09:08:30 crc kubenswrapper[4985]: I0127 09:08:30.118622 4985 generic.go:334] "Generic (PLEG): container finished" podID="125d6a2f-ae94-4748-8c5d-b3788983b9c7" containerID="913171c87363e72b29e9d212f778ed2ad5d0397e63f041b29b38bb1afba9a27d" exitCode=0 Jan 27 09:08:30 crc kubenswrapper[4985]: I0127 09:08:30.118732 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cfg66r6" event={"ID":"125d6a2f-ae94-4748-8c5d-b3788983b9c7","Type":"ContainerDied","Data":"913171c87363e72b29e9d212f778ed2ad5d0397e63f041b29b38bb1afba9a27d"} Jan 27 09:08:31 crc kubenswrapper[4985]: I0127 09:08:31.415607 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cfg66r6" Jan 27 09:08:31 crc kubenswrapper[4985]: I0127 09:08:31.519477 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rmhh\" (UniqueName: \"kubernetes.io/projected/125d6a2f-ae94-4748-8c5d-b3788983b9c7-kube-api-access-8rmhh\") pod \"125d6a2f-ae94-4748-8c5d-b3788983b9c7\" (UID: \"125d6a2f-ae94-4748-8c5d-b3788983b9c7\") " Jan 27 09:08:31 crc kubenswrapper[4985]: I0127 09:08:31.519602 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/125d6a2f-ae94-4748-8c5d-b3788983b9c7-util\") pod \"125d6a2f-ae94-4748-8c5d-b3788983b9c7\" (UID: \"125d6a2f-ae94-4748-8c5d-b3788983b9c7\") " Jan 27 09:08:31 crc kubenswrapper[4985]: I0127 09:08:31.519714 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/125d6a2f-ae94-4748-8c5d-b3788983b9c7-bundle\") pod \"125d6a2f-ae94-4748-8c5d-b3788983b9c7\" (UID: \"125d6a2f-ae94-4748-8c5d-b3788983b9c7\") " Jan 27 09:08:31 crc kubenswrapper[4985]: I0127 09:08:31.520945 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/125d6a2f-ae94-4748-8c5d-b3788983b9c7-bundle" (OuterVolumeSpecName: "bundle") pod "125d6a2f-ae94-4748-8c5d-b3788983b9c7" (UID: "125d6a2f-ae94-4748-8c5d-b3788983b9c7"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:08:31 crc kubenswrapper[4985]: I0127 09:08:31.525223 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/125d6a2f-ae94-4748-8c5d-b3788983b9c7-kube-api-access-8rmhh" (OuterVolumeSpecName: "kube-api-access-8rmhh") pod "125d6a2f-ae94-4748-8c5d-b3788983b9c7" (UID: "125d6a2f-ae94-4748-8c5d-b3788983b9c7"). InnerVolumeSpecName "kube-api-access-8rmhh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:08:31 crc kubenswrapper[4985]: I0127 09:08:31.535732 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/125d6a2f-ae94-4748-8c5d-b3788983b9c7-util" (OuterVolumeSpecName: "util") pod "125d6a2f-ae94-4748-8c5d-b3788983b9c7" (UID: "125d6a2f-ae94-4748-8c5d-b3788983b9c7"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:08:31 crc kubenswrapper[4985]: I0127 09:08:31.621990 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rmhh\" (UniqueName: \"kubernetes.io/projected/125d6a2f-ae94-4748-8c5d-b3788983b9c7-kube-api-access-8rmhh\") on node \"crc\" DevicePath \"\"" Jan 27 09:08:31 crc kubenswrapper[4985]: I0127 09:08:31.622050 4985 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/125d6a2f-ae94-4748-8c5d-b3788983b9c7-util\") on node \"crc\" DevicePath \"\"" Jan 27 09:08:31 crc kubenswrapper[4985]: I0127 09:08:31.622061 4985 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/125d6a2f-ae94-4748-8c5d-b3788983b9c7-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:08:32 crc kubenswrapper[4985]: I0127 09:08:32.147835 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cfg66r6" event={"ID":"125d6a2f-ae94-4748-8c5d-b3788983b9c7","Type":"ContainerDied","Data":"7f80821ad77102bb5833e9df7c910c351537cd453d0dca0c18eee1ba5f0e214b"} Jan 27 09:08:32 crc kubenswrapper[4985]: I0127 09:08:32.147892 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cfg66r6" Jan 27 09:08:32 crc kubenswrapper[4985]: I0127 09:08:32.147895 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f80821ad77102bb5833e9df7c910c351537cd453d0dca0c18eee1ba5f0e214b" Jan 27 09:08:36 crc kubenswrapper[4985]: I0127 09:08:36.574820 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-6bfcf7b875-b87hw"] Jan 27 09:08:36 crc kubenswrapper[4985]: E0127 09:08:36.575108 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="125d6a2f-ae94-4748-8c5d-b3788983b9c7" containerName="extract" Jan 27 09:08:36 crc kubenswrapper[4985]: I0127 09:08:36.575124 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="125d6a2f-ae94-4748-8c5d-b3788983b9c7" containerName="extract" Jan 27 09:08:36 crc kubenswrapper[4985]: E0127 09:08:36.575137 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1e1eaca-d76b-492d-9385-845bb33d54a4" containerName="extract-utilities" Jan 27 09:08:36 crc kubenswrapper[4985]: I0127 09:08:36.575146 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1e1eaca-d76b-492d-9385-845bb33d54a4" containerName="extract-utilities" Jan 27 09:08:36 crc kubenswrapper[4985]: E0127 09:08:36.575170 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="125d6a2f-ae94-4748-8c5d-b3788983b9c7" containerName="util" Jan 27 09:08:36 crc kubenswrapper[4985]: I0127 09:08:36.575177 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="125d6a2f-ae94-4748-8c5d-b3788983b9c7" containerName="util" Jan 27 09:08:36 crc kubenswrapper[4985]: E0127 09:08:36.575191 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1e1eaca-d76b-492d-9385-845bb33d54a4" containerName="registry-server" Jan 27 09:08:36 crc kubenswrapper[4985]: I0127 09:08:36.575198 4985 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="c1e1eaca-d76b-492d-9385-845bb33d54a4" containerName="registry-server" Jan 27 09:08:36 crc kubenswrapper[4985]: E0127 09:08:36.575212 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="125d6a2f-ae94-4748-8c5d-b3788983b9c7" containerName="pull" Jan 27 09:08:36 crc kubenswrapper[4985]: I0127 09:08:36.575219 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="125d6a2f-ae94-4748-8c5d-b3788983b9c7" containerName="pull" Jan 27 09:08:36 crc kubenswrapper[4985]: E0127 09:08:36.575231 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1e1eaca-d76b-492d-9385-845bb33d54a4" containerName="extract-content" Jan 27 09:08:36 crc kubenswrapper[4985]: I0127 09:08:36.575238 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1e1eaca-d76b-492d-9385-845bb33d54a4" containerName="extract-content" Jan 27 09:08:36 crc kubenswrapper[4985]: I0127 09:08:36.575380 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1e1eaca-d76b-492d-9385-845bb33d54a4" containerName="registry-server" Jan 27 09:08:36 crc kubenswrapper[4985]: I0127 09:08:36.575394 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="125d6a2f-ae94-4748-8c5d-b3788983b9c7" containerName="extract" Jan 27 09:08:36 crc kubenswrapper[4985]: I0127 09:08:36.575850 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-6bfcf7b875-b87hw" Jan 27 09:08:36 crc kubenswrapper[4985]: I0127 09:08:36.579765 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-96d42" Jan 27 09:08:36 crc kubenswrapper[4985]: I0127 09:08:36.597624 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvq4m\" (UniqueName: \"kubernetes.io/projected/8a307c4b-92d8-478d-8376-6db40e90a2ae-kube-api-access-zvq4m\") pod \"openstack-operator-controller-init-6bfcf7b875-b87hw\" (UID: \"8a307c4b-92d8-478d-8376-6db40e90a2ae\") " pod="openstack-operators/openstack-operator-controller-init-6bfcf7b875-b87hw" Jan 27 09:08:36 crc kubenswrapper[4985]: I0127 09:08:36.607428 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-6bfcf7b875-b87hw"] Jan 27 09:08:36 crc kubenswrapper[4985]: I0127 09:08:36.699464 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvq4m\" (UniqueName: \"kubernetes.io/projected/8a307c4b-92d8-478d-8376-6db40e90a2ae-kube-api-access-zvq4m\") pod \"openstack-operator-controller-init-6bfcf7b875-b87hw\" (UID: \"8a307c4b-92d8-478d-8376-6db40e90a2ae\") " pod="openstack-operators/openstack-operator-controller-init-6bfcf7b875-b87hw" Jan 27 09:08:36 crc kubenswrapper[4985]: I0127 09:08:36.727169 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvq4m\" (UniqueName: \"kubernetes.io/projected/8a307c4b-92d8-478d-8376-6db40e90a2ae-kube-api-access-zvq4m\") pod \"openstack-operator-controller-init-6bfcf7b875-b87hw\" (UID: \"8a307c4b-92d8-478d-8376-6db40e90a2ae\") " pod="openstack-operators/openstack-operator-controller-init-6bfcf7b875-b87hw" Jan 27 09:08:36 crc kubenswrapper[4985]: I0127 09:08:36.895585 4985 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-6bfcf7b875-b87hw" Jan 27 09:08:37 crc kubenswrapper[4985]: I0127 09:08:37.484422 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-6bfcf7b875-b87hw"] Jan 27 09:08:38 crc kubenswrapper[4985]: I0127 09:08:38.194816 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-6bfcf7b875-b87hw" event={"ID":"8a307c4b-92d8-478d-8376-6db40e90a2ae","Type":"ContainerStarted","Data":"c49a915dd7b33b8b4cf7d1ce9677c369c4a2233fe4b1cfb2a8ec6097f14812b9"} Jan 27 09:08:42 crc kubenswrapper[4985]: I0127 09:08:42.221306 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-6bfcf7b875-b87hw" event={"ID":"8a307c4b-92d8-478d-8376-6db40e90a2ae","Type":"ContainerStarted","Data":"16e1fbb9e060d9586f95e8938150a1f9b0d71e59a67e7991257eb42102d6068b"} Jan 27 09:08:42 crc kubenswrapper[4985]: I0127 09:08:42.221834 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-6bfcf7b875-b87hw" Jan 27 09:08:42 crc kubenswrapper[4985]: I0127 09:08:42.256129 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-6bfcf7b875-b87hw" podStartSLOduration=1.9542072190000002 podStartE2EDuration="6.256105835s" podCreationTimestamp="2026-01-27 09:08:36 +0000 UTC" firstStartedPulling="2026-01-27 09:08:37.506362353 +0000 UTC m=+901.797457204" lastFinishedPulling="2026-01-27 09:08:41.808260979 +0000 UTC m=+906.099355820" observedRunningTime="2026-01-27 09:08:42.253855014 +0000 UTC m=+906.544949855" watchObservedRunningTime="2026-01-27 09:08:42.256105835 +0000 UTC m=+906.547200676" Jan 27 09:08:46 crc kubenswrapper[4985]: I0127 09:08:46.898964 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack-operators/openstack-operator-controller-init-6bfcf7b875-b87hw" Jan 27 09:08:50 crc kubenswrapper[4985]: I0127 09:08:50.476027 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jdk9l"] Jan 27 09:08:50 crc kubenswrapper[4985]: I0127 09:08:50.477924 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jdk9l" Jan 27 09:08:50 crc kubenswrapper[4985]: I0127 09:08:50.486826 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jdk9l"] Jan 27 09:08:50 crc kubenswrapper[4985]: I0127 09:08:50.592799 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/486e9921-d4e5-49a9-882e-cf5f591c9740-catalog-content\") pod \"community-operators-jdk9l\" (UID: \"486e9921-d4e5-49a9-882e-cf5f591c9740\") " pod="openshift-marketplace/community-operators-jdk9l" Jan 27 09:08:50 crc kubenswrapper[4985]: I0127 09:08:50.593064 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/486e9921-d4e5-49a9-882e-cf5f591c9740-utilities\") pod \"community-operators-jdk9l\" (UID: \"486e9921-d4e5-49a9-882e-cf5f591c9740\") " pod="openshift-marketplace/community-operators-jdk9l" Jan 27 09:08:50 crc kubenswrapper[4985]: I0127 09:08:50.593148 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsr42\" (UniqueName: \"kubernetes.io/projected/486e9921-d4e5-49a9-882e-cf5f591c9740-kube-api-access-zsr42\") pod \"community-operators-jdk9l\" (UID: \"486e9921-d4e5-49a9-882e-cf5f591c9740\") " pod="openshift-marketplace/community-operators-jdk9l" Jan 27 09:08:50 crc kubenswrapper[4985]: I0127 09:08:50.694935 4985 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/486e9921-d4e5-49a9-882e-cf5f591c9740-catalog-content\") pod \"community-operators-jdk9l\" (UID: \"486e9921-d4e5-49a9-882e-cf5f591c9740\") " pod="openshift-marketplace/community-operators-jdk9l" Jan 27 09:08:50 crc kubenswrapper[4985]: I0127 09:08:50.694996 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/486e9921-d4e5-49a9-882e-cf5f591c9740-utilities\") pod \"community-operators-jdk9l\" (UID: \"486e9921-d4e5-49a9-882e-cf5f591c9740\") " pod="openshift-marketplace/community-operators-jdk9l" Jan 27 09:08:50 crc kubenswrapper[4985]: I0127 09:08:50.695019 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsr42\" (UniqueName: \"kubernetes.io/projected/486e9921-d4e5-49a9-882e-cf5f591c9740-kube-api-access-zsr42\") pod \"community-operators-jdk9l\" (UID: \"486e9921-d4e5-49a9-882e-cf5f591c9740\") " pod="openshift-marketplace/community-operators-jdk9l" Jan 27 09:08:50 crc kubenswrapper[4985]: I0127 09:08:50.695875 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/486e9921-d4e5-49a9-882e-cf5f591c9740-catalog-content\") pod \"community-operators-jdk9l\" (UID: \"486e9921-d4e5-49a9-882e-cf5f591c9740\") " pod="openshift-marketplace/community-operators-jdk9l" Jan 27 09:08:50 crc kubenswrapper[4985]: I0127 09:08:50.696157 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/486e9921-d4e5-49a9-882e-cf5f591c9740-utilities\") pod \"community-operators-jdk9l\" (UID: \"486e9921-d4e5-49a9-882e-cf5f591c9740\") " pod="openshift-marketplace/community-operators-jdk9l" Jan 27 09:08:50 crc kubenswrapper[4985]: I0127 09:08:50.713596 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-zsr42\" (UniqueName: \"kubernetes.io/projected/486e9921-d4e5-49a9-882e-cf5f591c9740-kube-api-access-zsr42\") pod \"community-operators-jdk9l\" (UID: \"486e9921-d4e5-49a9-882e-cf5f591c9740\") " pod="openshift-marketplace/community-operators-jdk9l" Jan 27 09:08:50 crc kubenswrapper[4985]: I0127 09:08:50.830346 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jdk9l" Jan 27 09:08:51 crc kubenswrapper[4985]: I0127 09:08:51.312363 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jdk9l"] Jan 27 09:08:52 crc kubenswrapper[4985]: I0127 09:08:52.282850 4985 generic.go:334] "Generic (PLEG): container finished" podID="486e9921-d4e5-49a9-882e-cf5f591c9740" containerID="fbabf0e8dfc99e114472a08742e956e0da8e7887b14fefd9796360d0396efa0c" exitCode=0 Jan 27 09:08:52 crc kubenswrapper[4985]: I0127 09:08:52.283346 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jdk9l" event={"ID":"486e9921-d4e5-49a9-882e-cf5f591c9740","Type":"ContainerDied","Data":"fbabf0e8dfc99e114472a08742e956e0da8e7887b14fefd9796360d0396efa0c"} Jan 27 09:08:52 crc kubenswrapper[4985]: I0127 09:08:52.284702 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jdk9l" event={"ID":"486e9921-d4e5-49a9-882e-cf5f591c9740","Type":"ContainerStarted","Data":"947dc86a2adec10f7f63b9399a8936a87c7ef43b5c99b8c4f82b8915c24b3408"} Jan 27 09:08:54 crc kubenswrapper[4985]: I0127 09:08:54.297581 4985 generic.go:334] "Generic (PLEG): container finished" podID="486e9921-d4e5-49a9-882e-cf5f591c9740" containerID="5527916e9773bbbb43561a31c76ff67581b1f623568bcaa87d5131b92c28b826" exitCode=0 Jan 27 09:08:54 crc kubenswrapper[4985]: I0127 09:08:54.297733 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jdk9l" 
event={"ID":"486e9921-d4e5-49a9-882e-cf5f591c9740","Type":"ContainerDied","Data":"5527916e9773bbbb43561a31c76ff67581b1f623568bcaa87d5131b92c28b826"} Jan 27 09:08:55 crc kubenswrapper[4985]: I0127 09:08:55.336435 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jdk9l" event={"ID":"486e9921-d4e5-49a9-882e-cf5f591c9740","Type":"ContainerStarted","Data":"9c2d976f5eb76b7dac12573642c1066740d7c2d175ce540d66f83edae3dea01f"} Jan 27 09:08:55 crc kubenswrapper[4985]: I0127 09:08:55.359129 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jdk9l" podStartSLOduration=2.700183541 podStartE2EDuration="5.359111559s" podCreationTimestamp="2026-01-27 09:08:50 +0000 UTC" firstStartedPulling="2026-01-27 09:08:52.284577719 +0000 UTC m=+916.575672560" lastFinishedPulling="2026-01-27 09:08:54.943505737 +0000 UTC m=+919.234600578" observedRunningTime="2026-01-27 09:08:55.358041759 +0000 UTC m=+919.649136610" watchObservedRunningTime="2026-01-27 09:08:55.359111559 +0000 UTC m=+919.650206400" Jan 27 09:09:00 crc kubenswrapper[4985]: I0127 09:09:00.830873 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jdk9l" Jan 27 09:09:00 crc kubenswrapper[4985]: I0127 09:09:00.831394 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jdk9l" Jan 27 09:09:00 crc kubenswrapper[4985]: I0127 09:09:00.867812 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jdk9l" Jan 27 09:09:01 crc kubenswrapper[4985]: I0127 09:09:01.412528 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jdk9l" Jan 27 09:09:01 crc kubenswrapper[4985]: I0127 09:09:01.461828 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-jdk9l"] Jan 27 09:09:03 crc kubenswrapper[4985]: I0127 09:09:03.386740 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jdk9l" podUID="486e9921-d4e5-49a9-882e-cf5f591c9740" containerName="registry-server" containerID="cri-o://9c2d976f5eb76b7dac12573642c1066740d7c2d175ce540d66f83edae3dea01f" gracePeriod=2 Jan 27 09:09:03 crc kubenswrapper[4985]: I0127 09:09:03.790984 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jdk9l" Jan 27 09:09:03 crc kubenswrapper[4985]: I0127 09:09:03.871495 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsr42\" (UniqueName: \"kubernetes.io/projected/486e9921-d4e5-49a9-882e-cf5f591c9740-kube-api-access-zsr42\") pod \"486e9921-d4e5-49a9-882e-cf5f591c9740\" (UID: \"486e9921-d4e5-49a9-882e-cf5f591c9740\") " Jan 27 09:09:03 crc kubenswrapper[4985]: I0127 09:09:03.871573 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/486e9921-d4e5-49a9-882e-cf5f591c9740-catalog-content\") pod \"486e9921-d4e5-49a9-882e-cf5f591c9740\" (UID: \"486e9921-d4e5-49a9-882e-cf5f591c9740\") " Jan 27 09:09:03 crc kubenswrapper[4985]: I0127 09:09:03.871710 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/486e9921-d4e5-49a9-882e-cf5f591c9740-utilities\") pod \"486e9921-d4e5-49a9-882e-cf5f591c9740\" (UID: \"486e9921-d4e5-49a9-882e-cf5f591c9740\") " Jan 27 09:09:03 crc kubenswrapper[4985]: I0127 09:09:03.873584 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/486e9921-d4e5-49a9-882e-cf5f591c9740-utilities" (OuterVolumeSpecName: "utilities") pod "486e9921-d4e5-49a9-882e-cf5f591c9740" (UID: 
"486e9921-d4e5-49a9-882e-cf5f591c9740"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:09:03 crc kubenswrapper[4985]: I0127 09:09:03.877305 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/486e9921-d4e5-49a9-882e-cf5f591c9740-kube-api-access-zsr42" (OuterVolumeSpecName: "kube-api-access-zsr42") pod "486e9921-d4e5-49a9-882e-cf5f591c9740" (UID: "486e9921-d4e5-49a9-882e-cf5f591c9740"). InnerVolumeSpecName "kube-api-access-zsr42". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:09:03 crc kubenswrapper[4985]: I0127 09:09:03.936252 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/486e9921-d4e5-49a9-882e-cf5f591c9740-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "486e9921-d4e5-49a9-882e-cf5f591c9740" (UID: "486e9921-d4e5-49a9-882e-cf5f591c9740"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:09:03 crc kubenswrapper[4985]: I0127 09:09:03.976886 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/486e9921-d4e5-49a9-882e-cf5f591c9740-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 09:09:03 crc kubenswrapper[4985]: I0127 09:09:03.976986 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/486e9921-d4e5-49a9-882e-cf5f591c9740-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 09:09:03 crc kubenswrapper[4985]: I0127 09:09:03.977012 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zsr42\" (UniqueName: \"kubernetes.io/projected/486e9921-d4e5-49a9-882e-cf5f591c9740-kube-api-access-zsr42\") on node \"crc\" DevicePath \"\"" Jan 27 09:09:04 crc kubenswrapper[4985]: I0127 09:09:04.419131 4985 generic.go:334] "Generic (PLEG): container finished" 
podID="486e9921-d4e5-49a9-882e-cf5f591c9740" containerID="9c2d976f5eb76b7dac12573642c1066740d7c2d175ce540d66f83edae3dea01f" exitCode=0 Jan 27 09:09:04 crc kubenswrapper[4985]: I0127 09:09:04.419431 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jdk9l" Jan 27 09:09:04 crc kubenswrapper[4985]: I0127 09:09:04.419492 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jdk9l" event={"ID":"486e9921-d4e5-49a9-882e-cf5f591c9740","Type":"ContainerDied","Data":"9c2d976f5eb76b7dac12573642c1066740d7c2d175ce540d66f83edae3dea01f"} Jan 27 09:09:04 crc kubenswrapper[4985]: I0127 09:09:04.420021 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jdk9l" event={"ID":"486e9921-d4e5-49a9-882e-cf5f591c9740","Type":"ContainerDied","Data":"947dc86a2adec10f7f63b9399a8936a87c7ef43b5c99b8c4f82b8915c24b3408"} Jan 27 09:09:04 crc kubenswrapper[4985]: I0127 09:09:04.420068 4985 scope.go:117] "RemoveContainer" containerID="9c2d976f5eb76b7dac12573642c1066740d7c2d175ce540d66f83edae3dea01f" Jan 27 09:09:04 crc kubenswrapper[4985]: I0127 09:09:04.445113 4985 scope.go:117] "RemoveContainer" containerID="5527916e9773bbbb43561a31c76ff67581b1f623568bcaa87d5131b92c28b826" Jan 27 09:09:04 crc kubenswrapper[4985]: I0127 09:09:04.471046 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jdk9l"] Jan 27 09:09:04 crc kubenswrapper[4985]: I0127 09:09:04.471085 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jdk9l"] Jan 27 09:09:04 crc kubenswrapper[4985]: I0127 09:09:04.484015 4985 scope.go:117] "RemoveContainer" containerID="fbabf0e8dfc99e114472a08742e956e0da8e7887b14fefd9796360d0396efa0c" Jan 27 09:09:04 crc kubenswrapper[4985]: I0127 09:09:04.501647 4985 scope.go:117] "RemoveContainer" 
containerID="9c2d976f5eb76b7dac12573642c1066740d7c2d175ce540d66f83edae3dea01f" Jan 27 09:09:04 crc kubenswrapper[4985]: E0127 09:09:04.502250 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c2d976f5eb76b7dac12573642c1066740d7c2d175ce540d66f83edae3dea01f\": container with ID starting with 9c2d976f5eb76b7dac12573642c1066740d7c2d175ce540d66f83edae3dea01f not found: ID does not exist" containerID="9c2d976f5eb76b7dac12573642c1066740d7c2d175ce540d66f83edae3dea01f" Jan 27 09:09:04 crc kubenswrapper[4985]: I0127 09:09:04.502300 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c2d976f5eb76b7dac12573642c1066740d7c2d175ce540d66f83edae3dea01f"} err="failed to get container status \"9c2d976f5eb76b7dac12573642c1066740d7c2d175ce540d66f83edae3dea01f\": rpc error: code = NotFound desc = could not find container \"9c2d976f5eb76b7dac12573642c1066740d7c2d175ce540d66f83edae3dea01f\": container with ID starting with 9c2d976f5eb76b7dac12573642c1066740d7c2d175ce540d66f83edae3dea01f not found: ID does not exist" Jan 27 09:09:04 crc kubenswrapper[4985]: I0127 09:09:04.502358 4985 scope.go:117] "RemoveContainer" containerID="5527916e9773bbbb43561a31c76ff67581b1f623568bcaa87d5131b92c28b826" Jan 27 09:09:04 crc kubenswrapper[4985]: E0127 09:09:04.503708 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5527916e9773bbbb43561a31c76ff67581b1f623568bcaa87d5131b92c28b826\": container with ID starting with 5527916e9773bbbb43561a31c76ff67581b1f623568bcaa87d5131b92c28b826 not found: ID does not exist" containerID="5527916e9773bbbb43561a31c76ff67581b1f623568bcaa87d5131b92c28b826" Jan 27 09:09:04 crc kubenswrapper[4985]: I0127 09:09:04.503906 4985 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"5527916e9773bbbb43561a31c76ff67581b1f623568bcaa87d5131b92c28b826"} err="failed to get container status \"5527916e9773bbbb43561a31c76ff67581b1f623568bcaa87d5131b92c28b826\": rpc error: code = NotFound desc = could not find container \"5527916e9773bbbb43561a31c76ff67581b1f623568bcaa87d5131b92c28b826\": container with ID starting with 5527916e9773bbbb43561a31c76ff67581b1f623568bcaa87d5131b92c28b826 not found: ID does not exist" Jan 27 09:09:04 crc kubenswrapper[4985]: I0127 09:09:04.503940 4985 scope.go:117] "RemoveContainer" containerID="fbabf0e8dfc99e114472a08742e956e0da8e7887b14fefd9796360d0396efa0c" Jan 27 09:09:04 crc kubenswrapper[4985]: E0127 09:09:04.504217 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbabf0e8dfc99e114472a08742e956e0da8e7887b14fefd9796360d0396efa0c\": container with ID starting with fbabf0e8dfc99e114472a08742e956e0da8e7887b14fefd9796360d0396efa0c not found: ID does not exist" containerID="fbabf0e8dfc99e114472a08742e956e0da8e7887b14fefd9796360d0396efa0c" Jan 27 09:09:04 crc kubenswrapper[4985]: I0127 09:09:04.504241 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbabf0e8dfc99e114472a08742e956e0da8e7887b14fefd9796360d0396efa0c"} err="failed to get container status \"fbabf0e8dfc99e114472a08742e956e0da8e7887b14fefd9796360d0396efa0c\": rpc error: code = NotFound desc = could not find container \"fbabf0e8dfc99e114472a08742e956e0da8e7887b14fefd9796360d0396efa0c\": container with ID starting with fbabf0e8dfc99e114472a08742e956e0da8e7887b14fefd9796360d0396efa0c not found: ID does not exist" Jan 27 09:09:06 crc kubenswrapper[4985]: I0127 09:09:06.458908 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="486e9921-d4e5-49a9-882e-cf5f591c9740" path="/var/lib/kubelet/pods/486e9921-d4e5-49a9-882e-cf5f591c9740/volumes" Jan 27 09:09:25 crc kubenswrapper[4985]: I0127 
09:09:25.780434 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-75b8f798ff-xhnxv"] Jan 27 09:09:25 crc kubenswrapper[4985]: E0127 09:09:25.781740 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="486e9921-d4e5-49a9-882e-cf5f591c9740" containerName="extract-utilities" Jan 27 09:09:25 crc kubenswrapper[4985]: I0127 09:09:25.781763 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="486e9921-d4e5-49a9-882e-cf5f591c9740" containerName="extract-utilities" Jan 27 09:09:25 crc kubenswrapper[4985]: E0127 09:09:25.781786 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="486e9921-d4e5-49a9-882e-cf5f591c9740" containerName="registry-server" Jan 27 09:09:25 crc kubenswrapper[4985]: I0127 09:09:25.781795 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="486e9921-d4e5-49a9-882e-cf5f591c9740" containerName="registry-server" Jan 27 09:09:25 crc kubenswrapper[4985]: E0127 09:09:25.781808 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="486e9921-d4e5-49a9-882e-cf5f591c9740" containerName="extract-content" Jan 27 09:09:25 crc kubenswrapper[4985]: I0127 09:09:25.781816 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="486e9921-d4e5-49a9-882e-cf5f591c9740" containerName="extract-content" Jan 27 09:09:25 crc kubenswrapper[4985]: I0127 09:09:25.781976 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="486e9921-d4e5-49a9-882e-cf5f591c9740" containerName="registry-server" Jan 27 09:09:25 crc kubenswrapper[4985]: I0127 09:09:25.782704 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-75b8f798ff-xhnxv" Jan 27 09:09:25 crc kubenswrapper[4985]: I0127 09:09:25.784402 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-mdnwn" Jan 27 09:09:25 crc kubenswrapper[4985]: I0127 09:09:25.790106 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-75b8f798ff-xhnxv"] Jan 27 09:09:25 crc kubenswrapper[4985]: I0127 09:09:25.810270 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5fdc687f5-gbl76"] Jan 27 09:09:25 crc kubenswrapper[4985]: I0127 09:09:25.811540 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5fdc687f5-gbl76" Jan 27 09:09:25 crc kubenswrapper[4985]: I0127 09:09:25.813790 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-d4mgg" Jan 27 09:09:25 crc kubenswrapper[4985]: I0127 09:09:25.829613 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-76d4d5b8f9-7r9cn"] Jan 27 09:09:25 crc kubenswrapper[4985]: I0127 09:09:25.830732 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-76d4d5b8f9-7r9cn" Jan 27 09:09:25 crc kubenswrapper[4985]: I0127 09:09:25.839218 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-fbbl8" Jan 27 09:09:25 crc kubenswrapper[4985]: I0127 09:09:25.840885 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5fdc687f5-gbl76"] Jan 27 09:09:25 crc kubenswrapper[4985]: I0127 09:09:25.865323 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-84d5bb46b-tgqr5"] Jan 27 09:09:25 crc kubenswrapper[4985]: I0127 09:09:25.866373 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-84d5bb46b-tgqr5" Jan 27 09:09:25 crc kubenswrapper[4985]: I0127 09:09:25.870648 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-xsfct" Jan 27 09:09:25 crc kubenswrapper[4985]: I0127 09:09:25.880136 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-84d5bb46b-tgqr5"] Jan 27 09:09:25 crc kubenswrapper[4985]: I0127 09:09:25.900610 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-76d4d5b8f9-7r9cn"] Jan 27 09:09:25 crc kubenswrapper[4985]: I0127 09:09:25.912620 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-658dd65b86-lgpgn"] Jan 27 09:09:25 crc kubenswrapper[4985]: I0127 09:09:25.914066 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-lgpgn" Jan 27 09:09:25 crc kubenswrapper[4985]: I0127 09:09:25.918199 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-v7nj2"] Jan 27 09:09:25 crc kubenswrapper[4985]: I0127 09:09:25.919059 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-v7nj2" Jan 27 09:09:25 crc kubenswrapper[4985]: I0127 09:09:25.921945 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-7dlfk" Jan 27 09:09:25 crc kubenswrapper[4985]: I0127 09:09:25.926260 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-jqz6v" Jan 27 09:09:25 crc kubenswrapper[4985]: I0127 09:09:25.928048 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-658dd65b86-lgpgn"] Jan 27 09:09:25 crc kubenswrapper[4985]: I0127 09:09:25.929386 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxp2l\" (UniqueName: \"kubernetes.io/projected/451307f1-5d15-45d9-86c3-d45dc628d159-kube-api-access-bxp2l\") pod \"barbican-operator-controller-manager-75b8f798ff-xhnxv\" (UID: \"451307f1-5d15-45d9-86c3-d45dc628d159\") " pod="openstack-operators/barbican-operator-controller-manager-75b8f798ff-xhnxv" Jan 27 09:09:25 crc kubenswrapper[4985]: I0127 09:09:25.929423 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hskhk\" (UniqueName: \"kubernetes.io/projected/9aed123f-6fb0-4c65-ac80-e926677d5ecc-kube-api-access-hskhk\") pod \"designate-operator-controller-manager-76d4d5b8f9-7r9cn\" (UID: 
\"9aed123f-6fb0-4c65-ac80-e926677d5ecc\") " pod="openstack-operators/designate-operator-controller-manager-76d4d5b8f9-7r9cn" Jan 27 09:09:25 crc kubenswrapper[4985]: I0127 09:09:25.929445 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k86rh\" (UniqueName: \"kubernetes.io/projected/62b33436-aec3-4e07-a880-cefb55ec47be-kube-api-access-k86rh\") pod \"cinder-operator-controller-manager-5fdc687f5-gbl76\" (UID: \"62b33436-aec3-4e07-a880-cefb55ec47be\") " pod="openstack-operators/cinder-operator-controller-manager-5fdc687f5-gbl76" Jan 27 09:09:25 crc kubenswrapper[4985]: I0127 09:09:25.933456 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-v7nj2"] Jan 27 09:09:25 crc kubenswrapper[4985]: I0127 09:09:25.937863 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-54ccf4f85d-54sgw"] Jan 27 09:09:25 crc kubenswrapper[4985]: I0127 09:09:25.939686 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-54sgw" Jan 27 09:09:25 crc kubenswrapper[4985]: I0127 09:09:25.942291 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 27 09:09:25 crc kubenswrapper[4985]: I0127 09:09:25.945284 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-ds7lv" Jan 27 09:09:25 crc kubenswrapper[4985]: I0127 09:09:25.972437 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-78f8b7b89c-g9wnq"] Jan 27 09:09:25 crc kubenswrapper[4985]: I0127 09:09:25.973919 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-78f8b7b89c-g9wnq" Jan 27 09:09:25 crc kubenswrapper[4985]: I0127 09:09:25.976903 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-jsgnt" Jan 27 09:09:25 crc kubenswrapper[4985]: I0127 09:09:25.980552 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-54ccf4f85d-54sgw"] Jan 27 09:09:25 crc kubenswrapper[4985]: I0127 09:09:25.985159 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-58865f87b4-v6n6c"] Jan 27 09:09:25 crc kubenswrapper[4985]: I0127 09:09:25.986047 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-58865f87b4-v6n6c" Jan 27 09:09:25 crc kubenswrapper[4985]: I0127 09:09:25.991116 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-78b8f8fd84-jjq84"] Jan 27 09:09:25 crc kubenswrapper[4985]: I0127 09:09:25.991920 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78b8f8fd84-jjq84" Jan 27 09:09:25 crc kubenswrapper[4985]: I0127 09:09:25.995498 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-9g8k5" Jan 27 09:09:25 crc kubenswrapper[4985]: I0127 09:09:25.995750 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-msrm8" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.000205 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-78f8b7b89c-g9wnq"] Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.030786 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phbk5\" (UniqueName: \"kubernetes.io/projected/c9b11814-36a8-4736-b144-358d8f2c7268-kube-api-access-phbk5\") pod \"heat-operator-controller-manager-658dd65b86-lgpgn\" (UID: \"c9b11814-36a8-4736-b144-358d8f2c7268\") " pod="openstack-operators/heat-operator-controller-manager-658dd65b86-lgpgn" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.030856 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cw2xl\" (UniqueName: \"kubernetes.io/projected/d51fc084-83b4-4f09-baa5-59842d67853e-kube-api-access-cw2xl\") pod \"horizon-operator-controller-manager-7f5ddd8d7b-v7nj2\" (UID: \"d51fc084-83b4-4f09-baa5-59842d67853e\") " pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-v7nj2" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.030917 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29s6h\" (UniqueName: \"kubernetes.io/projected/457e511d-a1e8-453d-adfb-68177508f318-kube-api-access-29s6h\") pod 
\"infra-operator-controller-manager-54ccf4f85d-54sgw\" (UID: \"457e511d-a1e8-453d-adfb-68177508f318\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-54sgw" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.030945 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxp2l\" (UniqueName: \"kubernetes.io/projected/451307f1-5d15-45d9-86c3-d45dc628d159-kube-api-access-bxp2l\") pod \"barbican-operator-controller-manager-75b8f798ff-xhnxv\" (UID: \"451307f1-5d15-45d9-86c3-d45dc628d159\") " pod="openstack-operators/barbican-operator-controller-manager-75b8f798ff-xhnxv" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.030967 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hskhk\" (UniqueName: \"kubernetes.io/projected/9aed123f-6fb0-4c65-ac80-e926677d5ecc-kube-api-access-hskhk\") pod \"designate-operator-controller-manager-76d4d5b8f9-7r9cn\" (UID: \"9aed123f-6fb0-4c65-ac80-e926677d5ecc\") " pod="openstack-operators/designate-operator-controller-manager-76d4d5b8f9-7r9cn" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.030986 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k86rh\" (UniqueName: \"kubernetes.io/projected/62b33436-aec3-4e07-a880-cefb55ec47be-kube-api-access-k86rh\") pod \"cinder-operator-controller-manager-5fdc687f5-gbl76\" (UID: \"62b33436-aec3-4e07-a880-cefb55ec47be\") " pod="openstack-operators/cinder-operator-controller-manager-5fdc687f5-gbl76" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.031010 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgcsq\" (UniqueName: \"kubernetes.io/projected/f3a7eea8-cdc7-40d1-a558-2ba1606c646a-kube-api-access-qgcsq\") pod \"glance-operator-controller-manager-84d5bb46b-tgqr5\" (UID: \"f3a7eea8-cdc7-40d1-a558-2ba1606c646a\") " 
pod="openstack-operators/glance-operator-controller-manager-84d5bb46b-tgqr5" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.031037 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/457e511d-a1e8-453d-adfb-68177508f318-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-54sgw\" (UID: \"457e511d-a1e8-453d-adfb-68177508f318\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-54sgw" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.042079 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-58865f87b4-v6n6c"] Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.057028 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-7b88bfc995-khxgm"] Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.070030 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-khxgm" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.071893 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxp2l\" (UniqueName: \"kubernetes.io/projected/451307f1-5d15-45d9-86c3-d45dc628d159-kube-api-access-bxp2l\") pod \"barbican-operator-controller-manager-75b8f798ff-xhnxv\" (UID: \"451307f1-5d15-45d9-86c3-d45dc628d159\") " pod="openstack-operators/barbican-operator-controller-manager-75b8f798ff-xhnxv" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.094649 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hskhk\" (UniqueName: \"kubernetes.io/projected/9aed123f-6fb0-4c65-ac80-e926677d5ecc-kube-api-access-hskhk\") pod \"designate-operator-controller-manager-76d4d5b8f9-7r9cn\" (UID: \"9aed123f-6fb0-4c65-ac80-e926677d5ecc\") " pod="openstack-operators/designate-operator-controller-manager-76d4d5b8f9-7r9cn" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.095267 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k86rh\" (UniqueName: \"kubernetes.io/projected/62b33436-aec3-4e07-a880-cefb55ec47be-kube-api-access-k86rh\") pod \"cinder-operator-controller-manager-5fdc687f5-gbl76\" (UID: \"62b33436-aec3-4e07-a880-cefb55ec47be\") " pod="openstack-operators/cinder-operator-controller-manager-5fdc687f5-gbl76" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.095450 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-ss8ht" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.105053 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-75b8f798ff-xhnxv" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.112638 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78b8f8fd84-jjq84"] Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.132024 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5fdc687f5-gbl76" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.132684 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxr6z\" (UniqueName: \"kubernetes.io/projected/d0f751b3-5f7d-4756-b959-960ebca3eeaf-kube-api-access-cxr6z\") pod \"keystone-operator-controller-manager-78f8b7b89c-g9wnq\" (UID: \"d0f751b3-5f7d-4756-b959-960ebca3eeaf\") " pod="openstack-operators/keystone-operator-controller-manager-78f8b7b89c-g9wnq" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.132765 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phbk5\" (UniqueName: \"kubernetes.io/projected/c9b11814-36a8-4736-b144-358d8f2c7268-kube-api-access-phbk5\") pod \"heat-operator-controller-manager-658dd65b86-lgpgn\" (UID: \"c9b11814-36a8-4736-b144-358d8f2c7268\") " pod="openstack-operators/heat-operator-controller-manager-658dd65b86-lgpgn" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.132811 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cw2xl\" (UniqueName: \"kubernetes.io/projected/d51fc084-83b4-4f09-baa5-59842d67853e-kube-api-access-cw2xl\") pod \"horizon-operator-controller-manager-7f5ddd8d7b-v7nj2\" (UID: \"d51fc084-83b4-4f09-baa5-59842d67853e\") " pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-v7nj2" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.132849 4985 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjkm2\" (UniqueName: \"kubernetes.io/projected/150467a4-4f48-49a7-9356-05b11babc187-kube-api-access-pjkm2\") pod \"ironic-operator-controller-manager-58865f87b4-v6n6c\" (UID: \"150467a4-4f48-49a7-9356-05b11babc187\") " pod="openstack-operators/ironic-operator-controller-manager-58865f87b4-v6n6c" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.132917 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29s6h\" (UniqueName: \"kubernetes.io/projected/457e511d-a1e8-453d-adfb-68177508f318-kube-api-access-29s6h\") pod \"infra-operator-controller-manager-54ccf4f85d-54sgw\" (UID: \"457e511d-a1e8-453d-adfb-68177508f318\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-54sgw" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.132946 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgcsq\" (UniqueName: \"kubernetes.io/projected/f3a7eea8-cdc7-40d1-a558-2ba1606c646a-kube-api-access-qgcsq\") pod \"glance-operator-controller-manager-84d5bb46b-tgqr5\" (UID: \"f3a7eea8-cdc7-40d1-a558-2ba1606c646a\") " pod="openstack-operators/glance-operator-controller-manager-84d5bb46b-tgqr5" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.132977 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksnk9\" (UniqueName: \"kubernetes.io/projected/d79a41eb-b6b8-47c6-a14c-e2de4a932377-kube-api-access-ksnk9\") pod \"manila-operator-controller-manager-78b8f8fd84-jjq84\" (UID: \"d79a41eb-b6b8-47c6-a14c-e2de4a932377\") " pod="openstack-operators/manila-operator-controller-manager-78b8f8fd84-jjq84" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.133003 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/457e511d-a1e8-453d-adfb-68177508f318-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-54sgw\" (UID: \"457e511d-a1e8-453d-adfb-68177508f318\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-54sgw" Jan 27 09:09:26 crc kubenswrapper[4985]: E0127 09:09:26.133127 4985 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 09:09:26 crc kubenswrapper[4985]: E0127 09:09:26.133184 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/457e511d-a1e8-453d-adfb-68177508f318-cert podName:457e511d-a1e8-453d-adfb-68177508f318 nodeName:}" failed. No retries permitted until 2026-01-27 09:09:26.633163803 +0000 UTC m=+950.924258644 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/457e511d-a1e8-453d-adfb-68177508f318-cert") pod "infra-operator-controller-manager-54ccf4f85d-54sgw" (UID: "457e511d-a1e8-453d-adfb-68177508f318") : secret "infra-operator-webhook-server-cert" not found Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.155734 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-76d4d5b8f9-7r9cn" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.198006 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cw2xl\" (UniqueName: \"kubernetes.io/projected/d51fc084-83b4-4f09-baa5-59842d67853e-kube-api-access-cw2xl\") pod \"horizon-operator-controller-manager-7f5ddd8d7b-v7nj2\" (UID: \"d51fc084-83b4-4f09-baa5-59842d67853e\") " pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-v7nj2" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.200933 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phbk5\" (UniqueName: \"kubernetes.io/projected/c9b11814-36a8-4736-b144-358d8f2c7268-kube-api-access-phbk5\") pod \"heat-operator-controller-manager-658dd65b86-lgpgn\" (UID: \"c9b11814-36a8-4736-b144-358d8f2c7268\") " pod="openstack-operators/heat-operator-controller-manager-658dd65b86-lgpgn" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.205066 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgcsq\" (UniqueName: \"kubernetes.io/projected/f3a7eea8-cdc7-40d1-a558-2ba1606c646a-kube-api-access-qgcsq\") pod \"glance-operator-controller-manager-84d5bb46b-tgqr5\" (UID: \"f3a7eea8-cdc7-40d1-a558-2ba1606c646a\") " pod="openstack-operators/glance-operator-controller-manager-84d5bb46b-tgqr5" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.214590 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29s6h\" (UniqueName: \"kubernetes.io/projected/457e511d-a1e8-453d-adfb-68177508f318-kube-api-access-29s6h\") pod \"infra-operator-controller-manager-54ccf4f85d-54sgw\" (UID: \"457e511d-a1e8-453d-adfb-68177508f318\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-54sgw" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.218210 4985 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-569695f6c5-4bggc"] Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.219052 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-569695f6c5-4bggc" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.242963 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-jpkjw" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.244711 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksnk9\" (UniqueName: \"kubernetes.io/projected/d79a41eb-b6b8-47c6-a14c-e2de4a932377-kube-api-access-ksnk9\") pod \"manila-operator-controller-manager-78b8f8fd84-jjq84\" (UID: \"d79a41eb-b6b8-47c6-a14c-e2de4a932377\") " pod="openstack-operators/manila-operator-controller-manager-78b8f8fd84-jjq84" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.244747 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fs9lg\" (UniqueName: \"kubernetes.io/projected/71fcba69-40b1-4d11-912d-4c52b1a044fe-kube-api-access-fs9lg\") pod \"mariadb-operator-controller-manager-7b88bfc995-khxgm\" (UID: \"71fcba69-40b1-4d11-912d-4c52b1a044fe\") " pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-khxgm" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.244784 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxr6z\" (UniqueName: \"kubernetes.io/projected/d0f751b3-5f7d-4756-b959-960ebca3eeaf-kube-api-access-cxr6z\") pod \"keystone-operator-controller-manager-78f8b7b89c-g9wnq\" (UID: \"d0f751b3-5f7d-4756-b959-960ebca3eeaf\") " pod="openstack-operators/keystone-operator-controller-manager-78f8b7b89c-g9wnq" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.244839 4985 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjkm2\" (UniqueName: \"kubernetes.io/projected/150467a4-4f48-49a7-9356-05b11babc187-kube-api-access-pjkm2\") pod \"ironic-operator-controller-manager-58865f87b4-v6n6c\" (UID: \"150467a4-4f48-49a7-9356-05b11babc187\") " pod="openstack-operators/ironic-operator-controller-manager-58865f87b4-v6n6c" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.253362 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-lgpgn" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.254571 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-7b88bfc995-khxgm"] Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.268685 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-v7nj2" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.277902 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjkm2\" (UniqueName: \"kubernetes.io/projected/150467a4-4f48-49a7-9356-05b11babc187-kube-api-access-pjkm2\") pod \"ironic-operator-controller-manager-58865f87b4-v6n6c\" (UID: \"150467a4-4f48-49a7-9356-05b11babc187\") " pod="openstack-operators/ironic-operator-controller-manager-58865f87b4-v6n6c" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.323312 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxr6z\" (UniqueName: \"kubernetes.io/projected/d0f751b3-5f7d-4756-b959-960ebca3eeaf-kube-api-access-cxr6z\") pod \"keystone-operator-controller-manager-78f8b7b89c-g9wnq\" (UID: \"d0f751b3-5f7d-4756-b959-960ebca3eeaf\") " pod="openstack-operators/keystone-operator-controller-manager-78f8b7b89c-g9wnq" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.329277 4985 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksnk9\" (UniqueName: \"kubernetes.io/projected/d79a41eb-b6b8-47c6-a14c-e2de4a932377-kube-api-access-ksnk9\") pod \"manila-operator-controller-manager-78b8f8fd84-jjq84\" (UID: \"d79a41eb-b6b8-47c6-a14c-e2de4a932377\") " pod="openstack-operators/manila-operator-controller-manager-78b8f8fd84-jjq84" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.329359 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-569695f6c5-4bggc"] Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.337102 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-58865f87b4-v6n6c" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.348628 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fs9lg\" (UniqueName: \"kubernetes.io/projected/71fcba69-40b1-4d11-912d-4c52b1a044fe-kube-api-access-fs9lg\") pod \"mariadb-operator-controller-manager-7b88bfc995-khxgm\" (UID: \"71fcba69-40b1-4d11-912d-4c52b1a044fe\") " pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-khxgm" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.348818 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpbn6\" (UniqueName: \"kubernetes.io/projected/5ecbd421-8017-44e8-bcf4-4416cb6cd7ad-kube-api-access-lpbn6\") pod \"neutron-operator-controller-manager-569695f6c5-4bggc\" (UID: \"5ecbd421-8017-44e8-bcf4-4416cb6cd7ad\") " pod="openstack-operators/neutron-operator-controller-manager-569695f6c5-4bggc" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.348925 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-74ffd97575-wvnwv"] Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.350723 4985 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-74ffd97575-wvnwv" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.364549 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-2wprv" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.368087 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-74ffd97575-wvnwv"] Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.394153 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78b8f8fd84-jjq84" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.394773 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7bf4858b78-wcmtg"] Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.395933 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7bf4858b78-wcmtg" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.410278 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-btjt9" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.410780 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7bf4858b78-wcmtg"] Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.426033 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fs9lg\" (UniqueName: \"kubernetes.io/projected/71fcba69-40b1-4d11-912d-4c52b1a044fe-kube-api-access-fs9lg\") pod \"mariadb-operator-controller-manager-7b88bfc995-khxgm\" (UID: \"71fcba69-40b1-4d11-912d-4c52b1a044fe\") " pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-khxgm" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.431356 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7bd95ffd6dj2sgf"] Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.432583 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7bd95ffd6dj2sgf" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.437954 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-fnhpv" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.442044 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.457875 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxcc7\" (UniqueName: \"kubernetes.io/projected/989c908a-4026-4ebd-9b57-0f9e2701b91a-kube-api-access-hxcc7\") pod \"nova-operator-controller-manager-74ffd97575-wvnwv\" (UID: \"989c908a-4026-4ebd-9b57-0f9e2701b91a\") " pod="openstack-operators/nova-operator-controller-manager-74ffd97575-wvnwv" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.458043 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pz42\" (UniqueName: \"kubernetes.io/projected/52d05040-2965-4f90-abc9-558c17a0e37d-kube-api-access-8pz42\") pod \"octavia-operator-controller-manager-7bf4858b78-wcmtg\" (UID: \"52d05040-2965-4f90-abc9-558c17a0e37d\") " pod="openstack-operators/octavia-operator-controller-manager-7bf4858b78-wcmtg" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.458104 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpbn6\" (UniqueName: \"kubernetes.io/projected/5ecbd421-8017-44e8-bcf4-4416cb6cd7ad-kube-api-access-lpbn6\") pod \"neutron-operator-controller-manager-569695f6c5-4bggc\" (UID: \"5ecbd421-8017-44e8-bcf4-4416cb6cd7ad\") " pod="openstack-operators/neutron-operator-controller-manager-569695f6c5-4bggc" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 
09:09:26.493501 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-84d5bb46b-tgqr5" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.510429 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bf6d4f946-6vbp6"] Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.511343 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-6vbp6" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.527204 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpbn6\" (UniqueName: \"kubernetes.io/projected/5ecbd421-8017-44e8-bcf4-4416cb6cd7ad-kube-api-access-lpbn6\") pod \"neutron-operator-controller-manager-569695f6c5-4bggc\" (UID: \"5ecbd421-8017-44e8-bcf4-4416cb6cd7ad\") " pod="openstack-operators/neutron-operator-controller-manager-569695f6c5-4bggc" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.527712 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-xldb4" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.551323 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-khxgm" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.558614 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bf6d4f946-6vbp6"] Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.560471 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c1774a8d-aed6-4be4-80c3-1182fb0456d3-cert\") pod \"openstack-baremetal-operator-controller-manager-7bd95ffd6dj2sgf\" (UID: \"c1774a8d-aed6-4be4-80c3-1182fb0456d3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7bd95ffd6dj2sgf" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.560532 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wr6rf\" (UniqueName: \"kubernetes.io/projected/c1774a8d-aed6-4be4-80c3-1182fb0456d3-kube-api-access-wr6rf\") pod \"openstack-baremetal-operator-controller-manager-7bd95ffd6dj2sgf\" (UID: \"c1774a8d-aed6-4be4-80c3-1182fb0456d3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7bd95ffd6dj2sgf" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.560582 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pz42\" (UniqueName: \"kubernetes.io/projected/52d05040-2965-4f90-abc9-558c17a0e37d-kube-api-access-8pz42\") pod \"octavia-operator-controller-manager-7bf4858b78-wcmtg\" (UID: \"52d05040-2965-4f90-abc9-558c17a0e37d\") " pod="openstack-operators/octavia-operator-controller-manager-7bf4858b78-wcmtg" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.560652 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxcc7\" (UniqueName: \"kubernetes.io/projected/989c908a-4026-4ebd-9b57-0f9e2701b91a-kube-api-access-hxcc7\") pod 
\"nova-operator-controller-manager-74ffd97575-wvnwv\" (UID: \"989c908a-4026-4ebd-9b57-0f9e2701b91a\") " pod="openstack-operators/nova-operator-controller-manager-74ffd97575-wvnwv" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.580257 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-569695f6c5-4bggc" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.613245 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-78f8b7b89c-g9wnq" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.631687 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7bd95ffd6dj2sgf"] Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.642531 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pz42\" (UniqueName: \"kubernetes.io/projected/52d05040-2965-4f90-abc9-558c17a0e37d-kube-api-access-8pz42\") pod \"octavia-operator-controller-manager-7bf4858b78-wcmtg\" (UID: \"52d05040-2965-4f90-abc9-558c17a0e37d\") " pod="openstack-operators/octavia-operator-controller-manager-7bf4858b78-wcmtg" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.648169 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxcc7\" (UniqueName: \"kubernetes.io/projected/989c908a-4026-4ebd-9b57-0f9e2701b91a-kube-api-access-hxcc7\") pod \"nova-operator-controller-manager-74ffd97575-wvnwv\" (UID: \"989c908a-4026-4ebd-9b57-0f9e2701b91a\") " pod="openstack-operators/nova-operator-controller-manager-74ffd97575-wvnwv" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.662022 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wr6rf\" (UniqueName: \"kubernetes.io/projected/c1774a8d-aed6-4be4-80c3-1182fb0456d3-kube-api-access-wr6rf\") 
pod \"openstack-baremetal-operator-controller-manager-7bd95ffd6dj2sgf\" (UID: \"c1774a8d-aed6-4be4-80c3-1182fb0456d3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7bd95ffd6dj2sgf" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.662093 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdkc7\" (UniqueName: \"kubernetes.io/projected/86412e8a-4c97-42a8-a3f8-dca6204f426a-kube-api-access-bdkc7\") pod \"ovn-operator-controller-manager-bf6d4f946-6vbp6\" (UID: \"86412e8a-4c97-42a8-a3f8-dca6204f426a\") " pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-6vbp6" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.662150 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/457e511d-a1e8-453d-adfb-68177508f318-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-54sgw\" (UID: \"457e511d-a1e8-453d-adfb-68177508f318\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-54sgw" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.662182 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c1774a8d-aed6-4be4-80c3-1182fb0456d3-cert\") pod \"openstack-baremetal-operator-controller-manager-7bd95ffd6dj2sgf\" (UID: \"c1774a8d-aed6-4be4-80c3-1182fb0456d3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7bd95ffd6dj2sgf" Jan 27 09:09:26 crc kubenswrapper[4985]: E0127 09:09:26.662290 4985 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 09:09:26 crc kubenswrapper[4985]: E0127 09:09:26.662330 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1774a8d-aed6-4be4-80c3-1182fb0456d3-cert 
podName:c1774a8d-aed6-4be4-80c3-1182fb0456d3 nodeName:}" failed. No retries permitted until 2026-01-27 09:09:27.162316897 +0000 UTC m=+951.453411738 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c1774a8d-aed6-4be4-80c3-1182fb0456d3-cert") pod "openstack-baremetal-operator-controller-manager-7bd95ffd6dj2sgf" (UID: "c1774a8d-aed6-4be4-80c3-1182fb0456d3") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 09:09:26 crc kubenswrapper[4985]: E0127 09:09:26.662680 4985 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 09:09:26 crc kubenswrapper[4985]: E0127 09:09:26.662705 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/457e511d-a1e8-453d-adfb-68177508f318-cert podName:457e511d-a1e8-453d-adfb-68177508f318 nodeName:}" failed. No retries permitted until 2026-01-27 09:09:27.662695757 +0000 UTC m=+951.953790598 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/457e511d-a1e8-453d-adfb-68177508f318-cert") pod "infra-operator-controller-manager-54ccf4f85d-54sgw" (UID: "457e511d-a1e8-453d-adfb-68177508f318") : secret "infra-operator-webhook-server-cert" not found Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.692676 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wr6rf\" (UniqueName: \"kubernetes.io/projected/c1774a8d-aed6-4be4-80c3-1182fb0456d3-kube-api-access-wr6rf\") pod \"openstack-baremetal-operator-controller-manager-7bd95ffd6dj2sgf\" (UID: \"c1774a8d-aed6-4be4-80c3-1182fb0456d3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7bd95ffd6dj2sgf" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.702355 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-74ffd97575-wvnwv" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.718539 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-7748d79f84-fc5hb"] Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.719485 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-7748d79f84-fc5hb" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.727018 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-vgsqg" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.728050 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7bf4858b78-wcmtg" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.741220 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-65596dbf77-hwbtq"] Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.742100 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-65596dbf77-hwbtq" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.754683 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-l4tlw" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.767097 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdkc7\" (UniqueName: \"kubernetes.io/projected/86412e8a-4c97-42a8-a3f8-dca6204f426a-kube-api-access-bdkc7\") pod \"ovn-operator-controller-manager-bf6d4f946-6vbp6\" (UID: \"86412e8a-4c97-42a8-a3f8-dca6204f426a\") " pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-6vbp6" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.780625 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-7748d79f84-fc5hb"] Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.800047 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdkc7\" (UniqueName: \"kubernetes.io/projected/86412e8a-4c97-42a8-a3f8-dca6204f426a-kube-api-access-bdkc7\") pod \"ovn-operator-controller-manager-bf6d4f946-6vbp6\" (UID: \"86412e8a-4c97-42a8-a3f8-dca6204f426a\") " pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-6vbp6" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.820410 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7db57dc8bf-bv8sl"] Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.822005 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7db57dc8bf-bv8sl" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.826681 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-shhlw" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.843658 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-65596dbf77-hwbtq"] Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.863831 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-6vbp6" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.868587 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lhc8\" (UniqueName: \"kubernetes.io/projected/e47ed97b-bb77-4d2a-899e-87c657f316d7-kube-api-access-5lhc8\") pod \"swift-operator-controller-manager-65596dbf77-hwbtq\" (UID: \"e47ed97b-bb77-4d2a-899e-87c657f316d7\") " pod="openstack-operators/swift-operator-controller-manager-65596dbf77-hwbtq" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.868621 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8k828\" (UniqueName: \"kubernetes.io/projected/c8880a39-7486-454b-aa9f-0cd2b1148d60-kube-api-access-8k828\") pod \"placement-operator-controller-manager-7748d79f84-fc5hb\" (UID: \"c8880a39-7486-454b-aa9f-0cd2b1148d60\") " pod="openstack-operators/placement-operator-controller-manager-7748d79f84-fc5hb" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.868647 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8r6jp\" (UniqueName: \"kubernetes.io/projected/12271ff4-e21a-43f4-8995-9e9257e11067-kube-api-access-8r6jp\") pod 
\"telemetry-operator-controller-manager-7db57dc8bf-bv8sl\" (UID: \"12271ff4-e21a-43f4-8995-9e9257e11067\") " pod="openstack-operators/telemetry-operator-controller-manager-7db57dc8bf-bv8sl" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.885328 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7db57dc8bf-bv8sl"] Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.892325 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6476466c7c-tc8kq"] Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.893922 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6476466c7c-tc8kq" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.897244 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-f44ss" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.903478 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-6c866cfdcb-phnxc"] Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.904717 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-phnxc" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.907212 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-q9mjf" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.910661 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6476466c7c-tc8kq"] Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.931430 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-6c866cfdcb-phnxc"] Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.977110 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lhc8\" (UniqueName: \"kubernetes.io/projected/e47ed97b-bb77-4d2a-899e-87c657f316d7-kube-api-access-5lhc8\") pod \"swift-operator-controller-manager-65596dbf77-hwbtq\" (UID: \"e47ed97b-bb77-4d2a-899e-87c657f316d7\") " pod="openstack-operators/swift-operator-controller-manager-65596dbf77-hwbtq" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.977165 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8k828\" (UniqueName: \"kubernetes.io/projected/c8880a39-7486-454b-aa9f-0cd2b1148d60-kube-api-access-8k828\") pod \"placement-operator-controller-manager-7748d79f84-fc5hb\" (UID: \"c8880a39-7486-454b-aa9f-0cd2b1148d60\") " pod="openstack-operators/placement-operator-controller-manager-7748d79f84-fc5hb" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.977191 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8r6jp\" (UniqueName: \"kubernetes.io/projected/12271ff4-e21a-43f4-8995-9e9257e11067-kube-api-access-8r6jp\") pod \"telemetry-operator-controller-manager-7db57dc8bf-bv8sl\" (UID: 
\"12271ff4-e21a-43f4-8995-9e9257e11067\") " pod="openstack-operators/telemetry-operator-controller-manager-7db57dc8bf-bv8sl" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.977237 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-md49t\" (UniqueName: \"kubernetes.io/projected/e5c46366-d781-46fe-a7c1-d43fd82c4259-kube-api-access-md49t\") pod \"test-operator-controller-manager-6c866cfdcb-phnxc\" (UID: \"e5c46366-d781-46fe-a7c1-d43fd82c4259\") " pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-phnxc" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.977320 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgf24\" (UniqueName: \"kubernetes.io/projected/c59bd9e5-d547-4c13-ad44-36984b2c7b7e-kube-api-access-pgf24\") pod \"watcher-operator-controller-manager-6476466c7c-tc8kq\" (UID: \"c59bd9e5-d547-4c13-ad44-36984b2c7b7e\") " pod="openstack-operators/watcher-operator-controller-manager-6476466c7c-tc8kq" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.989225 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-76958f4d87-lxlgc"] Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.990708 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-76958f4d87-lxlgc" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.992910 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-zrxqn" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.993031 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 27 09:09:26 crc kubenswrapper[4985]: I0127 09:09:26.993049 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 27 09:09:27 crc kubenswrapper[4985]: I0127 09:09:26.999627 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-76958f4d87-lxlgc"] Jan 27 09:09:27 crc kubenswrapper[4985]: I0127 09:09:27.003609 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lhc8\" (UniqueName: \"kubernetes.io/projected/e47ed97b-bb77-4d2a-899e-87c657f316d7-kube-api-access-5lhc8\") pod \"swift-operator-controller-manager-65596dbf77-hwbtq\" (UID: \"e47ed97b-bb77-4d2a-899e-87c657f316d7\") " pod="openstack-operators/swift-operator-controller-manager-65596dbf77-hwbtq" Jan 27 09:09:27 crc kubenswrapper[4985]: I0127 09:09:27.019095 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8r6jp\" (UniqueName: \"kubernetes.io/projected/12271ff4-e21a-43f4-8995-9e9257e11067-kube-api-access-8r6jp\") pod \"telemetry-operator-controller-manager-7db57dc8bf-bv8sl\" (UID: \"12271ff4-e21a-43f4-8995-9e9257e11067\") " pod="openstack-operators/telemetry-operator-controller-manager-7db57dc8bf-bv8sl" Jan 27 09:09:27 crc kubenswrapper[4985]: I0127 09:09:27.022740 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8k828\" (UniqueName: 
\"kubernetes.io/projected/c8880a39-7486-454b-aa9f-0cd2b1148d60-kube-api-access-8k828\") pod \"placement-operator-controller-manager-7748d79f84-fc5hb\" (UID: \"c8880a39-7486-454b-aa9f-0cd2b1148d60\") " pod="openstack-operators/placement-operator-controller-manager-7748d79f84-fc5hb" Jan 27 09:09:27 crc kubenswrapper[4985]: I0127 09:09:27.028594 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-f8d82"] Jan 27 09:09:27 crc kubenswrapper[4985]: I0127 09:09:27.029468 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-f8d82" Jan 27 09:09:27 crc kubenswrapper[4985]: I0127 09:09:27.032418 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-p79pt" Jan 27 09:09:27 crc kubenswrapper[4985]: I0127 09:09:27.052869 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-7748d79f84-fc5hb" Jan 27 09:09:27 crc kubenswrapper[4985]: I0127 09:09:27.055619 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-f8d82"] Jan 27 09:09:27 crc kubenswrapper[4985]: I0127 09:09:27.073542 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-65596dbf77-hwbtq" Jan 27 09:09:27 crc kubenswrapper[4985]: I0127 09:09:27.082907 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kdg7\" (UniqueName: \"kubernetes.io/projected/73122c6c-2af8-4661-b823-4525cb1e675e-kube-api-access-7kdg7\") pod \"openstack-operator-controller-manager-76958f4d87-lxlgc\" (UID: \"73122c6c-2af8-4661-b823-4525cb1e675e\") " pod="openstack-operators/openstack-operator-controller-manager-76958f4d87-lxlgc" Jan 27 09:09:27 crc kubenswrapper[4985]: I0127 09:09:27.082966 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgf24\" (UniqueName: \"kubernetes.io/projected/c59bd9e5-d547-4c13-ad44-36984b2c7b7e-kube-api-access-pgf24\") pod \"watcher-operator-controller-manager-6476466c7c-tc8kq\" (UID: \"c59bd9e5-d547-4c13-ad44-36984b2c7b7e\") " pod="openstack-operators/watcher-operator-controller-manager-6476466c7c-tc8kq" Jan 27 09:09:27 crc kubenswrapper[4985]: I0127 09:09:27.083042 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-md49t\" (UniqueName: \"kubernetes.io/projected/e5c46366-d781-46fe-a7c1-d43fd82c4259-kube-api-access-md49t\") pod \"test-operator-controller-manager-6c866cfdcb-phnxc\" (UID: \"e5c46366-d781-46fe-a7c1-d43fd82c4259\") " pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-phnxc" Jan 27 09:09:27 crc kubenswrapper[4985]: I0127 09:09:27.083092 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/73122c6c-2af8-4661-b823-4525cb1e675e-webhook-certs\") pod \"openstack-operator-controller-manager-76958f4d87-lxlgc\" (UID: \"73122c6c-2af8-4661-b823-4525cb1e675e\") " pod="openstack-operators/openstack-operator-controller-manager-76958f4d87-lxlgc" Jan 27 09:09:27 crc 
kubenswrapper[4985]: I0127 09:09:27.083114 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/73122c6c-2af8-4661-b823-4525cb1e675e-metrics-certs\") pod \"openstack-operator-controller-manager-76958f4d87-lxlgc\" (UID: \"73122c6c-2af8-4661-b823-4525cb1e675e\") " pod="openstack-operators/openstack-operator-controller-manager-76958f4d87-lxlgc" Jan 27 09:09:27 crc kubenswrapper[4985]: I0127 09:09:27.113765 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgf24\" (UniqueName: \"kubernetes.io/projected/c59bd9e5-d547-4c13-ad44-36984b2c7b7e-kube-api-access-pgf24\") pod \"watcher-operator-controller-manager-6476466c7c-tc8kq\" (UID: \"c59bd9e5-d547-4c13-ad44-36984b2c7b7e\") " pod="openstack-operators/watcher-operator-controller-manager-6476466c7c-tc8kq" Jan 27 09:09:27 crc kubenswrapper[4985]: I0127 09:09:27.116324 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-md49t\" (UniqueName: \"kubernetes.io/projected/e5c46366-d781-46fe-a7c1-d43fd82c4259-kube-api-access-md49t\") pod \"test-operator-controller-manager-6c866cfdcb-phnxc\" (UID: \"e5c46366-d781-46fe-a7c1-d43fd82c4259\") " pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-phnxc" Jan 27 09:09:27 crc kubenswrapper[4985]: I0127 09:09:27.157896 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7db57dc8bf-bv8sl" Jan 27 09:09:27 crc kubenswrapper[4985]: I0127 09:09:27.184669 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c1774a8d-aed6-4be4-80c3-1182fb0456d3-cert\") pod \"openstack-baremetal-operator-controller-manager-7bd95ffd6dj2sgf\" (UID: \"c1774a8d-aed6-4be4-80c3-1182fb0456d3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7bd95ffd6dj2sgf" Jan 27 09:09:27 crc kubenswrapper[4985]: I0127 09:09:27.184743 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/73122c6c-2af8-4661-b823-4525cb1e675e-webhook-certs\") pod \"openstack-operator-controller-manager-76958f4d87-lxlgc\" (UID: \"73122c6c-2af8-4661-b823-4525cb1e675e\") " pod="openstack-operators/openstack-operator-controller-manager-76958f4d87-lxlgc" Jan 27 09:09:27 crc kubenswrapper[4985]: I0127 09:09:27.184775 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bkq2\" (UniqueName: \"kubernetes.io/projected/ddc2f9b8-c7e5-4836-805f-e3cb7ef1ca2a-kube-api-access-9bkq2\") pod \"rabbitmq-cluster-operator-manager-668c99d594-f8d82\" (UID: \"ddc2f9b8-c7e5-4836-805f-e3cb7ef1ca2a\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-f8d82" Jan 27 09:09:27 crc kubenswrapper[4985]: I0127 09:09:27.184795 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/73122c6c-2af8-4661-b823-4525cb1e675e-metrics-certs\") pod \"openstack-operator-controller-manager-76958f4d87-lxlgc\" (UID: \"73122c6c-2af8-4661-b823-4525cb1e675e\") " pod="openstack-operators/openstack-operator-controller-manager-76958f4d87-lxlgc" Jan 27 09:09:27 crc kubenswrapper[4985]: I0127 09:09:27.184824 4985 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kdg7\" (UniqueName: \"kubernetes.io/projected/73122c6c-2af8-4661-b823-4525cb1e675e-kube-api-access-7kdg7\") pod \"openstack-operator-controller-manager-76958f4d87-lxlgc\" (UID: \"73122c6c-2af8-4661-b823-4525cb1e675e\") " pod="openstack-operators/openstack-operator-controller-manager-76958f4d87-lxlgc" Jan 27 09:09:27 crc kubenswrapper[4985]: E0127 09:09:27.185051 4985 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 09:09:27 crc kubenswrapper[4985]: E0127 09:09:27.185136 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/73122c6c-2af8-4661-b823-4525cb1e675e-webhook-certs podName:73122c6c-2af8-4661-b823-4525cb1e675e nodeName:}" failed. No retries permitted until 2026-01-27 09:09:27.685112456 +0000 UTC m=+951.976207307 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/73122c6c-2af8-4661-b823-4525cb1e675e-webhook-certs") pod "openstack-operator-controller-manager-76958f4d87-lxlgc" (UID: "73122c6c-2af8-4661-b823-4525cb1e675e") : secret "webhook-server-cert" not found Jan 27 09:09:27 crc kubenswrapper[4985]: E0127 09:09:27.185198 4985 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 09:09:27 crc kubenswrapper[4985]: E0127 09:09:27.185225 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1774a8d-aed6-4be4-80c3-1182fb0456d3-cert podName:c1774a8d-aed6-4be4-80c3-1182fb0456d3 nodeName:}" failed. No retries permitted until 2026-01-27 09:09:28.185216428 +0000 UTC m=+952.476311279 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c1774a8d-aed6-4be4-80c3-1182fb0456d3-cert") pod "openstack-baremetal-operator-controller-manager-7bd95ffd6dj2sgf" (UID: "c1774a8d-aed6-4be4-80c3-1182fb0456d3") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 09:09:27 crc kubenswrapper[4985]: E0127 09:09:27.185619 4985 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 09:09:27 crc kubenswrapper[4985]: E0127 09:09:27.185657 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/73122c6c-2af8-4661-b823-4525cb1e675e-metrics-certs podName:73122c6c-2af8-4661-b823-4525cb1e675e nodeName:}" failed. No retries permitted until 2026-01-27 09:09:27.68564592 +0000 UTC m=+951.976740761 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/73122c6c-2af8-4661-b823-4525cb1e675e-metrics-certs") pod "openstack-operator-controller-manager-76958f4d87-lxlgc" (UID: "73122c6c-2af8-4661-b823-4525cb1e675e") : secret "metrics-server-cert" not found Jan 27 09:09:27 crc kubenswrapper[4985]: I0127 09:09:27.228974 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kdg7\" (UniqueName: \"kubernetes.io/projected/73122c6c-2af8-4661-b823-4525cb1e675e-kube-api-access-7kdg7\") pod \"openstack-operator-controller-manager-76958f4d87-lxlgc\" (UID: \"73122c6c-2af8-4661-b823-4525cb1e675e\") " pod="openstack-operators/openstack-operator-controller-manager-76958f4d87-lxlgc" Jan 27 09:09:27 crc kubenswrapper[4985]: I0127 09:09:27.268974 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-phnxc" Jan 27 09:09:27 crc kubenswrapper[4985]: I0127 09:09:27.288306 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-75b8f798ff-xhnxv"] Jan 27 09:09:27 crc kubenswrapper[4985]: I0127 09:09:27.289045 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bkq2\" (UniqueName: \"kubernetes.io/projected/ddc2f9b8-c7e5-4836-805f-e3cb7ef1ca2a-kube-api-access-9bkq2\") pod \"rabbitmq-cluster-operator-manager-668c99d594-f8d82\" (UID: \"ddc2f9b8-c7e5-4836-805f-e3cb7ef1ca2a\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-f8d82" Jan 27 09:09:27 crc kubenswrapper[4985]: I0127 09:09:27.308196 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6476466c7c-tc8kq" Jan 27 09:09:27 crc kubenswrapper[4985]: I0127 09:09:27.320623 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bkq2\" (UniqueName: \"kubernetes.io/projected/ddc2f9b8-c7e5-4836-805f-e3cb7ef1ca2a-kube-api-access-9bkq2\") pod \"rabbitmq-cluster-operator-manager-668c99d594-f8d82\" (UID: \"ddc2f9b8-c7e5-4836-805f-e3cb7ef1ca2a\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-f8d82" Jan 27 09:09:27 crc kubenswrapper[4985]: I0127 09:09:27.356126 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-76d4d5b8f9-7r9cn"] Jan 27 09:09:27 crc kubenswrapper[4985]: I0127 09:09:27.360602 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-f8d82" Jan 27 09:09:27 crc kubenswrapper[4985]: I0127 09:09:27.626588 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-76d4d5b8f9-7r9cn" event={"ID":"9aed123f-6fb0-4c65-ac80-e926677d5ecc","Type":"ContainerStarted","Data":"844bca7357f65747448f522fa335c15f8dee235e18a3240f45755c11535f32d8"} Jan 27 09:09:27 crc kubenswrapper[4985]: I0127 09:09:27.631682 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-75b8f798ff-xhnxv" event={"ID":"451307f1-5d15-45d9-86c3-d45dc628d159","Type":"ContainerStarted","Data":"8dff7336e8a66e6dabfcb190c0555384a396306cf559f5d1da4eaff262cf68d1"} Jan 27 09:09:27 crc kubenswrapper[4985]: I0127 09:09:27.655376 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-78f8b7b89c-g9wnq"] Jan 27 09:09:27 crc kubenswrapper[4985]: I0127 09:09:27.668447 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78b8f8fd84-jjq84"] Jan 27 09:09:27 crc kubenswrapper[4985]: W0127 09:09:27.676145 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd79a41eb_b6b8_47c6_a14c_e2de4a932377.slice/crio-c07f6ed388a3abdd4ed069d557c1bf60f68f11a0c574aec9b7bf3ce085e7240e WatchSource:0}: Error finding container c07f6ed388a3abdd4ed069d557c1bf60f68f11a0c574aec9b7bf3ce085e7240e: Status 404 returned error can't find the container with id c07f6ed388a3abdd4ed069d557c1bf60f68f11a0c574aec9b7bf3ce085e7240e Jan 27 09:09:27 crc kubenswrapper[4985]: W0127 09:09:27.680705 4985 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5ecbd421_8017_44e8_bcf4_4416cb6cd7ad.slice/crio-581dffa599b55aa2d89125f04b18ece2ad71aa42438f19f6122d2b013dd52320 WatchSource:0}: Error finding container 581dffa599b55aa2d89125f04b18ece2ad71aa42438f19f6122d2b013dd52320: Status 404 returned error can't find the container with id 581dffa599b55aa2d89125f04b18ece2ad71aa42438f19f6122d2b013dd52320 Jan 27 09:09:27 crc kubenswrapper[4985]: I0127 09:09:27.683368 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-569695f6c5-4bggc"] Jan 27 09:09:27 crc kubenswrapper[4985]: I0127 09:09:27.690372 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-658dd65b86-lgpgn"] Jan 27 09:09:27 crc kubenswrapper[4985]: I0127 09:09:27.705357 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/457e511d-a1e8-453d-adfb-68177508f318-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-54sgw\" (UID: \"457e511d-a1e8-453d-adfb-68177508f318\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-54sgw" Jan 27 09:09:27 crc kubenswrapper[4985]: I0127 09:09:27.705450 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/73122c6c-2af8-4661-b823-4525cb1e675e-webhook-certs\") pod \"openstack-operator-controller-manager-76958f4d87-lxlgc\" (UID: \"73122c6c-2af8-4661-b823-4525cb1e675e\") " pod="openstack-operators/openstack-operator-controller-manager-76958f4d87-lxlgc" Jan 27 09:09:27 crc kubenswrapper[4985]: I0127 09:09:27.705472 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/73122c6c-2af8-4661-b823-4525cb1e675e-metrics-certs\") pod \"openstack-operator-controller-manager-76958f4d87-lxlgc\" (UID: 
\"73122c6c-2af8-4661-b823-4525cb1e675e\") " pod="openstack-operators/openstack-operator-controller-manager-76958f4d87-lxlgc" Jan 27 09:09:27 crc kubenswrapper[4985]: E0127 09:09:27.705712 4985 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 09:09:27 crc kubenswrapper[4985]: E0127 09:09:27.705782 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/457e511d-a1e8-453d-adfb-68177508f318-cert podName:457e511d-a1e8-453d-adfb-68177508f318 nodeName:}" failed. No retries permitted until 2026-01-27 09:09:29.705760594 +0000 UTC m=+953.996855435 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/457e511d-a1e8-453d-adfb-68177508f318-cert") pod "infra-operator-controller-manager-54ccf4f85d-54sgw" (UID: "457e511d-a1e8-453d-adfb-68177508f318") : secret "infra-operator-webhook-server-cert" not found Jan 27 09:09:27 crc kubenswrapper[4985]: E0127 09:09:27.705829 4985 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 09:09:27 crc kubenswrapper[4985]: E0127 09:09:27.705858 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/73122c6c-2af8-4661-b823-4525cb1e675e-webhook-certs podName:73122c6c-2af8-4661-b823-4525cb1e675e nodeName:}" failed. No retries permitted until 2026-01-27 09:09:28.705848588 +0000 UTC m=+952.996943429 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/73122c6c-2af8-4661-b823-4525cb1e675e-webhook-certs") pod "openstack-operator-controller-manager-76958f4d87-lxlgc" (UID: "73122c6c-2af8-4661-b823-4525cb1e675e") : secret "webhook-server-cert" not found Jan 27 09:09:27 crc kubenswrapper[4985]: E0127 09:09:27.705886 4985 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 09:09:27 crc kubenswrapper[4985]: E0127 09:09:27.705934 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/73122c6c-2af8-4661-b823-4525cb1e675e-metrics-certs podName:73122c6c-2af8-4661-b823-4525cb1e675e nodeName:}" failed. No retries permitted until 2026-01-27 09:09:28.70592088 +0000 UTC m=+952.997015721 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/73122c6c-2af8-4661-b823-4525cb1e675e-metrics-certs") pod "openstack-operator-controller-manager-76958f4d87-lxlgc" (UID: "73122c6c-2af8-4661-b823-4525cb1e675e") : secret "metrics-server-cert" not found Jan 27 09:09:27 crc kubenswrapper[4985]: I0127 09:09:27.710079 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-v7nj2"] Jan 27 09:09:27 crc kubenswrapper[4985]: W0127 09:09:27.726483 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd51fc084_83b4_4f09_baa5_59842d67853e.slice/crio-eb7cd08e9d8415fa20e5edce47c3aa3ac8b3de7ee0aa2c7b66c3f67293e4a83d WatchSource:0}: Error finding container eb7cd08e9d8415fa20e5edce47c3aa3ac8b3de7ee0aa2c7b66c3f67293e4a83d: Status 404 returned error can't find the container with id eb7cd08e9d8415fa20e5edce47c3aa3ac8b3de7ee0aa2c7b66c3f67293e4a83d Jan 27 09:09:27 crc kubenswrapper[4985]: I0127 09:09:27.730439 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/cinder-operator-controller-manager-5fdc687f5-gbl76"] Jan 27 09:09:27 crc kubenswrapper[4985]: I0127 09:09:27.878145 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-84d5bb46b-tgqr5"] Jan 27 09:09:27 crc kubenswrapper[4985]: I0127 09:09:27.882887 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-58865f87b4-v6n6c"] Jan 27 09:09:27 crc kubenswrapper[4985]: W0127 09:09:27.889680 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf3a7eea8_cdc7_40d1_a558_2ba1606c646a.slice/crio-040d09a2c59479476ec695dd1b78cfa6d68c2e1305ff18d87559674bd31059d3 WatchSource:0}: Error finding container 040d09a2c59479476ec695dd1b78cfa6d68c2e1305ff18d87559674bd31059d3: Status 404 returned error can't find the container with id 040d09a2c59479476ec695dd1b78cfa6d68c2e1305ff18d87559674bd31059d3 Jan 27 09:09:27 crc kubenswrapper[4985]: I0127 09:09:27.981355 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-74ffd97575-wvnwv"] Jan 27 09:09:28 crc kubenswrapper[4985]: I0127 09:09:28.000166 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7db57dc8bf-bv8sl"] Jan 27 09:09:28 crc kubenswrapper[4985]: E0127 09:09:28.001998 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/rh-ee-vfisarov/telemetry-operator@sha256:578ea6a6c68040cb54e0160462dc2b97226594621a5f441fa1d58f429cf0e010,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8r6jp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-7db57dc8bf-bv8sl_openstack-operators(12271ff4-e21a-43f4-8995-9e9257e11067): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 09:09:28 crc kubenswrapper[4985]: E0127 09:09:28.003574 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-7db57dc8bf-bv8sl" podUID="12271ff4-e21a-43f4-8995-9e9257e11067" Jan 27 09:09:28 crc kubenswrapper[4985]: E0127 09:09:28.009773 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/rh-ee-vfisarov/nova-operator@sha256:9c0272b9043057e7fd740843e11c951ce93d5169298ed91aa8a60a702649f7cf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hxcc7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-74ffd97575-wvnwv_openstack-operators(989c908a-4026-4ebd-9b57-0f9e2701b91a): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 09:09:28 crc kubenswrapper[4985]: E0127 09:09:28.011110 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/nova-operator-controller-manager-74ffd97575-wvnwv" podUID="989c908a-4026-4ebd-9b57-0f9e2701b91a" Jan 27 09:09:28 crc kubenswrapper[4985]: I0127 09:09:28.011419 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-7b88bfc995-khxgm"] Jan 27 09:09:28 crc kubenswrapper[4985]: W0127 09:09:28.012680 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode47ed97b_bb77_4d2a_899e_87c657f316d7.slice/crio-aafa8933ce21787518930a437c770e6428164804f2f7899ce31353d4e62f6ebe WatchSource:0}: Error finding container aafa8933ce21787518930a437c770e6428164804f2f7899ce31353d4e62f6ebe: Status 404 returned error can't find the container with id 
aafa8933ce21787518930a437c770e6428164804f2f7899ce31353d4e62f6ebe Jan 27 09:09:28 crc kubenswrapper[4985]: E0127 09:09:28.015475 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/rh-ee-vfisarov/swift-operator@sha256:018ae1352a061ad22a0d4ac5764eb7e19cf5a1d6c2e554f61ae0bd82ebe62e29,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5lhc8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-65596dbf77-hwbtq_openstack-operators(e47ed97b-bb77-4d2a-899e-87c657f316d7): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 09:09:28 crc kubenswrapper[4985]: E0127 09:09:28.016721 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-65596dbf77-hwbtq" podUID="e47ed97b-bb77-4d2a-899e-87c657f316d7" Jan 27 09:09:28 crc kubenswrapper[4985]: I0127 09:09:28.017290 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7bf4858b78-wcmtg"] Jan 27 09:09:28 crc kubenswrapper[4985]: I0127 09:09:28.031656 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bf6d4f946-6vbp6"] Jan 27 09:09:28 crc kubenswrapper[4985]: I0127 09:09:28.038149 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-65596dbf77-hwbtq"] Jan 27 09:09:28 crc kubenswrapper[4985]: I0127 09:09:28.208064 4985 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6476466c7c-tc8kq"] Jan 27 09:09:28 crc kubenswrapper[4985]: I0127 09:09:28.213037 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c1774a8d-aed6-4be4-80c3-1182fb0456d3-cert\") pod \"openstack-baremetal-operator-controller-manager-7bd95ffd6dj2sgf\" (UID: \"c1774a8d-aed6-4be4-80c3-1182fb0456d3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7bd95ffd6dj2sgf" Jan 27 09:09:28 crc kubenswrapper[4985]: E0127 09:09:28.213244 4985 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 09:09:28 crc kubenswrapper[4985]: E0127 09:09:28.213395 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1774a8d-aed6-4be4-80c3-1182fb0456d3-cert podName:c1774a8d-aed6-4be4-80c3-1182fb0456d3 nodeName:}" failed. No retries permitted until 2026-01-27 09:09:30.213379496 +0000 UTC m=+954.504474337 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c1774a8d-aed6-4be4-80c3-1182fb0456d3-cert") pod "openstack-baremetal-operator-controller-manager-7bd95ffd6dj2sgf" (UID: "c1774a8d-aed6-4be4-80c3-1182fb0456d3") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 09:09:28 crc kubenswrapper[4985]: I0127 09:09:28.217646 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-6c866cfdcb-phnxc"] Jan 27 09:09:28 crc kubenswrapper[4985]: I0127 09:09:28.223273 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-7748d79f84-fc5hb"] Jan 27 09:09:28 crc kubenswrapper[4985]: W0127 09:09:28.230321 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc8880a39_7486_454b_aa9f_0cd2b1148d60.slice/crio-117eca1c0973e77b02423858df756fca0ef195705a7bb12318c9642cc238e22f WatchSource:0}: Error finding container 117eca1c0973e77b02423858df756fca0ef195705a7bb12318c9642cc238e22f: Status 404 returned error can't find the container with id 117eca1c0973e77b02423858df756fca0ef195705a7bb12318c9642cc238e22f Jan 27 09:09:28 crc kubenswrapper[4985]: I0127 09:09:28.231160 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-f8d82"] Jan 27 09:09:28 crc kubenswrapper[4985]: W0127 09:09:28.234037 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc59bd9e5_d547_4c13_ad44_36984b2c7b7e.slice/crio-e421d6a6286e618fea2efedcd7be94307294d545386eb2de46cc12a0ef02b9c3 WatchSource:0}: Error finding container e421d6a6286e618fea2efedcd7be94307294d545386eb2de46cc12a0ef02b9c3: Status 404 returned error can't find the container with id e421d6a6286e618fea2efedcd7be94307294d545386eb2de46cc12a0ef02b9c3 Jan 27 09:09:28 
crc kubenswrapper[4985]: E0127 09:09:28.238016 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/rh-ee-vfisarov/watcher-operator@sha256:611e4fb8bf6cd263664ccb437637105fba633ba8f701c228fd525a7a7b3c8d74,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pgf24,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-6476466c7c-tc8kq_openstack-operators(c59bd9e5-d547-4c13-ad44-36984b2c7b7e): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 09:09:28 crc kubenswrapper[4985]: E0127 09:09:28.240080 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-6476466c7c-tc8kq" podUID="c59bd9e5-d547-4c13-ad44-36984b2c7b7e" Jan 27 09:09:28 crc kubenswrapper[4985]: W0127 09:09:28.243358 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podddc2f9b8_c7e5_4836_805f_e3cb7ef1ca2a.slice/crio-038579780d2d40a54c5c22af18dea005c9b8116723031e8761c51e6556f69933 WatchSource:0}: Error finding container 038579780d2d40a54c5c22af18dea005c9b8116723031e8761c51e6556f69933: Status 404 returned error can't find the container with id 038579780d2d40a54c5c22af18dea005c9b8116723031e8761c51e6556f69933 Jan 27 09:09:28 crc kubenswrapper[4985]: E0127 09:09:28.254061 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:4e3d234c1398039c2593611f7b0fd2a6b284cafb1563e6737876a265b9af42b6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-md49t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-6c866cfdcb-phnxc_openstack-operators(e5c46366-d781-46fe-a7c1-d43fd82c4259): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 09:09:28 crc kubenswrapper[4985]: E0127 09:09:28.257144 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-phnxc" podUID="e5c46366-d781-46fe-a7c1-d43fd82c4259" Jan 27 09:09:28 crc kubenswrapper[4985]: E0127 09:09:28.262897 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9bkq2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-f8d82_openstack-operators(ddc2f9b8-c7e5-4836-805f-e3cb7ef1ca2a): 
ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 09:09:28 crc kubenswrapper[4985]: E0127 09:09:28.265140 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-f8d82" podUID="ddc2f9b8-c7e5-4836-805f-e3cb7ef1ca2a" Jan 27 09:09:28 crc kubenswrapper[4985]: I0127 09:09:28.640673 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-58865f87b4-v6n6c" event={"ID":"150467a4-4f48-49a7-9356-05b11babc187","Type":"ContainerStarted","Data":"27823da444f81a90e45d9793a16c2092c0b6c3503c1b84eb91c198e2e0b69656"} Jan 27 09:09:28 crc kubenswrapper[4985]: I0127 09:09:28.642223 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-v7nj2" event={"ID":"d51fc084-83b4-4f09-baa5-59842d67853e","Type":"ContainerStarted","Data":"eb7cd08e9d8415fa20e5edce47c3aa3ac8b3de7ee0aa2c7b66c3f67293e4a83d"} Jan 27 09:09:28 crc kubenswrapper[4985]: I0127 09:09:28.645274 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-74ffd97575-wvnwv" event={"ID":"989c908a-4026-4ebd-9b57-0f9e2701b91a","Type":"ContainerStarted","Data":"3500f27aee9579e040070c3084b537a2823ca47e2523cd265a86e534fd0c5a43"} Jan 27 09:09:28 crc kubenswrapper[4985]: E0127 09:09:28.647948 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/rh-ee-vfisarov/nova-operator@sha256:9c0272b9043057e7fd740843e11c951ce93d5169298ed91aa8a60a702649f7cf\\\"\"" pod="openstack-operators/nova-operator-controller-manager-74ffd97575-wvnwv" podUID="989c908a-4026-4ebd-9b57-0f9e2701b91a" Jan 27 09:09:28 crc kubenswrapper[4985]: I0127 09:09:28.648359 4985 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack-operators/manila-operator-controller-manager-78b8f8fd84-jjq84" event={"ID":"d79a41eb-b6b8-47c6-a14c-e2de4a932377","Type":"ContainerStarted","Data":"c07f6ed388a3abdd4ed069d557c1bf60f68f11a0c574aec9b7bf3ce085e7240e"} Jan 27 09:09:28 crc kubenswrapper[4985]: I0127 09:09:28.651907 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6476466c7c-tc8kq" event={"ID":"c59bd9e5-d547-4c13-ad44-36984b2c7b7e","Type":"ContainerStarted","Data":"e421d6a6286e618fea2efedcd7be94307294d545386eb2de46cc12a0ef02b9c3"} Jan 27 09:09:28 crc kubenswrapper[4985]: E0127 09:09:28.653555 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/rh-ee-vfisarov/watcher-operator@sha256:611e4fb8bf6cd263664ccb437637105fba633ba8f701c228fd525a7a7b3c8d74\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-6476466c7c-tc8kq" podUID="c59bd9e5-d547-4c13-ad44-36984b2c7b7e" Jan 27 09:09:28 crc kubenswrapper[4985]: I0127 09:09:28.654384 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-f8d82" event={"ID":"ddc2f9b8-c7e5-4836-805f-e3cb7ef1ca2a","Type":"ContainerStarted","Data":"038579780d2d40a54c5c22af18dea005c9b8116723031e8761c51e6556f69933"} Jan 27 09:09:28 crc kubenswrapper[4985]: I0127 09:09:28.655722 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-khxgm" event={"ID":"71fcba69-40b1-4d11-912d-4c52b1a044fe","Type":"ContainerStarted","Data":"fc8f7844acb4c35d21811eaf46e8b37a0d5e71dfeb26f715fe6df1760896510d"} Jan 27 09:09:28 crc kubenswrapper[4985]: E0127 09:09:28.656366 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-f8d82" podUID="ddc2f9b8-c7e5-4836-805f-e3cb7ef1ca2a" Jan 27 09:09:28 crc kubenswrapper[4985]: I0127 09:09:28.656925 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7db57dc8bf-bv8sl" event={"ID":"12271ff4-e21a-43f4-8995-9e9257e11067","Type":"ContainerStarted","Data":"ea3f927b33afc1aa4cfd47d426c945623bcbf871c5d005c78d6ac45ca5ad5958"} Jan 27 09:09:28 crc kubenswrapper[4985]: I0127 09:09:28.658180 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-78f8b7b89c-g9wnq" event={"ID":"d0f751b3-5f7d-4756-b959-960ebca3eeaf","Type":"ContainerStarted","Data":"00012d27c102c5da29dfde43e7320e4a8a2c4d8464436b94989f62837f0c7b81"} Jan 27 09:09:28 crc kubenswrapper[4985]: E0127 09:09:28.658522 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/rh-ee-vfisarov/telemetry-operator@sha256:578ea6a6c68040cb54e0160462dc2b97226594621a5f441fa1d58f429cf0e010\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-7db57dc8bf-bv8sl" podUID="12271ff4-e21a-43f4-8995-9e9257e11067" Jan 27 09:09:28 crc kubenswrapper[4985]: I0127 09:09:28.659292 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-65596dbf77-hwbtq" event={"ID":"e47ed97b-bb77-4d2a-899e-87c657f316d7","Type":"ContainerStarted","Data":"aafa8933ce21787518930a437c770e6428164804f2f7899ce31353d4e62f6ebe"} Jan 27 09:09:28 crc kubenswrapper[4985]: I0127 09:09:28.660305 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-569695f6c5-4bggc" 
event={"ID":"5ecbd421-8017-44e8-bcf4-4416cb6cd7ad","Type":"ContainerStarted","Data":"581dffa599b55aa2d89125f04b18ece2ad71aa42438f19f6122d2b013dd52320"} Jan 27 09:09:28 crc kubenswrapper[4985]: E0127 09:09:28.661203 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/rh-ee-vfisarov/swift-operator@sha256:018ae1352a061ad22a0d4ac5764eb7e19cf5a1d6c2e554f61ae0bd82ebe62e29\\\"\"" pod="openstack-operators/swift-operator-controller-manager-65596dbf77-hwbtq" podUID="e47ed97b-bb77-4d2a-899e-87c657f316d7" Jan 27 09:09:28 crc kubenswrapper[4985]: I0127 09:09:28.667871 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5fdc687f5-gbl76" event={"ID":"62b33436-aec3-4e07-a880-cefb55ec47be","Type":"ContainerStarted","Data":"9cff3239642cfbcf6027ac79ada7a29e2aac5b8865942730733ff89c5ab8260a"} Jan 27 09:09:28 crc kubenswrapper[4985]: I0127 09:09:28.669906 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-lgpgn" event={"ID":"c9b11814-36a8-4736-b144-358d8f2c7268","Type":"ContainerStarted","Data":"9f25555462d20e5fd967ce876c9b23ec215f8105eb332f5f698a0ca06b002447"} Jan 27 09:09:28 crc kubenswrapper[4985]: I0127 09:09:28.674733 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-phnxc" event={"ID":"e5c46366-d781-46fe-a7c1-d43fd82c4259","Type":"ContainerStarted","Data":"62651ff05669e81934eb1e183a367a8b4d777181bd6a56d31b12805361334335"} Jan 27 09:09:28 crc kubenswrapper[4985]: E0127 09:09:28.684345 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:4e3d234c1398039c2593611f7b0fd2a6b284cafb1563e6737876a265b9af42b6\\\"\"" 
pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-phnxc" podUID="e5c46366-d781-46fe-a7c1-d43fd82c4259" Jan 27 09:09:28 crc kubenswrapper[4985]: I0127 09:09:28.696870 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7bf4858b78-wcmtg" event={"ID":"52d05040-2965-4f90-abc9-558c17a0e37d","Type":"ContainerStarted","Data":"129cd30b1a35b6bf9aabe3b8a67561b8bb223a5e24fd7e608035b148ac716b0a"} Jan 27 09:09:28 crc kubenswrapper[4985]: I0127 09:09:28.704209 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-84d5bb46b-tgqr5" event={"ID":"f3a7eea8-cdc7-40d1-a558-2ba1606c646a","Type":"ContainerStarted","Data":"040d09a2c59479476ec695dd1b78cfa6d68c2e1305ff18d87559674bd31059d3"} Jan 27 09:09:28 crc kubenswrapper[4985]: I0127 09:09:28.709099 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-7748d79f84-fc5hb" event={"ID":"c8880a39-7486-454b-aa9f-0cd2b1148d60","Type":"ContainerStarted","Data":"117eca1c0973e77b02423858df756fca0ef195705a7bb12318c9642cc238e22f"} Jan 27 09:09:28 crc kubenswrapper[4985]: I0127 09:09:28.711329 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-6vbp6" event={"ID":"86412e8a-4c97-42a8-a3f8-dca6204f426a","Type":"ContainerStarted","Data":"25cc1c0df2676256094c66a1becfa249718152605aa3b68178b0c84a816d5c63"} Jan 27 09:09:28 crc kubenswrapper[4985]: I0127 09:09:28.720482 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/73122c6c-2af8-4661-b823-4525cb1e675e-webhook-certs\") pod \"openstack-operator-controller-manager-76958f4d87-lxlgc\" (UID: \"73122c6c-2af8-4661-b823-4525cb1e675e\") " pod="openstack-operators/openstack-operator-controller-manager-76958f4d87-lxlgc" Jan 27 09:09:28 crc kubenswrapper[4985]: 
I0127 09:09:28.720612 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/73122c6c-2af8-4661-b823-4525cb1e675e-metrics-certs\") pod \"openstack-operator-controller-manager-76958f4d87-lxlgc\" (UID: \"73122c6c-2af8-4661-b823-4525cb1e675e\") " pod="openstack-operators/openstack-operator-controller-manager-76958f4d87-lxlgc" Jan 27 09:09:28 crc kubenswrapper[4985]: E0127 09:09:28.722106 4985 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 09:09:28 crc kubenswrapper[4985]: E0127 09:09:28.722173 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/73122c6c-2af8-4661-b823-4525cb1e675e-webhook-certs podName:73122c6c-2af8-4661-b823-4525cb1e675e nodeName:}" failed. No retries permitted until 2026-01-27 09:09:30.722142658 +0000 UTC m=+955.013237499 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/73122c6c-2af8-4661-b823-4525cb1e675e-webhook-certs") pod "openstack-operator-controller-manager-76958f4d87-lxlgc" (UID: "73122c6c-2af8-4661-b823-4525cb1e675e") : secret "webhook-server-cert" not found Jan 27 09:09:28 crc kubenswrapper[4985]: E0127 09:09:28.722738 4985 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 09:09:28 crc kubenswrapper[4985]: E0127 09:09:28.722826 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/73122c6c-2af8-4661-b823-4525cb1e675e-metrics-certs podName:73122c6c-2af8-4661-b823-4525cb1e675e nodeName:}" failed. No retries permitted until 2026-01-27 09:09:30.722807337 +0000 UTC m=+955.013902248 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/73122c6c-2af8-4661-b823-4525cb1e675e-metrics-certs") pod "openstack-operator-controller-manager-76958f4d87-lxlgc" (UID: "73122c6c-2af8-4661-b823-4525cb1e675e") : secret "metrics-server-cert" not found Jan 27 09:09:29 crc kubenswrapper[4985]: E0127 09:09:29.723598 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/rh-ee-vfisarov/nova-operator@sha256:9c0272b9043057e7fd740843e11c951ce93d5169298ed91aa8a60a702649f7cf\\\"\"" pod="openstack-operators/nova-operator-controller-manager-74ffd97575-wvnwv" podUID="989c908a-4026-4ebd-9b57-0f9e2701b91a" Jan 27 09:09:29 crc kubenswrapper[4985]: E0127 09:09:29.724015 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/rh-ee-vfisarov/swift-operator@sha256:018ae1352a061ad22a0d4ac5764eb7e19cf5a1d6c2e554f61ae0bd82ebe62e29\\\"\"" pod="openstack-operators/swift-operator-controller-manager-65596dbf77-hwbtq" podUID="e47ed97b-bb77-4d2a-899e-87c657f316d7" Jan 27 09:09:29 crc kubenswrapper[4985]: E0127 09:09:29.730803 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/rh-ee-vfisarov/telemetry-operator@sha256:578ea6a6c68040cb54e0160462dc2b97226594621a5f441fa1d58f429cf0e010\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-7db57dc8bf-bv8sl" podUID="12271ff4-e21a-43f4-8995-9e9257e11067" Jan 27 09:09:29 crc kubenswrapper[4985]: E0127 09:09:29.730832 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-f8d82" podUID="ddc2f9b8-c7e5-4836-805f-e3cb7ef1ca2a" Jan 27 09:09:29 crc kubenswrapper[4985]: E0127 09:09:29.730847 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:4e3d234c1398039c2593611f7b0fd2a6b284cafb1563e6737876a265b9af42b6\\\"\"" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-phnxc" podUID="e5c46366-d781-46fe-a7c1-d43fd82c4259" Jan 27 09:09:29 crc kubenswrapper[4985]: E0127 09:09:29.730883 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/rh-ee-vfisarov/watcher-operator@sha256:611e4fb8bf6cd263664ccb437637105fba633ba8f701c228fd525a7a7b3c8d74\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-6476466c7c-tc8kq" podUID="c59bd9e5-d547-4c13-ad44-36984b2c7b7e" Jan 27 09:09:29 crc kubenswrapper[4985]: I0127 09:09:29.737747 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/457e511d-a1e8-453d-adfb-68177508f318-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-54sgw\" (UID: \"457e511d-a1e8-453d-adfb-68177508f318\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-54sgw" Jan 27 09:09:29 crc kubenswrapper[4985]: E0127 09:09:29.738141 4985 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 09:09:29 crc kubenswrapper[4985]: E0127 09:09:29.738200 4985 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/457e511d-a1e8-453d-adfb-68177508f318-cert podName:457e511d-a1e8-453d-adfb-68177508f318 nodeName:}" failed. No retries permitted until 2026-01-27 09:09:33.738182753 +0000 UTC m=+958.029277604 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/457e511d-a1e8-453d-adfb-68177508f318-cert") pod "infra-operator-controller-manager-54ccf4f85d-54sgw" (UID: "457e511d-a1e8-453d-adfb-68177508f318") : secret "infra-operator-webhook-server-cert" not found Jan 27 09:09:30 crc kubenswrapper[4985]: I0127 09:09:30.250629 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c1774a8d-aed6-4be4-80c3-1182fb0456d3-cert\") pod \"openstack-baremetal-operator-controller-manager-7bd95ffd6dj2sgf\" (UID: \"c1774a8d-aed6-4be4-80c3-1182fb0456d3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7bd95ffd6dj2sgf" Jan 27 09:09:30 crc kubenswrapper[4985]: E0127 09:09:30.250783 4985 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 09:09:30 crc kubenswrapper[4985]: E0127 09:09:30.250857 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1774a8d-aed6-4be4-80c3-1182fb0456d3-cert podName:c1774a8d-aed6-4be4-80c3-1182fb0456d3 nodeName:}" failed. No retries permitted until 2026-01-27 09:09:34.250839793 +0000 UTC m=+958.541934634 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c1774a8d-aed6-4be4-80c3-1182fb0456d3-cert") pod "openstack-baremetal-operator-controller-manager-7bd95ffd6dj2sgf" (UID: "c1774a8d-aed6-4be4-80c3-1182fb0456d3") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 09:09:30 crc kubenswrapper[4985]: I0127 09:09:30.759190 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/73122c6c-2af8-4661-b823-4525cb1e675e-webhook-certs\") pod \"openstack-operator-controller-manager-76958f4d87-lxlgc\" (UID: \"73122c6c-2af8-4661-b823-4525cb1e675e\") " pod="openstack-operators/openstack-operator-controller-manager-76958f4d87-lxlgc" Jan 27 09:09:30 crc kubenswrapper[4985]: I0127 09:09:30.759262 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/73122c6c-2af8-4661-b823-4525cb1e675e-metrics-certs\") pod \"openstack-operator-controller-manager-76958f4d87-lxlgc\" (UID: \"73122c6c-2af8-4661-b823-4525cb1e675e\") " pod="openstack-operators/openstack-operator-controller-manager-76958f4d87-lxlgc" Jan 27 09:09:30 crc kubenswrapper[4985]: E0127 09:09:30.759368 4985 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 09:09:30 crc kubenswrapper[4985]: E0127 09:09:30.759435 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/73122c6c-2af8-4661-b823-4525cb1e675e-webhook-certs podName:73122c6c-2af8-4661-b823-4525cb1e675e nodeName:}" failed. No retries permitted until 2026-01-27 09:09:34.7594163 +0000 UTC m=+959.050511141 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/73122c6c-2af8-4661-b823-4525cb1e675e-webhook-certs") pod "openstack-operator-controller-manager-76958f4d87-lxlgc" (UID: "73122c6c-2af8-4661-b823-4525cb1e675e") : secret "webhook-server-cert" not found Jan 27 09:09:30 crc kubenswrapper[4985]: E0127 09:09:30.759442 4985 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 09:09:30 crc kubenswrapper[4985]: E0127 09:09:30.759504 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/73122c6c-2af8-4661-b823-4525cb1e675e-metrics-certs podName:73122c6c-2af8-4661-b823-4525cb1e675e nodeName:}" failed. No retries permitted until 2026-01-27 09:09:34.759486532 +0000 UTC m=+959.050581383 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/73122c6c-2af8-4661-b823-4525cb1e675e-metrics-certs") pod "openstack-operator-controller-manager-76958f4d87-lxlgc" (UID: "73122c6c-2af8-4661-b823-4525cb1e675e") : secret "metrics-server-cert" not found Jan 27 09:09:33 crc kubenswrapper[4985]: I0127 09:09:33.803599 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/457e511d-a1e8-453d-adfb-68177508f318-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-54sgw\" (UID: \"457e511d-a1e8-453d-adfb-68177508f318\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-54sgw" Jan 27 09:09:33 crc kubenswrapper[4985]: E0127 09:09:33.803750 4985 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 09:09:33 crc kubenswrapper[4985]: E0127 09:09:33.804061 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/457e511d-a1e8-453d-adfb-68177508f318-cert 
podName:457e511d-a1e8-453d-adfb-68177508f318 nodeName:}" failed. No retries permitted until 2026-01-27 09:09:41.804043057 +0000 UTC m=+966.095137898 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/457e511d-a1e8-453d-adfb-68177508f318-cert") pod "infra-operator-controller-manager-54ccf4f85d-54sgw" (UID: "457e511d-a1e8-453d-adfb-68177508f318") : secret "infra-operator-webhook-server-cert" not found Jan 27 09:09:34 crc kubenswrapper[4985]: I0127 09:09:34.311338 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c1774a8d-aed6-4be4-80c3-1182fb0456d3-cert\") pod \"openstack-baremetal-operator-controller-manager-7bd95ffd6dj2sgf\" (UID: \"c1774a8d-aed6-4be4-80c3-1182fb0456d3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7bd95ffd6dj2sgf" Jan 27 09:09:34 crc kubenswrapper[4985]: E0127 09:09:34.311616 4985 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 09:09:34 crc kubenswrapper[4985]: E0127 09:09:34.311755 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1774a8d-aed6-4be4-80c3-1182fb0456d3-cert podName:c1774a8d-aed6-4be4-80c3-1182fb0456d3 nodeName:}" failed. No retries permitted until 2026-01-27 09:09:42.31172083 +0000 UTC m=+966.602815671 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c1774a8d-aed6-4be4-80c3-1182fb0456d3-cert") pod "openstack-baremetal-operator-controller-manager-7bd95ffd6dj2sgf" (UID: "c1774a8d-aed6-4be4-80c3-1182fb0456d3") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 09:09:34 crc kubenswrapper[4985]: I0127 09:09:34.818349 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/73122c6c-2af8-4661-b823-4525cb1e675e-webhook-certs\") pod \"openstack-operator-controller-manager-76958f4d87-lxlgc\" (UID: \"73122c6c-2af8-4661-b823-4525cb1e675e\") " pod="openstack-operators/openstack-operator-controller-manager-76958f4d87-lxlgc" Jan 27 09:09:34 crc kubenswrapper[4985]: I0127 09:09:34.818420 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/73122c6c-2af8-4661-b823-4525cb1e675e-metrics-certs\") pod \"openstack-operator-controller-manager-76958f4d87-lxlgc\" (UID: \"73122c6c-2af8-4661-b823-4525cb1e675e\") " pod="openstack-operators/openstack-operator-controller-manager-76958f4d87-lxlgc" Jan 27 09:09:34 crc kubenswrapper[4985]: E0127 09:09:34.818639 4985 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 09:09:34 crc kubenswrapper[4985]: E0127 09:09:34.819556 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/73122c6c-2af8-4661-b823-4525cb1e675e-webhook-certs podName:73122c6c-2af8-4661-b823-4525cb1e675e nodeName:}" failed. No retries permitted until 2026-01-27 09:09:42.819530487 +0000 UTC m=+967.110625358 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/73122c6c-2af8-4661-b823-4525cb1e675e-webhook-certs") pod "openstack-operator-controller-manager-76958f4d87-lxlgc" (UID: "73122c6c-2af8-4661-b823-4525cb1e675e") : secret "webhook-server-cert" not found Jan 27 09:09:34 crc kubenswrapper[4985]: E0127 09:09:34.818694 4985 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 09:09:34 crc kubenswrapper[4985]: E0127 09:09:34.819636 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/73122c6c-2af8-4661-b823-4525cb1e675e-metrics-certs podName:73122c6c-2af8-4661-b823-4525cb1e675e nodeName:}" failed. No retries permitted until 2026-01-27 09:09:42.819618879 +0000 UTC m=+967.110713720 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/73122c6c-2af8-4661-b823-4525cb1e675e-metrics-certs") pod "openstack-operator-controller-manager-76958f4d87-lxlgc" (UID: "73122c6c-2af8-4661-b823-4525cb1e675e") : secret "metrics-server-cert" not found Jan 27 09:09:39 crc kubenswrapper[4985]: E0127 09:09:39.666542 4985 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/rh-ee-vfisarov/cinder-operator@sha256:4432c6643faeccbbd949b4ba54d7bc7efbe39a255e57300af67b51a2b03eb5e8" Jan 27 09:09:39 crc kubenswrapper[4985]: E0127 09:09:39.667547 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/rh-ee-vfisarov/cinder-operator@sha256:4432c6643faeccbbd949b4ba54d7bc7efbe39a255e57300af67b51a2b03eb5e8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k86rh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-5fdc687f5-gbl76_openstack-operators(62b33436-aec3-4e07-a880-cefb55ec47be): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 09:09:39 crc kubenswrapper[4985]: E0127 09:09:39.668649 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-5fdc687f5-gbl76" podUID="62b33436-aec3-4e07-a880-cefb55ec47be" Jan 27 09:09:39 crc kubenswrapper[4985]: E0127 09:09:39.793617 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/rh-ee-vfisarov/cinder-operator@sha256:4432c6643faeccbbd949b4ba54d7bc7efbe39a255e57300af67b51a2b03eb5e8\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-5fdc687f5-gbl76" podUID="62b33436-aec3-4e07-a880-cefb55ec47be" Jan 27 09:09:40 crc kubenswrapper[4985]: E0127 09:09:40.253353 4985 log.go:32] "PullImage from image service failed" err="rpc 
error: code = Canceled desc = copying config: context canceled" image="quay.io/rh-ee-vfisarov/keystone-operator@sha256:3f07fd90b18820601ae78f45a9fbef53bf9e3ed131d5cfa1d424ae0145862dd6" Jan 27 09:09:40 crc kubenswrapper[4985]: E0127 09:09:40.253944 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/rh-ee-vfisarov/keystone-operator@sha256:3f07fd90b18820601ae78f45a9fbef53bf9e3ed131d5cfa1d424ae0145862dd6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cxr6z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-78f8b7b89c-g9wnq_openstack-operators(d0f751b3-5f7d-4756-b959-960ebca3eeaf): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 09:09:40 crc kubenswrapper[4985]: E0127 09:09:40.255478 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-78f8b7b89c-g9wnq" podUID="d0f751b3-5f7d-4756-b959-960ebca3eeaf" Jan 27 09:09:40 crc kubenswrapper[4985]: I0127 09:09:40.800267 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-75b8f798ff-xhnxv" event={"ID":"451307f1-5d15-45d9-86c3-d45dc628d159","Type":"ContainerStarted","Data":"72d1dcaff7878cf7d4dd44d4a2d7e77de587843610b238e8edca63403a0c4978"} Jan 27 09:09:40 crc kubenswrapper[4985]: I0127 09:09:40.801056 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-75b8f798ff-xhnxv" Jan 27 09:09:40 crc 
kubenswrapper[4985]: I0127 09:09:40.802541 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-58865f87b4-v6n6c" event={"ID":"150467a4-4f48-49a7-9356-05b11babc187","Type":"ContainerStarted","Data":"c5852858a27021fcec4c7f1030a3e446860b58fc83c6696a8cddecdab69f907f"} Jan 27 09:09:40 crc kubenswrapper[4985]: I0127 09:09:40.802576 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-58865f87b4-v6n6c" Jan 27 09:09:40 crc kubenswrapper[4985]: I0127 09:09:40.804152 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-76d4d5b8f9-7r9cn" event={"ID":"9aed123f-6fb0-4c65-ac80-e926677d5ecc","Type":"ContainerStarted","Data":"915ffcf8d8c1840c1b57d484c57f115e5e3aba85b62b2f9909416de4126e8b9f"} Jan 27 09:09:40 crc kubenswrapper[4985]: I0127 09:09:40.804475 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-76d4d5b8f9-7r9cn" Jan 27 09:09:40 crc kubenswrapper[4985]: I0127 09:09:40.806009 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-569695f6c5-4bggc" event={"ID":"5ecbd421-8017-44e8-bcf4-4416cb6cd7ad","Type":"ContainerStarted","Data":"e47d77c1b8c420dd0fdd50264c41186b633995bab1827c157167e44980f71450"} Jan 27 09:09:40 crc kubenswrapper[4985]: I0127 09:09:40.806343 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-569695f6c5-4bggc" Jan 27 09:09:40 crc kubenswrapper[4985]: I0127 09:09:40.807764 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-7748d79f84-fc5hb" 
event={"ID":"c8880a39-7486-454b-aa9f-0cd2b1148d60","Type":"ContainerStarted","Data":"e8fafc678658cb154aab82296e3742313e7f921a8c82e4363573825f63279370"} Jan 27 09:09:40 crc kubenswrapper[4985]: I0127 09:09:40.807869 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-7748d79f84-fc5hb" Jan 27 09:09:40 crc kubenswrapper[4985]: I0127 09:09:40.809215 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-6vbp6" event={"ID":"86412e8a-4c97-42a8-a3f8-dca6204f426a","Type":"ContainerStarted","Data":"52bd1dbb38472d7f9d640d6f49950068acae34d66f125fc70e67c7df0d1ea3a1"} Jan 27 09:09:40 crc kubenswrapper[4985]: I0127 09:09:40.809261 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-6vbp6" Jan 27 09:09:40 crc kubenswrapper[4985]: I0127 09:09:40.810762 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-lgpgn" event={"ID":"c9b11814-36a8-4736-b144-358d8f2c7268","Type":"ContainerStarted","Data":"4d74961dfeed874b29b7f925df3f2fa17c8a8ce044d7a93349442f630a28830f"} Jan 27 09:09:40 crc kubenswrapper[4985]: I0127 09:09:40.828245 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7bf4858b78-wcmtg" event={"ID":"52d05040-2965-4f90-abc9-558c17a0e37d","Type":"ContainerStarted","Data":"2339a3d8a67ebc4c93fbadbecd1f47b2ffc3d26fc636295cbe13db8b58c5dd21"} Jan 27 09:09:40 crc kubenswrapper[4985]: I0127 09:09:40.828704 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-7bf4858b78-wcmtg" Jan 27 09:09:40 crc kubenswrapper[4985]: I0127 09:09:40.839318 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-v7nj2" event={"ID":"d51fc084-83b4-4f09-baa5-59842d67853e","Type":"ContainerStarted","Data":"e5181e9e57d6871d432839a870b5943f3a7c0e16aece7ee66f38bf801cf368a6"} Jan 27 09:09:40 crc kubenswrapper[4985]: I0127 09:09:40.840672 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-v7nj2" Jan 27 09:09:40 crc kubenswrapper[4985]: I0127 09:09:40.848984 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-84d5bb46b-tgqr5" event={"ID":"f3a7eea8-cdc7-40d1-a558-2ba1606c646a","Type":"ContainerStarted","Data":"77bcee17155886d5a30ff6faef9103f1bc561c8cf5f7cd7f9238a3b801b1ba3b"} Jan 27 09:09:40 crc kubenswrapper[4985]: I0127 09:09:40.850830 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-84d5bb46b-tgqr5" Jan 27 09:09:40 crc kubenswrapper[4985]: I0127 09:09:40.867585 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-khxgm" event={"ID":"71fcba69-40b1-4d11-912d-4c52b1a044fe","Type":"ContainerStarted","Data":"1a3a668f2a70ab92ee057472ba94e1628c8f6f935b8fba64d46bd040f63aea79"} Jan 27 09:09:40 crc kubenswrapper[4985]: I0127 09:09:40.868041 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-khxgm" Jan 27 09:09:40 crc kubenswrapper[4985]: I0127 09:09:40.871664 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78b8f8fd84-jjq84" event={"ID":"d79a41eb-b6b8-47c6-a14c-e2de4a932377","Type":"ContainerStarted","Data":"933b73086530c6b86f743c26080b2ce5c53aa24249be17af174684d761ce7daf"} Jan 27 09:09:40 crc kubenswrapper[4985]: I0127 09:09:40.871951 4985 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-78b8f8fd84-jjq84" Jan 27 09:09:40 crc kubenswrapper[4985]: E0127 09:09:40.873004 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/rh-ee-vfisarov/keystone-operator@sha256:3f07fd90b18820601ae78f45a9fbef53bf9e3ed131d5cfa1d424ae0145862dd6\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-78f8b7b89c-g9wnq" podUID="d0f751b3-5f7d-4756-b959-960ebca3eeaf" Jan 27 09:09:40 crc kubenswrapper[4985]: I0127 09:09:40.886268 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-75b8f798ff-xhnxv" podStartSLOduration=3.069296844 podStartE2EDuration="15.88623329s" podCreationTimestamp="2026-01-27 09:09:25 +0000 UTC" firstStartedPulling="2026-01-27 09:09:27.434646909 +0000 UTC m=+951.725741750" lastFinishedPulling="2026-01-27 09:09:40.251583355 +0000 UTC m=+964.542678196" observedRunningTime="2026-01-27 09:09:40.848754829 +0000 UTC m=+965.139849670" watchObservedRunningTime="2026-01-27 09:09:40.88623329 +0000 UTC m=+965.177328151" Jan 27 09:09:40 crc kubenswrapper[4985]: I0127 09:09:40.921165 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-58865f87b4-v6n6c" podStartSLOduration=3.5728094009999998 podStartE2EDuration="15.92114289s" podCreationTimestamp="2026-01-27 09:09:25 +0000 UTC" firstStartedPulling="2026-01-27 09:09:27.895573655 +0000 UTC m=+952.186668496" lastFinishedPulling="2026-01-27 09:09:40.243907144 +0000 UTC m=+964.535001985" observedRunningTime="2026-01-27 09:09:40.885854579 +0000 UTC m=+965.176949420" watchObservedRunningTime="2026-01-27 09:09:40.92114289 +0000 UTC m=+965.212237731" Jan 27 09:09:40 crc kubenswrapper[4985]: I0127 09:09:40.925835 4985 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-7bf4858b78-wcmtg" podStartSLOduration=2.615333851 podStartE2EDuration="14.925809879s" podCreationTimestamp="2026-01-27 09:09:26 +0000 UTC" firstStartedPulling="2026-01-27 09:09:27.988408298 +0000 UTC m=+952.279503129" lastFinishedPulling="2026-01-27 09:09:40.298884316 +0000 UTC m=+964.589979157" observedRunningTime="2026-01-27 09:09:40.919528385 +0000 UTC m=+965.210623246" watchObservedRunningTime="2026-01-27 09:09:40.925809879 +0000 UTC m=+965.216904720" Jan 27 09:09:40 crc kubenswrapper[4985]: I0127 09:09:40.963815 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-lgpgn" podStartSLOduration=3.412103151 podStartE2EDuration="15.963786223s" podCreationTimestamp="2026-01-27 09:09:25 +0000 UTC" firstStartedPulling="2026-01-27 09:09:27.699970465 +0000 UTC m=+951.991065316" lastFinishedPulling="2026-01-27 09:09:40.251653547 +0000 UTC m=+964.542748388" observedRunningTime="2026-01-27 09:09:40.9553193 +0000 UTC m=+965.246414141" watchObservedRunningTime="2026-01-27 09:09:40.963786223 +0000 UTC m=+965.254881064" Jan 27 09:09:41 crc kubenswrapper[4985]: I0127 09:09:41.061432 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-76d4d5b8f9-7r9cn" podStartSLOduration=3.194193858 podStartE2EDuration="16.061403218s" podCreationTimestamp="2026-01-27 09:09:25 +0000 UTC" firstStartedPulling="2026-01-27 09:09:27.376504289 +0000 UTC m=+951.667599130" lastFinishedPulling="2026-01-27 09:09:40.243713649 +0000 UTC m=+964.534808490" observedRunningTime="2026-01-27 09:09:41.060125613 +0000 UTC m=+965.351220454" watchObservedRunningTime="2026-01-27 09:09:41.061403218 +0000 UTC m=+965.352498059" Jan 27 09:09:41 crc kubenswrapper[4985]: I0127 09:09:41.064756 4985 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-v7nj2" podStartSLOduration=3.5327119590000002 podStartE2EDuration="16.064741809s" podCreationTimestamp="2026-01-27 09:09:25 +0000 UTC" firstStartedPulling="2026-01-27 09:09:27.735060371 +0000 UTC m=+952.026155212" lastFinishedPulling="2026-01-27 09:09:40.267090221 +0000 UTC m=+964.558185062" observedRunningTime="2026-01-27 09:09:41.032299918 +0000 UTC m=+965.323394759" watchObservedRunningTime="2026-01-27 09:09:41.064741809 +0000 UTC m=+965.355836650" Jan 27 09:09:41 crc kubenswrapper[4985]: I0127 09:09:41.140175 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-569695f6c5-4bggc" podStartSLOduration=2.588784681 podStartE2EDuration="15.140143473s" podCreationTimestamp="2026-01-27 09:09:26 +0000 UTC" firstStartedPulling="2026-01-27 09:09:27.69249067 +0000 UTC m=+951.983585511" lastFinishedPulling="2026-01-27 09:09:40.243849462 +0000 UTC m=+964.534944303" observedRunningTime="2026-01-27 09:09:41.132645477 +0000 UTC m=+965.423740328" watchObservedRunningTime="2026-01-27 09:09:41.140143473 +0000 UTC m=+965.431238314" Jan 27 09:09:41 crc kubenswrapper[4985]: I0127 09:09:41.159351 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-6vbp6" podStartSLOduration=2.890219461 podStartE2EDuration="15.159326561s" podCreationTimestamp="2026-01-27 09:09:26 +0000 UTC" firstStartedPulling="2026-01-27 09:09:27.988613334 +0000 UTC m=+952.279708175" lastFinishedPulling="2026-01-27 09:09:40.257720434 +0000 UTC m=+964.548815275" observedRunningTime="2026-01-27 09:09:41.156718049 +0000 UTC m=+965.447812890" watchObservedRunningTime="2026-01-27 09:09:41.159326561 +0000 UTC m=+965.450421402" Jan 27 09:09:41 crc kubenswrapper[4985]: I0127 09:09:41.195644 4985 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-7748d79f84-fc5hb" podStartSLOduration=3.121234265 podStartE2EDuration="15.195627299s" podCreationTimestamp="2026-01-27 09:09:26 +0000 UTC" firstStartedPulling="2026-01-27 09:09:28.233218702 +0000 UTC m=+952.524313563" lastFinishedPulling="2026-01-27 09:09:40.307611756 +0000 UTC m=+964.598706597" observedRunningTime="2026-01-27 09:09:41.193996874 +0000 UTC m=+965.485091725" watchObservedRunningTime="2026-01-27 09:09:41.195627299 +0000 UTC m=+965.486722140" Jan 27 09:09:41 crc kubenswrapper[4985]: I0127 09:09:41.223763 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-84d5bb46b-tgqr5" podStartSLOduration=3.872970607 podStartE2EDuration="16.223737133s" podCreationTimestamp="2026-01-27 09:09:25 +0000 UTC" firstStartedPulling="2026-01-27 09:09:27.895448501 +0000 UTC m=+952.186543342" lastFinishedPulling="2026-01-27 09:09:40.246215027 +0000 UTC m=+964.537309868" observedRunningTime="2026-01-27 09:09:41.220431722 +0000 UTC m=+965.511526563" watchObservedRunningTime="2026-01-27 09:09:41.223737133 +0000 UTC m=+965.514831974" Jan 27 09:09:41 crc kubenswrapper[4985]: I0127 09:09:41.241301 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-khxgm" podStartSLOduration=2.980064142 podStartE2EDuration="15.241272235s" podCreationTimestamp="2026-01-27 09:09:26 +0000 UTC" firstStartedPulling="2026-01-27 09:09:27.990333221 +0000 UTC m=+952.281428062" lastFinishedPulling="2026-01-27 09:09:40.251541314 +0000 UTC m=+964.542636155" observedRunningTime="2026-01-27 09:09:41.237722277 +0000 UTC m=+965.528817118" watchObservedRunningTime="2026-01-27 09:09:41.241272235 +0000 UTC m=+965.532367076" Jan 27 09:09:41 crc kubenswrapper[4985]: I0127 09:09:41.806304 4985 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/457e511d-a1e8-453d-adfb-68177508f318-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-54sgw\" (UID: \"457e511d-a1e8-453d-adfb-68177508f318\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-54sgw" Jan 27 09:09:41 crc kubenswrapper[4985]: E0127 09:09:41.806442 4985 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 09:09:41 crc kubenswrapper[4985]: E0127 09:09:41.807451 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/457e511d-a1e8-453d-adfb-68177508f318-cert podName:457e511d-a1e8-453d-adfb-68177508f318 nodeName:}" failed. No retries permitted until 2026-01-27 09:09:57.807433876 +0000 UTC m=+982.098528717 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/457e511d-a1e8-453d-adfb-68177508f318-cert") pod "infra-operator-controller-manager-54ccf4f85d-54sgw" (UID: "457e511d-a1e8-453d-adfb-68177508f318") : secret "infra-operator-webhook-server-cert" not found Jan 27 09:09:41 crc kubenswrapper[4985]: I0127 09:09:41.879581 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-lgpgn" Jan 27 09:09:42 crc kubenswrapper[4985]: I0127 09:09:42.316338 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c1774a8d-aed6-4be4-80c3-1182fb0456d3-cert\") pod \"openstack-baremetal-operator-controller-manager-7bd95ffd6dj2sgf\" (UID: \"c1774a8d-aed6-4be4-80c3-1182fb0456d3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7bd95ffd6dj2sgf" Jan 27 09:09:42 crc kubenswrapper[4985]: E0127 09:09:42.316576 4985 secret.go:188] Couldn't get secret 
openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 09:09:42 crc kubenswrapper[4985]: E0127 09:09:42.316661 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1774a8d-aed6-4be4-80c3-1182fb0456d3-cert podName:c1774a8d-aed6-4be4-80c3-1182fb0456d3 nodeName:}" failed. No retries permitted until 2026-01-27 09:09:58.316641461 +0000 UTC m=+982.607736302 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c1774a8d-aed6-4be4-80c3-1182fb0456d3-cert") pod "openstack-baremetal-operator-controller-manager-7bd95ffd6dj2sgf" (UID: "c1774a8d-aed6-4be4-80c3-1182fb0456d3") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 09:09:42 crc kubenswrapper[4985]: I0127 09:09:42.822911 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/73122c6c-2af8-4661-b823-4525cb1e675e-webhook-certs\") pod \"openstack-operator-controller-manager-76958f4d87-lxlgc\" (UID: \"73122c6c-2af8-4661-b823-4525cb1e675e\") " pod="openstack-operators/openstack-operator-controller-manager-76958f4d87-lxlgc" Jan 27 09:09:42 crc kubenswrapper[4985]: I0127 09:09:42.822973 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/73122c6c-2af8-4661-b823-4525cb1e675e-metrics-certs\") pod \"openstack-operator-controller-manager-76958f4d87-lxlgc\" (UID: \"73122c6c-2af8-4661-b823-4525cb1e675e\") " pod="openstack-operators/openstack-operator-controller-manager-76958f4d87-lxlgc" Jan 27 09:09:42 crc kubenswrapper[4985]: E0127 09:09:42.823135 4985 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 09:09:42 crc kubenswrapper[4985]: E0127 09:09:42.823200 4985 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/secret/73122c6c-2af8-4661-b823-4525cb1e675e-metrics-certs podName:73122c6c-2af8-4661-b823-4525cb1e675e nodeName:}" failed. No retries permitted until 2026-01-27 09:09:58.823180392 +0000 UTC m=+983.114275233 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/73122c6c-2af8-4661-b823-4525cb1e675e-metrics-certs") pod "openstack-operator-controller-manager-76958f4d87-lxlgc" (UID: "73122c6c-2af8-4661-b823-4525cb1e675e") : secret "metrics-server-cert" not found Jan 27 09:09:42 crc kubenswrapper[4985]: E0127 09:09:42.823197 4985 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 09:09:42 crc kubenswrapper[4985]: E0127 09:09:42.823298 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/73122c6c-2af8-4661-b823-4525cb1e675e-webhook-certs podName:73122c6c-2af8-4661-b823-4525cb1e675e nodeName:}" failed. No retries permitted until 2026-01-27 09:09:58.823277035 +0000 UTC m=+983.114371926 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/73122c6c-2af8-4661-b823-4525cb1e675e-webhook-certs") pod "openstack-operator-controller-manager-76958f4d87-lxlgc" (UID: "73122c6c-2af8-4661-b823-4525cb1e675e") : secret "webhook-server-cert" not found Jan 27 09:09:46 crc kubenswrapper[4985]: I0127 09:09:46.108469 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-75b8f798ff-xhnxv" Jan 27 09:09:46 crc kubenswrapper[4985]: I0127 09:09:46.120991 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-78b8f8fd84-jjq84" podStartSLOduration=8.547123382 podStartE2EDuration="21.120975573s" podCreationTimestamp="2026-01-27 09:09:25 +0000 UTC" firstStartedPulling="2026-01-27 09:09:27.69322737 +0000 UTC m=+951.984322211" lastFinishedPulling="2026-01-27 09:09:40.267079561 +0000 UTC m=+964.558174402" observedRunningTime="2026-01-27 09:09:41.338294533 +0000 UTC m=+965.629389374" watchObservedRunningTime="2026-01-27 09:09:46.120975573 +0000 UTC m=+970.412070414" Jan 27 09:09:46 crc kubenswrapper[4985]: I0127 09:09:46.186443 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-76d4d5b8f9-7r9cn" Jan 27 09:09:46 crc kubenswrapper[4985]: I0127 09:09:46.256607 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-lgpgn" Jan 27 09:09:46 crc kubenswrapper[4985]: I0127 09:09:46.278159 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-v7nj2" Jan 27 09:09:46 crc kubenswrapper[4985]: I0127 09:09:46.342545 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/ironic-operator-controller-manager-58865f87b4-v6n6c" Jan 27 09:09:46 crc kubenswrapper[4985]: I0127 09:09:46.397557 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-78b8f8fd84-jjq84" Jan 27 09:09:46 crc kubenswrapper[4985]: I0127 09:09:46.496856 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-84d5bb46b-tgqr5" Jan 27 09:09:46 crc kubenswrapper[4985]: I0127 09:09:46.556710 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-khxgm" Jan 27 09:09:46 crc kubenswrapper[4985]: I0127 09:09:46.584427 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-569695f6c5-4bggc" Jan 27 09:09:46 crc kubenswrapper[4985]: I0127 09:09:46.731704 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-7bf4858b78-wcmtg" Jan 27 09:09:46 crc kubenswrapper[4985]: I0127 09:09:46.870013 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-6vbp6" Jan 27 09:09:47 crc kubenswrapper[4985]: I0127 09:09:47.056383 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-7748d79f84-fc5hb" Jan 27 09:09:49 crc kubenswrapper[4985]: I0127 09:09:49.935295 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6476466c7c-tc8kq" event={"ID":"c59bd9e5-d547-4c13-ad44-36984b2c7b7e","Type":"ContainerStarted","Data":"97e1007ca3e7045bf0ecea6d50149d87bc99866b12401b332dd802be91524d98"} Jan 27 09:09:49 crc kubenswrapper[4985]: I0127 09:09:49.936068 
4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-6476466c7c-tc8kq" Jan 27 09:09:49 crc kubenswrapper[4985]: I0127 09:09:49.949947 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-6476466c7c-tc8kq" podStartSLOduration=2.778180726 podStartE2EDuration="23.949931325s" podCreationTimestamp="2026-01-27 09:09:26 +0000 UTC" firstStartedPulling="2026-01-27 09:09:28.237877689 +0000 UTC m=+952.528972540" lastFinishedPulling="2026-01-27 09:09:49.409628298 +0000 UTC m=+973.700723139" observedRunningTime="2026-01-27 09:09:49.949805862 +0000 UTC m=+974.240900723" watchObservedRunningTime="2026-01-27 09:09:49.949931325 +0000 UTC m=+974.241026166" Jan 27 09:09:53 crc kubenswrapper[4985]: I0127 09:09:53.965010 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5fdc687f5-gbl76" event={"ID":"62b33436-aec3-4e07-a880-cefb55ec47be","Type":"ContainerStarted","Data":"5a0729fcc697b413e8239a2c4448824e9217fd8fdaec1650cfb69159c7da0ace"} Jan 27 09:09:53 crc kubenswrapper[4985]: I0127 09:09:53.965635 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-5fdc687f5-gbl76" Jan 27 09:09:53 crc kubenswrapper[4985]: I0127 09:09:53.966550 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-f8d82" event={"ID":"ddc2f9b8-c7e5-4836-805f-e3cb7ef1ca2a","Type":"ContainerStarted","Data":"5e97709895bac20d170d992860007ab695755c766040a9c3a2417f393e75f7f4"} Jan 27 09:09:53 crc kubenswrapper[4985]: I0127 09:09:53.968582 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7db57dc8bf-bv8sl" 
event={"ID":"12271ff4-e21a-43f4-8995-9e9257e11067","Type":"ContainerStarted","Data":"d67b88988565ca47bd78e1bd8f03972b16863d7f90836042a5480a06ade82dd4"} Jan 27 09:09:53 crc kubenswrapper[4985]: I0127 09:09:53.968832 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-7db57dc8bf-bv8sl" Jan 27 09:09:53 crc kubenswrapper[4985]: I0127 09:09:53.972134 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-phnxc" event={"ID":"e5c46366-d781-46fe-a7c1-d43fd82c4259","Type":"ContainerStarted","Data":"b278d9e14d643b1db73dfa55d74994724e2796e8113caf78867da26d118672b9"} Jan 27 09:09:53 crc kubenswrapper[4985]: I0127 09:09:53.972338 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-phnxc" Jan 27 09:09:53 crc kubenswrapper[4985]: I0127 09:09:53.973858 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-65596dbf77-hwbtq" event={"ID":"e47ed97b-bb77-4d2a-899e-87c657f316d7","Type":"ContainerStarted","Data":"d98f4616f9d3b0fab301afc4f930f6c8c28f89c2728b625403671139fd690200"} Jan 27 09:09:53 crc kubenswrapper[4985]: I0127 09:09:53.974046 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-65596dbf77-hwbtq" Jan 27 09:09:53 crc kubenswrapper[4985]: I0127 09:09:53.977182 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-74ffd97575-wvnwv" event={"ID":"989c908a-4026-4ebd-9b57-0f9e2701b91a","Type":"ContainerStarted","Data":"1682ef849a66e49ab7cbfebbde4db2c40c04641eb1f6eff1f555402fc2ccb1d5"} Jan 27 09:09:53 crc kubenswrapper[4985]: I0127 09:09:53.977315 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/nova-operator-controller-manager-74ffd97575-wvnwv" Jan 27 09:09:53 crc kubenswrapper[4985]: I0127 09:09:53.984125 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-5fdc687f5-gbl76" podStartSLOduration=3.488394125 podStartE2EDuration="28.984109624s" podCreationTimestamp="2026-01-27 09:09:25 +0000 UTC" firstStartedPulling="2026-01-27 09:09:27.71905035 +0000 UTC m=+952.010145201" lastFinishedPulling="2026-01-27 09:09:53.214765859 +0000 UTC m=+977.505860700" observedRunningTime="2026-01-27 09:09:53.9792413 +0000 UTC m=+978.270336141" watchObservedRunningTime="2026-01-27 09:09:53.984109624 +0000 UTC m=+978.275204465" Jan 27 09:09:53 crc kubenswrapper[4985]: I0127 09:09:53.998285 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-74ffd97575-wvnwv" podStartSLOduration=2.797310459 podStartE2EDuration="27.998265362s" podCreationTimestamp="2026-01-27 09:09:26 +0000 UTC" firstStartedPulling="2026-01-27 09:09:28.009684664 +0000 UTC m=+952.300779505" lastFinishedPulling="2026-01-27 09:09:53.210639567 +0000 UTC m=+977.501734408" observedRunningTime="2026-01-27 09:09:53.995264639 +0000 UTC m=+978.286359480" watchObservedRunningTime="2026-01-27 09:09:53.998265362 +0000 UTC m=+978.289360203" Jan 27 09:09:54 crc kubenswrapper[4985]: I0127 09:09:54.015044 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-f8d82" podStartSLOduration=2.996311897 podStartE2EDuration="28.015029251s" podCreationTimestamp="2026-01-27 09:09:26 +0000 UTC" firstStartedPulling="2026-01-27 09:09:28.262714292 +0000 UTC m=+952.553809133" lastFinishedPulling="2026-01-27 09:09:53.281431646 +0000 UTC m=+977.572526487" observedRunningTime="2026-01-27 09:09:54.012716437 +0000 UTC m=+978.303811278" watchObservedRunningTime="2026-01-27 
09:09:54.015029251 +0000 UTC m=+978.306124092" Jan 27 09:09:54 crc kubenswrapper[4985]: I0127 09:09:54.031533 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-65596dbf77-hwbtq" podStartSLOduration=2.830122219 podStartE2EDuration="28.031501703s" podCreationTimestamp="2026-01-27 09:09:26 +0000 UTC" firstStartedPulling="2026-01-27 09:09:28.01536289 +0000 UTC m=+952.306457721" lastFinishedPulling="2026-01-27 09:09:53.216742374 +0000 UTC m=+977.507837205" observedRunningTime="2026-01-27 09:09:54.029869828 +0000 UTC m=+978.320964669" watchObservedRunningTime="2026-01-27 09:09:54.031501703 +0000 UTC m=+978.322596544" Jan 27 09:09:54 crc kubenswrapper[4985]: I0127 09:09:54.046713 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-phnxc" podStartSLOduration=3.089104949 podStartE2EDuration="28.046691458s" podCreationTimestamp="2026-01-27 09:09:26 +0000 UTC" firstStartedPulling="2026-01-27 09:09:28.253034687 +0000 UTC m=+952.544129548" lastFinishedPulling="2026-01-27 09:09:53.210621216 +0000 UTC m=+977.501716057" observedRunningTime="2026-01-27 09:09:54.045662111 +0000 UTC m=+978.336756972" watchObservedRunningTime="2026-01-27 09:09:54.046691458 +0000 UTC m=+978.337786299" Jan 27 09:09:54 crc kubenswrapper[4985]: I0127 09:09:54.064180 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-7db57dc8bf-bv8sl" podStartSLOduration=2.82653309 podStartE2EDuration="28.064156078s" podCreationTimestamp="2026-01-27 09:09:26 +0000 UTC" firstStartedPulling="2026-01-27 09:09:28.001360165 +0000 UTC m=+952.292455006" lastFinishedPulling="2026-01-27 09:09:53.238983153 +0000 UTC m=+977.530077994" observedRunningTime="2026-01-27 09:09:54.058940415 +0000 UTC m=+978.350035256" watchObservedRunningTime="2026-01-27 09:09:54.064156078 +0000 UTC 
m=+978.355250919" Jan 27 09:09:54 crc kubenswrapper[4985]: I0127 09:09:54.984631 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-78f8b7b89c-g9wnq" event={"ID":"d0f751b3-5f7d-4756-b959-960ebca3eeaf","Type":"ContainerStarted","Data":"06f9f820d9c9eb4ba8477432b0d02fb0308aaedbe78d81682d8f2b226943e27d"} Jan 27 09:09:55 crc kubenswrapper[4985]: I0127 09:09:55.023133 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-78f8b7b89c-g9wnq" podStartSLOduration=3.848844078 podStartE2EDuration="30.023113538s" podCreationTimestamp="2026-01-27 09:09:25 +0000 UTC" firstStartedPulling="2026-01-27 09:09:27.670271569 +0000 UTC m=+951.961366410" lastFinishedPulling="2026-01-27 09:09:53.844541029 +0000 UTC m=+978.135635870" observedRunningTime="2026-01-27 09:09:55.021667278 +0000 UTC m=+979.312762119" watchObservedRunningTime="2026-01-27 09:09:55.023113538 +0000 UTC m=+979.314208369" Jan 27 09:09:56 crc kubenswrapper[4985]: I0127 09:09:56.614769 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-78f8b7b89c-g9wnq" Jan 27 09:09:57 crc kubenswrapper[4985]: I0127 09:09:57.312853 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-6476466c7c-tc8kq" Jan 27 09:09:57 crc kubenswrapper[4985]: I0127 09:09:57.886197 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/457e511d-a1e8-453d-adfb-68177508f318-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-54sgw\" (UID: \"457e511d-a1e8-453d-adfb-68177508f318\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-54sgw" Jan 27 09:09:57 crc kubenswrapper[4985]: I0127 09:09:57.892448 4985 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"cert\" (UniqueName: \"kubernetes.io/secret/457e511d-a1e8-453d-adfb-68177508f318-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-54sgw\" (UID: \"457e511d-a1e8-453d-adfb-68177508f318\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-54sgw" Jan 27 09:09:58 crc kubenswrapper[4985]: I0127 09:09:58.071972 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-ds7lv" Jan 27 09:09:58 crc kubenswrapper[4985]: I0127 09:09:58.079901 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-54sgw" Jan 27 09:09:58 crc kubenswrapper[4985]: I0127 09:09:58.393019 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c1774a8d-aed6-4be4-80c3-1182fb0456d3-cert\") pod \"openstack-baremetal-operator-controller-manager-7bd95ffd6dj2sgf\" (UID: \"c1774a8d-aed6-4be4-80c3-1182fb0456d3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7bd95ffd6dj2sgf" Jan 27 09:09:58 crc kubenswrapper[4985]: I0127 09:09:58.400139 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c1774a8d-aed6-4be4-80c3-1182fb0456d3-cert\") pod \"openstack-baremetal-operator-controller-manager-7bd95ffd6dj2sgf\" (UID: \"c1774a8d-aed6-4be4-80c3-1182fb0456d3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7bd95ffd6dj2sgf" Jan 27 09:09:58 crc kubenswrapper[4985]: W0127 09:09:58.502213 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod457e511d_a1e8_453d_adfb_68177508f318.slice/crio-1c754eba71f96b9dbee090218a95d980d4bb3291e6736dff5bc07a24147c23c1 WatchSource:0}: Error finding container 
1c754eba71f96b9dbee090218a95d980d4bb3291e6736dff5bc07a24147c23c1: Status 404 returned error can't find the container with id 1c754eba71f96b9dbee090218a95d980d4bb3291e6736dff5bc07a24147c23c1 Jan 27 09:09:58 crc kubenswrapper[4985]: I0127 09:09:58.502426 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-54ccf4f85d-54sgw"] Jan 27 09:09:58 crc kubenswrapper[4985]: I0127 09:09:58.596060 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-fnhpv" Jan 27 09:09:58 crc kubenswrapper[4985]: I0127 09:09:58.605165 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7bd95ffd6dj2sgf" Jan 27 09:09:58 crc kubenswrapper[4985]: I0127 09:09:58.882289 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7bd95ffd6dj2sgf"] Jan 27 09:09:58 crc kubenswrapper[4985]: I0127 09:09:58.901920 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/73122c6c-2af8-4661-b823-4525cb1e675e-webhook-certs\") pod \"openstack-operator-controller-manager-76958f4d87-lxlgc\" (UID: \"73122c6c-2af8-4661-b823-4525cb1e675e\") " pod="openstack-operators/openstack-operator-controller-manager-76958f4d87-lxlgc" Jan 27 09:09:58 crc kubenswrapper[4985]: I0127 09:09:58.901986 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/73122c6c-2af8-4661-b823-4525cb1e675e-metrics-certs\") pod \"openstack-operator-controller-manager-76958f4d87-lxlgc\" (UID: \"73122c6c-2af8-4661-b823-4525cb1e675e\") " pod="openstack-operators/openstack-operator-controller-manager-76958f4d87-lxlgc" Jan 27 09:09:58 crc kubenswrapper[4985]: I0127 09:09:58.905566 
4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/73122c6c-2af8-4661-b823-4525cb1e675e-webhook-certs\") pod \"openstack-operator-controller-manager-76958f4d87-lxlgc\" (UID: \"73122c6c-2af8-4661-b823-4525cb1e675e\") " pod="openstack-operators/openstack-operator-controller-manager-76958f4d87-lxlgc" Jan 27 09:09:58 crc kubenswrapper[4985]: I0127 09:09:58.905610 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/73122c6c-2af8-4661-b823-4525cb1e675e-metrics-certs\") pod \"openstack-operator-controller-manager-76958f4d87-lxlgc\" (UID: \"73122c6c-2af8-4661-b823-4525cb1e675e\") " pod="openstack-operators/openstack-operator-controller-manager-76958f4d87-lxlgc" Jan 27 09:09:59 crc kubenswrapper[4985]: I0127 09:09:59.007932 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-54sgw" event={"ID":"457e511d-a1e8-453d-adfb-68177508f318","Type":"ContainerStarted","Data":"1c754eba71f96b9dbee090218a95d980d4bb3291e6736dff5bc07a24147c23c1"} Jan 27 09:09:59 crc kubenswrapper[4985]: I0127 09:09:59.009071 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7bd95ffd6dj2sgf" event={"ID":"c1774a8d-aed6-4be4-80c3-1182fb0456d3","Type":"ContainerStarted","Data":"4c26e64f55555776a4ce52cff1f2cde02c2c510af3446e4d663a7a2ca177a1f3"} Jan 27 09:09:59 crc kubenswrapper[4985]: I0127 09:09:59.152306 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-zrxqn" Jan 27 09:09:59 crc kubenswrapper[4985]: I0127 09:09:59.161398 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-76958f4d87-lxlgc" Jan 27 09:09:59 crc kubenswrapper[4985]: I0127 09:09:59.389288 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-76958f4d87-lxlgc"] Jan 27 09:09:59 crc kubenswrapper[4985]: W0127 09:09:59.400666 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod73122c6c_2af8_4661_b823_4525cb1e675e.slice/crio-e3c254b3e1ea70ec0d7555e3d1a55c7a4eda2e6092c76bed407940595fdba038 WatchSource:0}: Error finding container e3c254b3e1ea70ec0d7555e3d1a55c7a4eda2e6092c76bed407940595fdba038: Status 404 returned error can't find the container with id e3c254b3e1ea70ec0d7555e3d1a55c7a4eda2e6092c76bed407940595fdba038 Jan 27 09:10:00 crc kubenswrapper[4985]: I0127 09:10:00.017052 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-76958f4d87-lxlgc" event={"ID":"73122c6c-2af8-4661-b823-4525cb1e675e","Type":"ContainerStarted","Data":"b0be20896054bf1ebccaa64b76a1ed77179dbb184b296cbd9c73432d88a624bf"} Jan 27 09:10:00 crc kubenswrapper[4985]: I0127 09:10:00.017416 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-76958f4d87-lxlgc" event={"ID":"73122c6c-2af8-4661-b823-4525cb1e675e","Type":"ContainerStarted","Data":"e3c254b3e1ea70ec0d7555e3d1a55c7a4eda2e6092c76bed407940595fdba038"} Jan 27 09:10:00 crc kubenswrapper[4985]: I0127 09:10:00.017436 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-76958f4d87-lxlgc" Jan 27 09:10:00 crc kubenswrapper[4985]: I0127 09:10:00.052533 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-76958f4d87-lxlgc" 
podStartSLOduration=34.052496581 podStartE2EDuration="34.052496581s" podCreationTimestamp="2026-01-27 09:09:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:10:00.041810888 +0000 UTC m=+984.332905739" watchObservedRunningTime="2026-01-27 09:10:00.052496581 +0000 UTC m=+984.343591422" Jan 27 09:10:02 crc kubenswrapper[4985]: I0127 09:10:02.032561 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7bd95ffd6dj2sgf" event={"ID":"c1774a8d-aed6-4be4-80c3-1182fb0456d3","Type":"ContainerStarted","Data":"f1be8f3259d624c14a6b37f9d7c2eefabbb030bd1ea737b224815dbe27e3e0f7"} Jan 27 09:10:02 crc kubenswrapper[4985]: I0127 09:10:02.033059 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7bd95ffd6dj2sgf" Jan 27 09:10:02 crc kubenswrapper[4985]: I0127 09:10:02.034193 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-54sgw" event={"ID":"457e511d-a1e8-453d-adfb-68177508f318","Type":"ContainerStarted","Data":"eaa735a7fde57aa311f0dd06ce4e50875a22ff7d103d3d4cf88bb6dd4992ce03"} Jan 27 09:10:02 crc kubenswrapper[4985]: I0127 09:10:02.034384 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-54sgw" Jan 27 09:10:02 crc kubenswrapper[4985]: I0127 09:10:02.068834 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7bd95ffd6dj2sgf" podStartSLOduration=34.039507315 podStartE2EDuration="36.068818629s" podCreationTimestamp="2026-01-27 09:09:26 +0000 UTC" firstStartedPulling="2026-01-27 09:09:58.855595149 +0000 UTC m=+983.146690010" lastFinishedPulling="2026-01-27 
09:10:00.884906483 +0000 UTC m=+985.176001324" observedRunningTime="2026-01-27 09:10:02.067271896 +0000 UTC m=+986.358366777" watchObservedRunningTime="2026-01-27 09:10:02.068818629 +0000 UTC m=+986.359913470" Jan 27 09:10:02 crc kubenswrapper[4985]: I0127 09:10:02.089828 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-54sgw" podStartSLOduration=34.716353259 podStartE2EDuration="37.089802154s" podCreationTimestamp="2026-01-27 09:09:25 +0000 UTC" firstStartedPulling="2026-01-27 09:09:58.504335753 +0000 UTC m=+982.795430594" lastFinishedPulling="2026-01-27 09:10:00.877784648 +0000 UTC m=+985.168879489" observedRunningTime="2026-01-27 09:10:02.08673409 +0000 UTC m=+986.377828931" watchObservedRunningTime="2026-01-27 09:10:02.089802154 +0000 UTC m=+986.380897025" Jan 27 09:10:06 crc kubenswrapper[4985]: I0127 09:10:06.136387 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-5fdc687f5-gbl76" Jan 27 09:10:06 crc kubenswrapper[4985]: I0127 09:10:06.618453 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-78f8b7b89c-g9wnq" Jan 27 09:10:06 crc kubenswrapper[4985]: I0127 09:10:06.706676 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-74ffd97575-wvnwv" Jan 27 09:10:07 crc kubenswrapper[4985]: I0127 09:10:07.077372 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-65596dbf77-hwbtq" Jan 27 09:10:07 crc kubenswrapper[4985]: I0127 09:10:07.161386 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-7db57dc8bf-bv8sl" Jan 27 09:10:07 crc kubenswrapper[4985]: I0127 
09:10:07.272083 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-phnxc" Jan 27 09:10:08 crc kubenswrapper[4985]: I0127 09:10:08.106756 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-54sgw" Jan 27 09:10:08 crc kubenswrapper[4985]: I0127 09:10:08.612296 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7bd95ffd6dj2sgf" Jan 27 09:10:09 crc kubenswrapper[4985]: I0127 09:10:09.170950 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-76958f4d87-lxlgc" Jan 27 09:10:11 crc kubenswrapper[4985]: I0127 09:10:11.828362 4985 patch_prober.go:28] interesting pod/machine-config-daemon-lp9n5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 09:10:11 crc kubenswrapper[4985]: I0127 09:10:11.828962 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 09:10:23 crc kubenswrapper[4985]: I0127 09:10:23.484675 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-frk9p"] Jan 27 09:10:23 crc kubenswrapper[4985]: I0127 09:10:23.486601 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-frk9p" Jan 27 09:10:23 crc kubenswrapper[4985]: I0127 09:10:23.488769 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-7fbx7" Jan 27 09:10:23 crc kubenswrapper[4985]: I0127 09:10:23.490232 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 27 09:10:23 crc kubenswrapper[4985]: I0127 09:10:23.490244 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 27 09:10:23 crc kubenswrapper[4985]: I0127 09:10:23.491341 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 27 09:10:23 crc kubenswrapper[4985]: I0127 09:10:23.498303 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-frk9p"] Jan 27 09:10:23 crc kubenswrapper[4985]: I0127 09:10:23.542831 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-g64kg"] Jan 27 09:10:23 crc kubenswrapper[4985]: I0127 09:10:23.543890 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-g64kg" Jan 27 09:10:23 crc kubenswrapper[4985]: I0127 09:10:23.549287 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 27 09:10:23 crc kubenswrapper[4985]: I0127 09:10:23.563845 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-g64kg"] Jan 27 09:10:23 crc kubenswrapper[4985]: I0127 09:10:23.585878 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4511a699-2594-42e1-a221-644ca46a947b-config\") pod \"dnsmasq-dns-84bb9d8bd9-frk9p\" (UID: \"4511a699-2594-42e1-a221-644ca46a947b\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-frk9p" Jan 27 09:10:23 crc kubenswrapper[4985]: I0127 09:10:23.585931 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbfwl\" (UniqueName: \"kubernetes.io/projected/4511a699-2594-42e1-a221-644ca46a947b-kube-api-access-mbfwl\") pod \"dnsmasq-dns-84bb9d8bd9-frk9p\" (UID: \"4511a699-2594-42e1-a221-644ca46a947b\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-frk9p" Jan 27 09:10:23 crc kubenswrapper[4985]: I0127 09:10:23.687595 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbfwl\" (UniqueName: \"kubernetes.io/projected/4511a699-2594-42e1-a221-644ca46a947b-kube-api-access-mbfwl\") pod \"dnsmasq-dns-84bb9d8bd9-frk9p\" (UID: \"4511a699-2594-42e1-a221-644ca46a947b\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-frk9p" Jan 27 09:10:23 crc kubenswrapper[4985]: I0127 09:10:23.688793 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/77f0b8a0-1c09-4604-8516-d87c930e9b14-dns-svc\") pod \"dnsmasq-dns-5f854695bc-g64kg\" (UID: \"77f0b8a0-1c09-4604-8516-d87c930e9b14\") " pod="openstack/dnsmasq-dns-5f854695bc-g64kg" 
Jan 27 09:10:23 crc kubenswrapper[4985]: I0127 09:10:23.688968 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77f0b8a0-1c09-4604-8516-d87c930e9b14-config\") pod \"dnsmasq-dns-5f854695bc-g64kg\" (UID: \"77f0b8a0-1c09-4604-8516-d87c930e9b14\") " pod="openstack/dnsmasq-dns-5f854695bc-g64kg" Jan 27 09:10:23 crc kubenswrapper[4985]: I0127 09:10:23.689048 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5dtw\" (UniqueName: \"kubernetes.io/projected/77f0b8a0-1c09-4604-8516-d87c930e9b14-kube-api-access-q5dtw\") pod \"dnsmasq-dns-5f854695bc-g64kg\" (UID: \"77f0b8a0-1c09-4604-8516-d87c930e9b14\") " pod="openstack/dnsmasq-dns-5f854695bc-g64kg" Jan 27 09:10:23 crc kubenswrapper[4985]: I0127 09:10:23.689174 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4511a699-2594-42e1-a221-644ca46a947b-config\") pod \"dnsmasq-dns-84bb9d8bd9-frk9p\" (UID: \"4511a699-2594-42e1-a221-644ca46a947b\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-frk9p" Jan 27 09:10:23 crc kubenswrapper[4985]: I0127 09:10:23.690077 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4511a699-2594-42e1-a221-644ca46a947b-config\") pod \"dnsmasq-dns-84bb9d8bd9-frk9p\" (UID: \"4511a699-2594-42e1-a221-644ca46a947b\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-frk9p" Jan 27 09:10:23 crc kubenswrapper[4985]: I0127 09:10:23.711774 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbfwl\" (UniqueName: \"kubernetes.io/projected/4511a699-2594-42e1-a221-644ca46a947b-kube-api-access-mbfwl\") pod \"dnsmasq-dns-84bb9d8bd9-frk9p\" (UID: \"4511a699-2594-42e1-a221-644ca46a947b\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-frk9p" Jan 27 09:10:23 crc kubenswrapper[4985]: I0127 
09:10:23.790255 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/77f0b8a0-1c09-4604-8516-d87c930e9b14-dns-svc\") pod \"dnsmasq-dns-5f854695bc-g64kg\" (UID: \"77f0b8a0-1c09-4604-8516-d87c930e9b14\") " pod="openstack/dnsmasq-dns-5f854695bc-g64kg" Jan 27 09:10:23 crc kubenswrapper[4985]: I0127 09:10:23.790410 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77f0b8a0-1c09-4604-8516-d87c930e9b14-config\") pod \"dnsmasq-dns-5f854695bc-g64kg\" (UID: \"77f0b8a0-1c09-4604-8516-d87c930e9b14\") " pod="openstack/dnsmasq-dns-5f854695bc-g64kg" Jan 27 09:10:23 crc kubenswrapper[4985]: I0127 09:10:23.790483 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5dtw\" (UniqueName: \"kubernetes.io/projected/77f0b8a0-1c09-4604-8516-d87c930e9b14-kube-api-access-q5dtw\") pod \"dnsmasq-dns-5f854695bc-g64kg\" (UID: \"77f0b8a0-1c09-4604-8516-d87c930e9b14\") " pod="openstack/dnsmasq-dns-5f854695bc-g64kg" Jan 27 09:10:23 crc kubenswrapper[4985]: I0127 09:10:23.791812 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/77f0b8a0-1c09-4604-8516-d87c930e9b14-dns-svc\") pod \"dnsmasq-dns-5f854695bc-g64kg\" (UID: \"77f0b8a0-1c09-4604-8516-d87c930e9b14\") " pod="openstack/dnsmasq-dns-5f854695bc-g64kg" Jan 27 09:10:23 crc kubenswrapper[4985]: I0127 09:10:23.791987 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77f0b8a0-1c09-4604-8516-d87c930e9b14-config\") pod \"dnsmasq-dns-5f854695bc-g64kg\" (UID: \"77f0b8a0-1c09-4604-8516-d87c930e9b14\") " pod="openstack/dnsmasq-dns-5f854695bc-g64kg" Jan 27 09:10:23 crc kubenswrapper[4985]: I0127 09:10:23.804109 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-frk9p" Jan 27 09:10:23 crc kubenswrapper[4985]: I0127 09:10:23.817767 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5dtw\" (UniqueName: \"kubernetes.io/projected/77f0b8a0-1c09-4604-8516-d87c930e9b14-kube-api-access-q5dtw\") pod \"dnsmasq-dns-5f854695bc-g64kg\" (UID: \"77f0b8a0-1c09-4604-8516-d87c930e9b14\") " pod="openstack/dnsmasq-dns-5f854695bc-g64kg" Jan 27 09:10:23 crc kubenswrapper[4985]: I0127 09:10:23.869807 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-g64kg" Jan 27 09:10:24 crc kubenswrapper[4985]: I0127 09:10:24.111862 4985 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 09:10:24 crc kubenswrapper[4985]: I0127 09:10:24.113965 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-frk9p"] Jan 27 09:10:24 crc kubenswrapper[4985]: I0127 09:10:24.186888 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84bb9d8bd9-frk9p" event={"ID":"4511a699-2594-42e1-a221-644ca46a947b","Type":"ContainerStarted","Data":"e7d5bcde5d4cb94f91f425d6c7a2c88e513a1b55815b5bb9bb8e290b11f3cabd"} Jan 27 09:10:24 crc kubenswrapper[4985]: W0127 09:10:24.350736 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod77f0b8a0_1c09_4604_8516_d87c930e9b14.slice/crio-d63a2b96813d76db26ce52fb0ffa5dfeb990ac3011039cfd3a5301a17d2f6534 WatchSource:0}: Error finding container d63a2b96813d76db26ce52fb0ffa5dfeb990ac3011039cfd3a5301a17d2f6534: Status 404 returned error can't find the container with id d63a2b96813d76db26ce52fb0ffa5dfeb990ac3011039cfd3a5301a17d2f6534 Jan 27 09:10:24 crc kubenswrapper[4985]: I0127 09:10:24.352463 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-g64kg"] 
Jan 27 09:10:25 crc kubenswrapper[4985]: I0127 09:10:25.198017 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f854695bc-g64kg" event={"ID":"77f0b8a0-1c09-4604-8516-d87c930e9b14","Type":"ContainerStarted","Data":"d63a2b96813d76db26ce52fb0ffa5dfeb990ac3011039cfd3a5301a17d2f6534"} Jan 27 09:10:26 crc kubenswrapper[4985]: I0127 09:10:26.350524 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-g64kg"] Jan 27 09:10:26 crc kubenswrapper[4985]: I0127 09:10:26.387678 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-tdwsb"] Jan 27 09:10:26 crc kubenswrapper[4985]: I0127 09:10:26.389449 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-744ffd65bc-tdwsb" Jan 27 09:10:26 crc kubenswrapper[4985]: I0127 09:10:26.412433 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-tdwsb"] Jan 27 09:10:26 crc kubenswrapper[4985]: I0127 09:10:26.442992 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9flj\" (UniqueName: \"kubernetes.io/projected/1aede4b4-2eaf-445c-9c8c-5cc8839d9609-kube-api-access-f9flj\") pod \"dnsmasq-dns-744ffd65bc-tdwsb\" (UID: \"1aede4b4-2eaf-445c-9c8c-5cc8839d9609\") " pod="openstack/dnsmasq-dns-744ffd65bc-tdwsb" Jan 27 09:10:26 crc kubenswrapper[4985]: I0127 09:10:26.443051 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1aede4b4-2eaf-445c-9c8c-5cc8839d9609-dns-svc\") pod \"dnsmasq-dns-744ffd65bc-tdwsb\" (UID: \"1aede4b4-2eaf-445c-9c8c-5cc8839d9609\") " pod="openstack/dnsmasq-dns-744ffd65bc-tdwsb" Jan 27 09:10:26 crc kubenswrapper[4985]: I0127 09:10:26.443250 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1aede4b4-2eaf-445c-9c8c-5cc8839d9609-config\") pod \"dnsmasq-dns-744ffd65bc-tdwsb\" (UID: \"1aede4b4-2eaf-445c-9c8c-5cc8839d9609\") " pod="openstack/dnsmasq-dns-744ffd65bc-tdwsb" Jan 27 09:10:26 crc kubenswrapper[4985]: I0127 09:10:26.546334 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9flj\" (UniqueName: \"kubernetes.io/projected/1aede4b4-2eaf-445c-9c8c-5cc8839d9609-kube-api-access-f9flj\") pod \"dnsmasq-dns-744ffd65bc-tdwsb\" (UID: \"1aede4b4-2eaf-445c-9c8c-5cc8839d9609\") " pod="openstack/dnsmasq-dns-744ffd65bc-tdwsb" Jan 27 09:10:26 crc kubenswrapper[4985]: I0127 09:10:26.546845 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1aede4b4-2eaf-445c-9c8c-5cc8839d9609-dns-svc\") pod \"dnsmasq-dns-744ffd65bc-tdwsb\" (UID: \"1aede4b4-2eaf-445c-9c8c-5cc8839d9609\") " pod="openstack/dnsmasq-dns-744ffd65bc-tdwsb" Jan 27 09:10:26 crc kubenswrapper[4985]: I0127 09:10:26.547194 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1aede4b4-2eaf-445c-9c8c-5cc8839d9609-config\") pod \"dnsmasq-dns-744ffd65bc-tdwsb\" (UID: \"1aede4b4-2eaf-445c-9c8c-5cc8839d9609\") " pod="openstack/dnsmasq-dns-744ffd65bc-tdwsb" Jan 27 09:10:26 crc kubenswrapper[4985]: I0127 09:10:26.550395 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1aede4b4-2eaf-445c-9c8c-5cc8839d9609-dns-svc\") pod \"dnsmasq-dns-744ffd65bc-tdwsb\" (UID: \"1aede4b4-2eaf-445c-9c8c-5cc8839d9609\") " pod="openstack/dnsmasq-dns-744ffd65bc-tdwsb" Jan 27 09:10:26 crc kubenswrapper[4985]: I0127 09:10:26.553451 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1aede4b4-2eaf-445c-9c8c-5cc8839d9609-config\") pod \"dnsmasq-dns-744ffd65bc-tdwsb\" (UID: 
\"1aede4b4-2eaf-445c-9c8c-5cc8839d9609\") " pod="openstack/dnsmasq-dns-744ffd65bc-tdwsb" Jan 27 09:10:26 crc kubenswrapper[4985]: I0127 09:10:26.578845 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9flj\" (UniqueName: \"kubernetes.io/projected/1aede4b4-2eaf-445c-9c8c-5cc8839d9609-kube-api-access-f9flj\") pod \"dnsmasq-dns-744ffd65bc-tdwsb\" (UID: \"1aede4b4-2eaf-445c-9c8c-5cc8839d9609\") " pod="openstack/dnsmasq-dns-744ffd65bc-tdwsb" Jan 27 09:10:26 crc kubenswrapper[4985]: I0127 09:10:26.690947 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-frk9p"] Jan 27 09:10:26 crc kubenswrapper[4985]: I0127 09:10:26.711957 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-744ffd65bc-tdwsb" Jan 27 09:10:26 crc kubenswrapper[4985]: I0127 09:10:26.717388 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-svzqc"] Jan 27 09:10:26 crc kubenswrapper[4985]: I0127 09:10:26.718716 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-svzqc" Jan 27 09:10:26 crc kubenswrapper[4985]: I0127 09:10:26.734245 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-svzqc"] Jan 27 09:10:26 crc kubenswrapper[4985]: I0127 09:10:26.852247 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d99c676e-d269-4d8a-a67a-71d591467b7c-dns-svc\") pod \"dnsmasq-dns-95f5f6995-svzqc\" (UID: \"d99c676e-d269-4d8a-a67a-71d591467b7c\") " pod="openstack/dnsmasq-dns-95f5f6995-svzqc" Jan 27 09:10:26 crc kubenswrapper[4985]: I0127 09:10:26.852341 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfvpt\" (UniqueName: \"kubernetes.io/projected/d99c676e-d269-4d8a-a67a-71d591467b7c-kube-api-access-kfvpt\") pod \"dnsmasq-dns-95f5f6995-svzqc\" (UID: \"d99c676e-d269-4d8a-a67a-71d591467b7c\") " pod="openstack/dnsmasq-dns-95f5f6995-svzqc" Jan 27 09:10:26 crc kubenswrapper[4985]: I0127 09:10:26.852398 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d99c676e-d269-4d8a-a67a-71d591467b7c-config\") pod \"dnsmasq-dns-95f5f6995-svzqc\" (UID: \"d99c676e-d269-4d8a-a67a-71d591467b7c\") " pod="openstack/dnsmasq-dns-95f5f6995-svzqc" Jan 27 09:10:26 crc kubenswrapper[4985]: I0127 09:10:26.953741 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d99c676e-d269-4d8a-a67a-71d591467b7c-dns-svc\") pod \"dnsmasq-dns-95f5f6995-svzqc\" (UID: \"d99c676e-d269-4d8a-a67a-71d591467b7c\") " pod="openstack/dnsmasq-dns-95f5f6995-svzqc" Jan 27 09:10:26 crc kubenswrapper[4985]: I0127 09:10:26.953843 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfvpt\" (UniqueName: 
\"kubernetes.io/projected/d99c676e-d269-4d8a-a67a-71d591467b7c-kube-api-access-kfvpt\") pod \"dnsmasq-dns-95f5f6995-svzqc\" (UID: \"d99c676e-d269-4d8a-a67a-71d591467b7c\") " pod="openstack/dnsmasq-dns-95f5f6995-svzqc" Jan 27 09:10:26 crc kubenswrapper[4985]: I0127 09:10:26.953878 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d99c676e-d269-4d8a-a67a-71d591467b7c-config\") pod \"dnsmasq-dns-95f5f6995-svzqc\" (UID: \"d99c676e-d269-4d8a-a67a-71d591467b7c\") " pod="openstack/dnsmasq-dns-95f5f6995-svzqc" Jan 27 09:10:26 crc kubenswrapper[4985]: I0127 09:10:26.955285 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d99c676e-d269-4d8a-a67a-71d591467b7c-dns-svc\") pod \"dnsmasq-dns-95f5f6995-svzqc\" (UID: \"d99c676e-d269-4d8a-a67a-71d591467b7c\") " pod="openstack/dnsmasq-dns-95f5f6995-svzqc" Jan 27 09:10:26 crc kubenswrapper[4985]: I0127 09:10:26.956532 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d99c676e-d269-4d8a-a67a-71d591467b7c-config\") pod \"dnsmasq-dns-95f5f6995-svzqc\" (UID: \"d99c676e-d269-4d8a-a67a-71d591467b7c\") " pod="openstack/dnsmasq-dns-95f5f6995-svzqc" Jan 27 09:10:26 crc kubenswrapper[4985]: I0127 09:10:26.978566 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfvpt\" (UniqueName: \"kubernetes.io/projected/d99c676e-d269-4d8a-a67a-71d591467b7c-kube-api-access-kfvpt\") pod \"dnsmasq-dns-95f5f6995-svzqc\" (UID: \"d99c676e-d269-4d8a-a67a-71d591467b7c\") " pod="openstack/dnsmasq-dns-95f5f6995-svzqc" Jan 27 09:10:27 crc kubenswrapper[4985]: I0127 09:10:27.046599 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-svzqc"
Jan 27 09:10:27 crc kubenswrapper[4985]: I0127 09:10:27.320851 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-tdwsb"]
Jan 27 09:10:27 crc kubenswrapper[4985]: I0127 09:10:27.580667 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 27 09:10:27 crc kubenswrapper[4985]: I0127 09:10:27.588784 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 27 09:10:27 crc kubenswrapper[4985]: I0127 09:10:27.591991 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-p5fb2"
Jan 27 09:10:27 crc kubenswrapper[4985]: W0127 09:10:27.592268 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd99c676e_d269_4d8a_a67a_71d591467b7c.slice/crio-95dca40623f40802187a4523ba4db1408b08b22f806f0a9510698a281cf542ab WatchSource:0}: Error finding container 95dca40623f40802187a4523ba4db1408b08b22f806f0a9510698a281cf542ab: Status 404 returned error can't find the container with id 95dca40623f40802187a4523ba4db1408b08b22f806f0a9510698a281cf542ab
Jan 27 09:10:27 crc kubenswrapper[4985]: I0127 09:10:27.593032 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-svzqc"]
Jan 27 09:10:27 crc kubenswrapper[4985]: I0127 09:10:27.593747 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Jan 27 09:10:27 crc kubenswrapper[4985]: I0127 09:10:27.593980 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Jan 27 09:10:27 crc kubenswrapper[4985]: I0127 09:10:27.593932 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Jan 27 09:10:27 crc kubenswrapper[4985]: I0127 09:10:27.593944 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Jan 27 09:10:27 crc kubenswrapper[4985]: I0127 09:10:27.594324 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Jan 27 09:10:27 crc kubenswrapper[4985]: I0127 09:10:27.595745 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Jan 27 09:10:27 crc kubenswrapper[4985]: I0127 09:10:27.615316 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 27 09:10:27 crc kubenswrapper[4985]: I0127 09:10:27.663713 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6c6ceb6e-86fb-4658-93ed-8e66302f6396-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"6c6ceb6e-86fb-4658-93ed-8e66302f6396\") " pod="openstack/rabbitmq-server-0"
Jan 27 09:10:27 crc kubenswrapper[4985]: I0127 09:10:27.663793 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6c6ceb6e-86fb-4658-93ed-8e66302f6396-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"6c6ceb6e-86fb-4658-93ed-8e66302f6396\") " pod="openstack/rabbitmq-server-0"
Jan 27 09:10:27 crc kubenswrapper[4985]: I0127 09:10:27.663819 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6c6ceb6e-86fb-4658-93ed-8e66302f6396-pod-info\") pod \"rabbitmq-server-0\" (UID: \"6c6ceb6e-86fb-4658-93ed-8e66302f6396\") " pod="openstack/rabbitmq-server-0"
Jan 27 09:10:27 crc kubenswrapper[4985]: I0127 09:10:27.663848 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6c6ceb6e-86fb-4658-93ed-8e66302f6396-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"6c6ceb6e-86fb-4658-93ed-8e66302f6396\") " pod="openstack/rabbitmq-server-0"
Jan 27 09:10:27 crc kubenswrapper[4985]: I0127 09:10:27.663870 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"6c6ceb6e-86fb-4658-93ed-8e66302f6396\") " pod="openstack/rabbitmq-server-0"
Jan 27 09:10:27 crc kubenswrapper[4985]: I0127 09:10:27.663888 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6c6ceb6e-86fb-4658-93ed-8e66302f6396-config-data\") pod \"rabbitmq-server-0\" (UID: \"6c6ceb6e-86fb-4658-93ed-8e66302f6396\") " pod="openstack/rabbitmq-server-0"
Jan 27 09:10:27 crc kubenswrapper[4985]: I0127 09:10:27.663921 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6c6ceb6e-86fb-4658-93ed-8e66302f6396-server-conf\") pod \"rabbitmq-server-0\" (UID: \"6c6ceb6e-86fb-4658-93ed-8e66302f6396\") " pod="openstack/rabbitmq-server-0"
Jan 27 09:10:27 crc kubenswrapper[4985]: I0127 09:10:27.663938 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6c6ceb6e-86fb-4658-93ed-8e66302f6396-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"6c6ceb6e-86fb-4658-93ed-8e66302f6396\") " pod="openstack/rabbitmq-server-0"
Jan 27 09:10:27 crc kubenswrapper[4985]: I0127 09:10:27.663956 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwp59\" (UniqueName: \"kubernetes.io/projected/6c6ceb6e-86fb-4658-93ed-8e66302f6396-kube-api-access-fwp59\") pod \"rabbitmq-server-0\" (UID: \"6c6ceb6e-86fb-4658-93ed-8e66302f6396\") " pod="openstack/rabbitmq-server-0"
Jan 27 09:10:27 crc kubenswrapper[4985]: I0127 09:10:27.663974 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6c6ceb6e-86fb-4658-93ed-8e66302f6396-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"6c6ceb6e-86fb-4658-93ed-8e66302f6396\") " pod="openstack/rabbitmq-server-0"
Jan 27 09:10:27 crc kubenswrapper[4985]: I0127 09:10:27.664002 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6c6ceb6e-86fb-4658-93ed-8e66302f6396-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"6c6ceb6e-86fb-4658-93ed-8e66302f6396\") " pod="openstack/rabbitmq-server-0"
Jan 27 09:10:27 crc kubenswrapper[4985]: I0127 09:10:27.770179 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6c6ceb6e-86fb-4658-93ed-8e66302f6396-server-conf\") pod \"rabbitmq-server-0\" (UID: \"6c6ceb6e-86fb-4658-93ed-8e66302f6396\") " pod="openstack/rabbitmq-server-0"
Jan 27 09:10:27 crc kubenswrapper[4985]: I0127 09:10:27.770365 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6c6ceb6e-86fb-4658-93ed-8e66302f6396-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"6c6ceb6e-86fb-4658-93ed-8e66302f6396\") " pod="openstack/rabbitmq-server-0"
Jan 27 09:10:27 crc kubenswrapper[4985]: I0127 09:10:27.770407 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwp59\" (UniqueName: \"kubernetes.io/projected/6c6ceb6e-86fb-4658-93ed-8e66302f6396-kube-api-access-fwp59\") pod \"rabbitmq-server-0\" (UID: \"6c6ceb6e-86fb-4658-93ed-8e66302f6396\") " pod="openstack/rabbitmq-server-0"
Jan 27 09:10:27 crc kubenswrapper[4985]: I0127 09:10:27.770437 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6c6ceb6e-86fb-4658-93ed-8e66302f6396-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"6c6ceb6e-86fb-4658-93ed-8e66302f6396\") " pod="openstack/rabbitmq-server-0"
Jan 27 09:10:27 crc kubenswrapper[4985]: I0127 09:10:27.770479 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6c6ceb6e-86fb-4658-93ed-8e66302f6396-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"6c6ceb6e-86fb-4658-93ed-8e66302f6396\") " pod="openstack/rabbitmq-server-0"
Jan 27 09:10:27 crc kubenswrapper[4985]: I0127 09:10:27.772040 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6c6ceb6e-86fb-4658-93ed-8e66302f6396-server-conf\") pod \"rabbitmq-server-0\" (UID: \"6c6ceb6e-86fb-4658-93ed-8e66302f6396\") " pod="openstack/rabbitmq-server-0"
Jan 27 09:10:27 crc kubenswrapper[4985]: I0127 09:10:27.772081 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6c6ceb6e-86fb-4658-93ed-8e66302f6396-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"6c6ceb6e-86fb-4658-93ed-8e66302f6396\") " pod="openstack/rabbitmq-server-0"
Jan 27 09:10:27 crc kubenswrapper[4985]: I0127 09:10:27.772335 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6c6ceb6e-86fb-4658-93ed-8e66302f6396-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"6c6ceb6e-86fb-4658-93ed-8e66302f6396\") " pod="openstack/rabbitmq-server-0"
Jan 27 09:10:27 crc kubenswrapper[4985]: I0127 09:10:27.772423 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6c6ceb6e-86fb-4658-93ed-8e66302f6396-pod-info\") pod \"rabbitmq-server-0\" (UID: \"6c6ceb6e-86fb-4658-93ed-8e66302f6396\") " pod="openstack/rabbitmq-server-0"
Jan 27 09:10:27 crc kubenswrapper[4985]: I0127 09:10:27.772494 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6c6ceb6e-86fb-4658-93ed-8e66302f6396-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"6c6ceb6e-86fb-4658-93ed-8e66302f6396\") " pod="openstack/rabbitmq-server-0"
Jan 27 09:10:27 crc kubenswrapper[4985]: I0127 09:10:27.772559 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"6c6ceb6e-86fb-4658-93ed-8e66302f6396\") " pod="openstack/rabbitmq-server-0"
Jan 27 09:10:27 crc kubenswrapper[4985]: I0127 09:10:27.772582 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6c6ceb6e-86fb-4658-93ed-8e66302f6396-config-data\") pod \"rabbitmq-server-0\" (UID: \"6c6ceb6e-86fb-4658-93ed-8e66302f6396\") " pod="openstack/rabbitmq-server-0"
Jan 27 09:10:27 crc kubenswrapper[4985]: I0127 09:10:27.774009 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6c6ceb6e-86fb-4658-93ed-8e66302f6396-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"6c6ceb6e-86fb-4658-93ed-8e66302f6396\") " pod="openstack/rabbitmq-server-0"
Jan 27 09:10:27 crc kubenswrapper[4985]: I0127 09:10:27.774660 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6c6ceb6e-86fb-4658-93ed-8e66302f6396-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"6c6ceb6e-86fb-4658-93ed-8e66302f6396\") " pod="openstack/rabbitmq-server-0"
Jan 27 09:10:27 crc kubenswrapper[4985]: I0127 09:10:27.774923 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6c6ceb6e-86fb-4658-93ed-8e66302f6396-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"6c6ceb6e-86fb-4658-93ed-8e66302f6396\") " pod="openstack/rabbitmq-server-0"
Jan 27 09:10:27 crc kubenswrapper[4985]: I0127 09:10:27.775651 4985 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"6c6ceb6e-86fb-4658-93ed-8e66302f6396\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/rabbitmq-server-0"
Jan 27 09:10:27 crc kubenswrapper[4985]: I0127 09:10:27.776843 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6c6ceb6e-86fb-4658-93ed-8e66302f6396-config-data\") pod \"rabbitmq-server-0\" (UID: \"6c6ceb6e-86fb-4658-93ed-8e66302f6396\") " pod="openstack/rabbitmq-server-0"
Jan 27 09:10:27 crc kubenswrapper[4985]: I0127 09:10:27.777206 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6c6ceb6e-86fb-4658-93ed-8e66302f6396-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"6c6ceb6e-86fb-4658-93ed-8e66302f6396\") " pod="openstack/rabbitmq-server-0"
Jan 27 09:10:27 crc kubenswrapper[4985]: I0127 09:10:27.777933 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6c6ceb6e-86fb-4658-93ed-8e66302f6396-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"6c6ceb6e-86fb-4658-93ed-8e66302f6396\") " pod="openstack/rabbitmq-server-0"
Jan 27 09:10:27 crc kubenswrapper[4985]: I0127 09:10:27.783434 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6c6ceb6e-86fb-4658-93ed-8e66302f6396-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"6c6ceb6e-86fb-4658-93ed-8e66302f6396\") " pod="openstack/rabbitmq-server-0"
Jan 27 09:10:27 crc kubenswrapper[4985]: I0127 09:10:27.785973 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6c6ceb6e-86fb-4658-93ed-8e66302f6396-pod-info\") pod \"rabbitmq-server-0\" (UID: \"6c6ceb6e-86fb-4658-93ed-8e66302f6396\") " pod="openstack/rabbitmq-server-0"
Jan 27 09:10:27 crc kubenswrapper[4985]: I0127 09:10:27.794971 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwp59\" (UniqueName: \"kubernetes.io/projected/6c6ceb6e-86fb-4658-93ed-8e66302f6396-kube-api-access-fwp59\") pod \"rabbitmq-server-0\" (UID: \"6c6ceb6e-86fb-4658-93ed-8e66302f6396\") " pod="openstack/rabbitmq-server-0"
Jan 27 09:10:27 crc kubenswrapper[4985]: I0127 09:10:27.805009 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"6c6ceb6e-86fb-4658-93ed-8e66302f6396\") " pod="openstack/rabbitmq-server-0"
Jan 27 09:10:27 crc kubenswrapper[4985]: I0127 09:10:27.884056 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 27 09:10:27 crc kubenswrapper[4985]: I0127 09:10:27.886680 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 27 09:10:27 crc kubenswrapper[4985]: I0127 09:10:27.890137 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-86gqj"
Jan 27 09:10:27 crc kubenswrapper[4985]: I0127 09:10:27.890856 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Jan 27 09:10:27 crc kubenswrapper[4985]: I0127 09:10:27.890927 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Jan 27 09:10:27 crc kubenswrapper[4985]: I0127 09:10:27.891457 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Jan 27 09:10:27 crc kubenswrapper[4985]: I0127 09:10:27.891797 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Jan 27 09:10:27 crc kubenswrapper[4985]: I0127 09:10:27.891902 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Jan 27 09:10:27 crc kubenswrapper[4985]: I0127 09:10:27.894130 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Jan 27 09:10:27 crc kubenswrapper[4985]: I0127 09:10:27.897293 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 27 09:10:27 crc kubenswrapper[4985]: I0127 09:10:27.965122 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 27 09:10:27 crc kubenswrapper[4985]: I0127 09:10:27.978351 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1c3a6629-6ee9-4274-aa58-1880fd4ae268-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c3a6629-6ee9-4274-aa58-1880fd4ae268\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 09:10:27 crc kubenswrapper[4985]: I0127 09:10:27.978417 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c3a6629-6ee9-4274-aa58-1880fd4ae268\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 09:10:27 crc kubenswrapper[4985]: I0127 09:10:27.978440 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1c3a6629-6ee9-4274-aa58-1880fd4ae268-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c3a6629-6ee9-4274-aa58-1880fd4ae268\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 09:10:27 crc kubenswrapper[4985]: I0127 09:10:27.978465 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1c3a6629-6ee9-4274-aa58-1880fd4ae268-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c3a6629-6ee9-4274-aa58-1880fd4ae268\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 09:10:27 crc kubenswrapper[4985]: I0127 09:10:27.978486 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1c3a6629-6ee9-4274-aa58-1880fd4ae268-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c3a6629-6ee9-4274-aa58-1880fd4ae268\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 09:10:27 crc kubenswrapper[4985]: I0127 09:10:27.978503 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1c3a6629-6ee9-4274-aa58-1880fd4ae268-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c3a6629-6ee9-4274-aa58-1880fd4ae268\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 09:10:27 crc kubenswrapper[4985]: I0127 09:10:27.978537 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1c3a6629-6ee9-4274-aa58-1880fd4ae268-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c3a6629-6ee9-4274-aa58-1880fd4ae268\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 09:10:27 crc kubenswrapper[4985]: I0127 09:10:27.978577 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1c3a6629-6ee9-4274-aa58-1880fd4ae268-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c3a6629-6ee9-4274-aa58-1880fd4ae268\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 09:10:27 crc kubenswrapper[4985]: I0127 09:10:27.978618 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1c3a6629-6ee9-4274-aa58-1880fd4ae268-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c3a6629-6ee9-4274-aa58-1880fd4ae268\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 09:10:27 crc kubenswrapper[4985]: I0127 09:10:27.978634 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1c3a6629-6ee9-4274-aa58-1880fd4ae268-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c3a6629-6ee9-4274-aa58-1880fd4ae268\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 09:10:27 crc kubenswrapper[4985]: I0127 09:10:27.978648 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9sdq4\" (UniqueName: \"kubernetes.io/projected/1c3a6629-6ee9-4274-aa58-1880fd4ae268-kube-api-access-9sdq4\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c3a6629-6ee9-4274-aa58-1880fd4ae268\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 09:10:28 crc kubenswrapper[4985]: I0127 09:10:28.079688 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1c3a6629-6ee9-4274-aa58-1880fd4ae268-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c3a6629-6ee9-4274-aa58-1880fd4ae268\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 09:10:28 crc kubenswrapper[4985]: I0127 09:10:28.079830 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c3a6629-6ee9-4274-aa58-1880fd4ae268\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 09:10:28 crc kubenswrapper[4985]: I0127 09:10:28.079852 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1c3a6629-6ee9-4274-aa58-1880fd4ae268-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c3a6629-6ee9-4274-aa58-1880fd4ae268\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 09:10:28 crc kubenswrapper[4985]: I0127 09:10:28.079872 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1c3a6629-6ee9-4274-aa58-1880fd4ae268-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c3a6629-6ee9-4274-aa58-1880fd4ae268\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 09:10:28 crc kubenswrapper[4985]: I0127 09:10:28.079893 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1c3a6629-6ee9-4274-aa58-1880fd4ae268-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c3a6629-6ee9-4274-aa58-1880fd4ae268\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 09:10:28 crc kubenswrapper[4985]: I0127 09:10:28.079908 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1c3a6629-6ee9-4274-aa58-1880fd4ae268-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c3a6629-6ee9-4274-aa58-1880fd4ae268\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 09:10:28 crc kubenswrapper[4985]: I0127 09:10:28.079928 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1c3a6629-6ee9-4274-aa58-1880fd4ae268-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c3a6629-6ee9-4274-aa58-1880fd4ae268\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 09:10:28 crc kubenswrapper[4985]: I0127 09:10:28.079971 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1c3a6629-6ee9-4274-aa58-1880fd4ae268-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c3a6629-6ee9-4274-aa58-1880fd4ae268\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 09:10:28 crc kubenswrapper[4985]: I0127 09:10:28.080019 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1c3a6629-6ee9-4274-aa58-1880fd4ae268-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c3a6629-6ee9-4274-aa58-1880fd4ae268\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 09:10:28 crc kubenswrapper[4985]: I0127 09:10:28.080058 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1c3a6629-6ee9-4274-aa58-1880fd4ae268-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c3a6629-6ee9-4274-aa58-1880fd4ae268\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 09:10:28 crc kubenswrapper[4985]: I0127 09:10:28.080074 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9sdq4\" (UniqueName: \"kubernetes.io/projected/1c3a6629-6ee9-4274-aa58-1880fd4ae268-kube-api-access-9sdq4\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c3a6629-6ee9-4274-aa58-1880fd4ae268\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 09:10:28 crc kubenswrapper[4985]: I0127 09:10:28.081721 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1c3a6629-6ee9-4274-aa58-1880fd4ae268-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c3a6629-6ee9-4274-aa58-1880fd4ae268\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 09:10:28 crc kubenswrapper[4985]: I0127 09:10:28.082703 4985 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c3a6629-6ee9-4274-aa58-1880fd4ae268\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/rabbitmq-cell1-server-0"
Jan 27 09:10:28 crc kubenswrapper[4985]: I0127 09:10:28.083783 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1c3a6629-6ee9-4274-aa58-1880fd4ae268-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c3a6629-6ee9-4274-aa58-1880fd4ae268\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 09:10:28 crc kubenswrapper[4985]: I0127 09:10:28.089270 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1c3a6629-6ee9-4274-aa58-1880fd4ae268-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c3a6629-6ee9-4274-aa58-1880fd4ae268\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 09:10:28 crc kubenswrapper[4985]: I0127 09:10:28.089670 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1c3a6629-6ee9-4274-aa58-1880fd4ae268-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c3a6629-6ee9-4274-aa58-1880fd4ae268\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 09:10:28 crc kubenswrapper[4985]: I0127 09:10:28.092492 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1c3a6629-6ee9-4274-aa58-1880fd4ae268-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c3a6629-6ee9-4274-aa58-1880fd4ae268\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 09:10:28 crc kubenswrapper[4985]: I0127 09:10:28.094416 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1c3a6629-6ee9-4274-aa58-1880fd4ae268-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c3a6629-6ee9-4274-aa58-1880fd4ae268\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 09:10:28 crc kubenswrapper[4985]: I0127 09:10:28.098843 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1c3a6629-6ee9-4274-aa58-1880fd4ae268-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c3a6629-6ee9-4274-aa58-1880fd4ae268\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 09:10:28 crc kubenswrapper[4985]: I0127 09:10:28.102227 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1c3a6629-6ee9-4274-aa58-1880fd4ae268-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c3a6629-6ee9-4274-aa58-1880fd4ae268\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 09:10:28 crc kubenswrapper[4985]: I0127 09:10:28.115069 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9sdq4\" (UniqueName: \"kubernetes.io/projected/1c3a6629-6ee9-4274-aa58-1880fd4ae268-kube-api-access-9sdq4\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c3a6629-6ee9-4274-aa58-1880fd4ae268\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 09:10:28 crc kubenswrapper[4985]: I0127 09:10:28.116432 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1c3a6629-6ee9-4274-aa58-1880fd4ae268-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c3a6629-6ee9-4274-aa58-1880fd4ae268\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 09:10:28 crc kubenswrapper[4985]: I0127 09:10:28.128260 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c3a6629-6ee9-4274-aa58-1880fd4ae268\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 09:10:28 crc kubenswrapper[4985]: I0127 09:10:28.228895 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 27 09:10:28 crc kubenswrapper[4985]: I0127 09:10:28.284651 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-744ffd65bc-tdwsb" event={"ID":"1aede4b4-2eaf-445c-9c8c-5cc8839d9609","Type":"ContainerStarted","Data":"968fcaae34afdc0895ab7d8eada7897e858cac8dacb7008c2f5c0470f6723fa3"}
Jan 27 09:10:28 crc kubenswrapper[4985]: I0127 09:10:28.332251 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-svzqc" event={"ID":"d99c676e-d269-4d8a-a67a-71d591467b7c","Type":"ContainerStarted","Data":"95dca40623f40802187a4523ba4db1408b08b22f806f0a9510698a281cf542ab"}
Jan 27 09:10:28 crc kubenswrapper[4985]: I0127 09:10:28.731576 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 27 09:10:28 crc kubenswrapper[4985]: I0127 09:10:28.865023 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 27 09:10:29 crc kubenswrapper[4985]: I0127 09:10:29.060497 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"]
Jan 27 09:10:29 crc kubenswrapper[4985]: I0127 09:10:29.062120 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Jan 27 09:10:29 crc kubenswrapper[4985]: I0127 09:10:29.064737 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data"
Jan 27 09:10:29 crc kubenswrapper[4985]: I0127 09:10:29.064900 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-nst6q"
Jan 27 09:10:29 crc kubenswrapper[4985]: I0127 09:10:29.065012 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts"
Jan 27 09:10:29 crc kubenswrapper[4985]: I0127 09:10:29.065356 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc"
Jan 27 09:10:29 crc kubenswrapper[4985]: I0127 09:10:29.072984 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle"
Jan 27 09:10:29 crc kubenswrapper[4985]: I0127 09:10:29.084769 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Jan 27 09:10:29 crc kubenswrapper[4985]: I0127 09:10:29.199264 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba28b990-460d-4a2c-b9b5-73f24d9b3f9e-operator-scripts\") pod \"openstack-galera-0\" (UID: \"ba28b990-460d-4a2c-b9b5-73f24d9b3f9e\") " pod="openstack/openstack-galera-0"
Jan 27 09:10:29 crc kubenswrapper[4985]: I0127 09:10:29.199634 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ba28b990-460d-4a2c-b9b5-73f24d9b3f9e-kolla-config\") pod \"openstack-galera-0\" (UID: \"ba28b990-460d-4a2c-b9b5-73f24d9b3f9e\") " pod="openstack/openstack-galera-0"
Jan 27 09:10:29 crc kubenswrapper[4985]: I0127 09:10:29.199711 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba28b990-460d-4a2c-b9b5-73f24d9b3f9e-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"ba28b990-460d-4a2c-b9b5-73f24d9b3f9e\") " pod="openstack/openstack-galera-0"
Jan 27 09:10:29 crc kubenswrapper[4985]: I0127 09:10:29.199735 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ba28b990-460d-4a2c-b9b5-73f24d9b3f9e-config-data-default\") pod \"openstack-galera-0\" (UID: \"ba28b990-460d-4a2c-b9b5-73f24d9b3f9e\") " pod="openstack/openstack-galera-0"
Jan 27 09:10:29 crc kubenswrapper[4985]: I0127 09:10:29.199776 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ba28b990-460d-4a2c-b9b5-73f24d9b3f9e-config-data-generated\") pod \"openstack-galera-0\" (UID: \"ba28b990-460d-4a2c-b9b5-73f24d9b3f9e\") " pod="openstack/openstack-galera-0"
Jan 27 09:10:29 crc kubenswrapper[4985]: I0127 09:10:29.199807 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-galera-0\" (UID: \"ba28b990-460d-4a2c-b9b5-73f24d9b3f9e\") " pod="openstack/openstack-galera-0"
Jan 27 09:10:29 crc kubenswrapper[4985]: I0127 09:10:29.199826 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhbjp\" (UniqueName: \"kubernetes.io/projected/ba28b990-460d-4a2c-b9b5-73f24d9b3f9e-kube-api-access-lhbjp\") pod \"openstack-galera-0\" (UID: \"ba28b990-460d-4a2c-b9b5-73f24d9b3f9e\") " pod="openstack/openstack-galera-0"
Jan 27 09:10:29 crc kubenswrapper[4985]: I0127 09:10:29.199855 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba28b990-460d-4a2c-b9b5-73f24d9b3f9e-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"ba28b990-460d-4a2c-b9b5-73f24d9b3f9e\") " pod="openstack/openstack-galera-0"
Jan 27 09:10:29 crc kubenswrapper[4985]: I0127 09:10:29.300969 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba28b990-460d-4a2c-b9b5-73f24d9b3f9e-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"ba28b990-460d-4a2c-b9b5-73f24d9b3f9e\") " pod="openstack/openstack-galera-0"
Jan 27 09:10:29 crc kubenswrapper[4985]: I0127 09:10:29.301019 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ba28b990-460d-4a2c-b9b5-73f24d9b3f9e-config-data-default\") pod \"openstack-galera-0\" (UID: \"ba28b990-460d-4a2c-b9b5-73f24d9b3f9e\") " pod="openstack/openstack-galera-0"
Jan 27 09:10:29 crc kubenswrapper[4985]: I0127 09:10:29.301057 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ba28b990-460d-4a2c-b9b5-73f24d9b3f9e-config-data-generated\") pod \"openstack-galera-0\" (UID: \"ba28b990-460d-4a2c-b9b5-73f24d9b3f9e\") " pod="openstack/openstack-galera-0"
Jan 27 09:10:29 crc kubenswrapper[4985]: I0127 09:10:29.301093 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-galera-0\" (UID: \"ba28b990-460d-4a2c-b9b5-73f24d9b3f9e\") " pod="openstack/openstack-galera-0"
Jan 27 09:10:29 crc kubenswrapper[4985]: I0127 09:10:29.301111 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhbjp\" (UniqueName: \"kubernetes.io/projected/ba28b990-460d-4a2c-b9b5-73f24d9b3f9e-kube-api-access-lhbjp\") pod \"openstack-galera-0\" (UID: \"ba28b990-460d-4a2c-b9b5-73f24d9b3f9e\") " pod="openstack/openstack-galera-0"
Jan 27 09:10:29 crc kubenswrapper[4985]: I0127 09:10:29.301141 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba28b990-460d-4a2c-b9b5-73f24d9b3f9e-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"ba28b990-460d-4a2c-b9b5-73f24d9b3f9e\") " pod="openstack/openstack-galera-0"
Jan 27 09:10:29 crc kubenswrapper[4985]: I0127 09:10:29.301165 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba28b990-460d-4a2c-b9b5-73f24d9b3f9e-operator-scripts\") pod \"openstack-galera-0\" (UID: \"ba28b990-460d-4a2c-b9b5-73f24d9b3f9e\") " pod="openstack/openstack-galera-0"
Jan 27 09:10:29 crc kubenswrapper[4985]: I0127 09:10:29.301183 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ba28b990-460d-4a2c-b9b5-73f24d9b3f9e-kolla-config\") pod \"openstack-galera-0\" (UID: \"ba28b990-460d-4a2c-b9b5-73f24d9b3f9e\") " pod="openstack/openstack-galera-0"
Jan 27 09:10:29 crc kubenswrapper[4985]: I0127 09:10:29.302346 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ba28b990-460d-4a2c-b9b5-73f24d9b3f9e-kolla-config\") pod \"openstack-galera-0\" (UID: \"ba28b990-460d-4a2c-b9b5-73f24d9b3f9e\") " pod="openstack/openstack-galera-0"
Jan 27 09:10:29 crc kubenswrapper[4985]: I0127 09:10:29.304815 4985 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-galera-0\" (UID: \"ba28b990-460d-4a2c-b9b5-73f24d9b3f9e\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/openstack-galera-0"
Jan 27 09:10:29 crc kubenswrapper[4985]: I0127 09:10:29.305750 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba28b990-460d-4a2c-b9b5-73f24d9b3f9e-operator-scripts\") pod \"openstack-galera-0\" (UID: \"ba28b990-460d-4a2c-b9b5-73f24d9b3f9e\") " pod="openstack/openstack-galera-0"
Jan 27 09:10:29 crc kubenswrapper[4985]: I0127 09:10:29.306781 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ba28b990-460d-4a2c-b9b5-73f24d9b3f9e-config-data-default\") pod \"openstack-galera-0\" (UID: \"ba28b990-460d-4a2c-b9b5-73f24d9b3f9e\") " pod="openstack/openstack-galera-0"
Jan 27 09:10:29 crc kubenswrapper[4985]: I0127 09:10:29.307000 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ba28b990-460d-4a2c-b9b5-73f24d9b3f9e-config-data-generated\") pod \"openstack-galera-0\" (UID: \"ba28b990-460d-4a2c-b9b5-73f24d9b3f9e\") " pod="openstack/openstack-galera-0"
Jan 27 09:10:29 crc kubenswrapper[4985]: I0127 09:10:29.310850 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba28b990-460d-4a2c-b9b5-73f24d9b3f9e-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"ba28b990-460d-4a2c-b9b5-73f24d9b3f9e\") " pod="openstack/openstack-galera-0"
Jan 27 09:10:29 crc kubenswrapper[4985]: I0127 09:10:29.336855 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhbjp\" (UniqueName: \"kubernetes.io/projected/ba28b990-460d-4a2c-b9b5-73f24d9b3f9e-kube-api-access-lhbjp\") pod \"openstack-galera-0\" (UID: \"ba28b990-460d-4a2c-b9b5-73f24d9b3f9e\") " pod="openstack/openstack-galera-0"
Jan 27 09:10:29 crc kubenswrapper[4985]: I0127 09:10:29.338972 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName:
\"kubernetes.io/secret/ba28b990-460d-4a2c-b9b5-73f24d9b3f9e-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"ba28b990-460d-4a2c-b9b5-73f24d9b3f9e\") " pod="openstack/openstack-galera-0" Jan 27 09:10:29 crc kubenswrapper[4985]: I0127 09:10:29.370149 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-galera-0\" (UID: \"ba28b990-460d-4a2c-b9b5-73f24d9b3f9e\") " pod="openstack/openstack-galera-0" Jan 27 09:10:29 crc kubenswrapper[4985]: I0127 09:10:29.394000 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 27 09:10:30 crc kubenswrapper[4985]: I0127 09:10:30.519791 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 27 09:10:30 crc kubenswrapper[4985]: I0127 09:10:30.524543 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 27 09:10:30 crc kubenswrapper[4985]: I0127 09:10:30.530324 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 27 09:10:30 crc kubenswrapper[4985]: I0127 09:10:30.530442 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 27 09:10:30 crc kubenswrapper[4985]: I0127 09:10:30.531192 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-m9f62" Jan 27 09:10:30 crc kubenswrapper[4985]: I0127 09:10:30.532885 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 27 09:10:30 crc kubenswrapper[4985]: I0127 09:10:30.540650 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 27 09:10:30 crc kubenswrapper[4985]: I0127 09:10:30.728732 4985 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0bcbfae-acfe-4ef3-8b04-18f21c728fd6-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"a0bcbfae-acfe-4ef3-8b04-18f21c728fd6\") " pod="openstack/openstack-cell1-galera-0" Jan 27 09:10:30 crc kubenswrapper[4985]: I0127 09:10:30.729027 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a0bcbfae-acfe-4ef3-8b04-18f21c728fd6-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"a0bcbfae-acfe-4ef3-8b04-18f21c728fd6\") " pod="openstack/openstack-cell1-galera-0" Jan 27 09:10:30 crc kubenswrapper[4985]: I0127 09:10:30.729117 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0bcbfae-acfe-4ef3-8b04-18f21c728fd6-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"a0bcbfae-acfe-4ef3-8b04-18f21c728fd6\") " pod="openstack/openstack-cell1-galera-0" Jan 27 09:10:30 crc kubenswrapper[4985]: I0127 09:10:30.729178 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a0bcbfae-acfe-4ef3-8b04-18f21c728fd6-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"a0bcbfae-acfe-4ef3-8b04-18f21c728fd6\") " pod="openstack/openstack-cell1-galera-0" Jan 27 09:10:30 crc kubenswrapper[4985]: I0127 09:10:30.729201 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wghj7\" (UniqueName: \"kubernetes.io/projected/a0bcbfae-acfe-4ef3-8b04-18f21c728fd6-kube-api-access-wghj7\") pod \"openstack-cell1-galera-0\" (UID: \"a0bcbfae-acfe-4ef3-8b04-18f21c728fd6\") " pod="openstack/openstack-cell1-galera-0" Jan 27 09:10:30 crc kubenswrapper[4985]: I0127 09:10:30.729294 
4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a0bcbfae-acfe-4ef3-8b04-18f21c728fd6-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"a0bcbfae-acfe-4ef3-8b04-18f21c728fd6\") " pod="openstack/openstack-cell1-galera-0" Jan 27 09:10:30 crc kubenswrapper[4985]: I0127 09:10:30.729373 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a0bcbfae-acfe-4ef3-8b04-18f21c728fd6-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"a0bcbfae-acfe-4ef3-8b04-18f21c728fd6\") " pod="openstack/openstack-cell1-galera-0" Jan 27 09:10:30 crc kubenswrapper[4985]: I0127 09:10:30.729437 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"a0bcbfae-acfe-4ef3-8b04-18f21c728fd6\") " pod="openstack/openstack-cell1-galera-0" Jan 27 09:10:30 crc kubenswrapper[4985]: I0127 09:10:30.832379 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a0bcbfae-acfe-4ef3-8b04-18f21c728fd6-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"a0bcbfae-acfe-4ef3-8b04-18f21c728fd6\") " pod="openstack/openstack-cell1-galera-0" Jan 27 09:10:30 crc kubenswrapper[4985]: I0127 09:10:30.832433 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wghj7\" (UniqueName: \"kubernetes.io/projected/a0bcbfae-acfe-4ef3-8b04-18f21c728fd6-kube-api-access-wghj7\") pod \"openstack-cell1-galera-0\" (UID: \"a0bcbfae-acfe-4ef3-8b04-18f21c728fd6\") " pod="openstack/openstack-cell1-galera-0" Jan 27 09:10:30 crc kubenswrapper[4985]: I0127 09:10:30.832488 4985 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a0bcbfae-acfe-4ef3-8b04-18f21c728fd6-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"a0bcbfae-acfe-4ef3-8b04-18f21c728fd6\") " pod="openstack/openstack-cell1-galera-0" Jan 27 09:10:30 crc kubenswrapper[4985]: I0127 09:10:30.832531 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a0bcbfae-acfe-4ef3-8b04-18f21c728fd6-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"a0bcbfae-acfe-4ef3-8b04-18f21c728fd6\") " pod="openstack/openstack-cell1-galera-0" Jan 27 09:10:30 crc kubenswrapper[4985]: I0127 09:10:30.832556 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"a0bcbfae-acfe-4ef3-8b04-18f21c728fd6\") " pod="openstack/openstack-cell1-galera-0" Jan 27 09:10:30 crc kubenswrapper[4985]: I0127 09:10:30.832597 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0bcbfae-acfe-4ef3-8b04-18f21c728fd6-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"a0bcbfae-acfe-4ef3-8b04-18f21c728fd6\") " pod="openstack/openstack-cell1-galera-0" Jan 27 09:10:30 crc kubenswrapper[4985]: I0127 09:10:30.832983 4985 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"a0bcbfae-acfe-4ef3-8b04-18f21c728fd6\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/openstack-cell1-galera-0" Jan 27 09:10:30 crc kubenswrapper[4985]: I0127 09:10:30.832634 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a0bcbfae-acfe-4ef3-8b04-18f21c728fd6-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"a0bcbfae-acfe-4ef3-8b04-18f21c728fd6\") " pod="openstack/openstack-cell1-galera-0" Jan 27 09:10:30 crc kubenswrapper[4985]: I0127 09:10:30.833205 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0bcbfae-acfe-4ef3-8b04-18f21c728fd6-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"a0bcbfae-acfe-4ef3-8b04-18f21c728fd6\") " pod="openstack/openstack-cell1-galera-0" Jan 27 09:10:30 crc kubenswrapper[4985]: I0127 09:10:30.833478 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a0bcbfae-acfe-4ef3-8b04-18f21c728fd6-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"a0bcbfae-acfe-4ef3-8b04-18f21c728fd6\") " pod="openstack/openstack-cell1-galera-0" Jan 27 09:10:30 crc kubenswrapper[4985]: I0127 09:10:30.833946 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a0bcbfae-acfe-4ef3-8b04-18f21c728fd6-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"a0bcbfae-acfe-4ef3-8b04-18f21c728fd6\") " pod="openstack/openstack-cell1-galera-0" Jan 27 09:10:30 crc kubenswrapper[4985]: I0127 09:10:30.834036 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a0bcbfae-acfe-4ef3-8b04-18f21c728fd6-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"a0bcbfae-acfe-4ef3-8b04-18f21c728fd6\") " pod="openstack/openstack-cell1-galera-0" Jan 27 09:10:30 crc kubenswrapper[4985]: I0127 09:10:30.834204 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/a0bcbfae-acfe-4ef3-8b04-18f21c728fd6-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"a0bcbfae-acfe-4ef3-8b04-18f21c728fd6\") " pod="openstack/openstack-cell1-galera-0" Jan 27 09:10:30 crc kubenswrapper[4985]: I0127 09:10:30.862799 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0bcbfae-acfe-4ef3-8b04-18f21c728fd6-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"a0bcbfae-acfe-4ef3-8b04-18f21c728fd6\") " pod="openstack/openstack-cell1-galera-0" Jan 27 09:10:30 crc kubenswrapper[4985]: I0127 09:10:30.862893 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0bcbfae-acfe-4ef3-8b04-18f21c728fd6-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"a0bcbfae-acfe-4ef3-8b04-18f21c728fd6\") " pod="openstack/openstack-cell1-galera-0" Jan 27 09:10:30 crc kubenswrapper[4985]: I0127 09:10:30.865094 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"a0bcbfae-acfe-4ef3-8b04-18f21c728fd6\") " pod="openstack/openstack-cell1-galera-0" Jan 27 09:10:30 crc kubenswrapper[4985]: I0127 09:10:30.879306 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wghj7\" (UniqueName: \"kubernetes.io/projected/a0bcbfae-acfe-4ef3-8b04-18f21c728fd6-kube-api-access-wghj7\") pod \"openstack-cell1-galera-0\" (UID: \"a0bcbfae-acfe-4ef3-8b04-18f21c728fd6\") " pod="openstack/openstack-cell1-galera-0" Jan 27 09:10:30 crc kubenswrapper[4985]: I0127 09:10:30.910268 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 27 09:10:30 crc kubenswrapper[4985]: I0127 09:10:30.911197 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 27 09:10:30 crc kubenswrapper[4985]: I0127 09:10:30.913151 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 27 09:10:30 crc kubenswrapper[4985]: I0127 09:10:30.913471 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-pp2gt" Jan 27 09:10:30 crc kubenswrapper[4985]: I0127 09:10:30.913723 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 27 09:10:30 crc kubenswrapper[4985]: I0127 09:10:30.926802 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 27 09:10:31 crc kubenswrapper[4985]: I0127 09:10:31.035163 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjmq6\" (UniqueName: \"kubernetes.io/projected/eb24812f-9480-484a-9f96-22d35c1a63d2-kube-api-access-wjmq6\") pod \"memcached-0\" (UID: \"eb24812f-9480-484a-9f96-22d35c1a63d2\") " pod="openstack/memcached-0" Jan 27 09:10:31 crc kubenswrapper[4985]: I0127 09:10:31.035216 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/eb24812f-9480-484a-9f96-22d35c1a63d2-config-data\") pod \"memcached-0\" (UID: \"eb24812f-9480-484a-9f96-22d35c1a63d2\") " pod="openstack/memcached-0" Jan 27 09:10:31 crc kubenswrapper[4985]: I0127 09:10:31.035286 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb24812f-9480-484a-9f96-22d35c1a63d2-memcached-tls-certs\") pod \"memcached-0\" (UID: \"eb24812f-9480-484a-9f96-22d35c1a63d2\") " pod="openstack/memcached-0" Jan 27 09:10:31 crc kubenswrapper[4985]: I0127 09:10:31.035309 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb24812f-9480-484a-9f96-22d35c1a63d2-combined-ca-bundle\") pod \"memcached-0\" (UID: \"eb24812f-9480-484a-9f96-22d35c1a63d2\") " pod="openstack/memcached-0" Jan 27 09:10:31 crc kubenswrapper[4985]: I0127 09:10:31.035342 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/eb24812f-9480-484a-9f96-22d35c1a63d2-kolla-config\") pod \"memcached-0\" (UID: \"eb24812f-9480-484a-9f96-22d35c1a63d2\") " pod="openstack/memcached-0" Jan 27 09:10:31 crc kubenswrapper[4985]: I0127 09:10:31.137044 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb24812f-9480-484a-9f96-22d35c1a63d2-memcached-tls-certs\") pod \"memcached-0\" (UID: \"eb24812f-9480-484a-9f96-22d35c1a63d2\") " pod="openstack/memcached-0" Jan 27 09:10:31 crc kubenswrapper[4985]: I0127 09:10:31.137108 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb24812f-9480-484a-9f96-22d35c1a63d2-combined-ca-bundle\") pod \"memcached-0\" (UID: \"eb24812f-9480-484a-9f96-22d35c1a63d2\") " pod="openstack/memcached-0" Jan 27 09:10:31 crc kubenswrapper[4985]: I0127 09:10:31.137145 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/eb24812f-9480-484a-9f96-22d35c1a63d2-kolla-config\") pod \"memcached-0\" (UID: \"eb24812f-9480-484a-9f96-22d35c1a63d2\") " pod="openstack/memcached-0" Jan 27 09:10:31 crc kubenswrapper[4985]: I0127 09:10:31.137188 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/eb24812f-9480-484a-9f96-22d35c1a63d2-config-data\") pod \"memcached-0\" (UID: \"eb24812f-9480-484a-9f96-22d35c1a63d2\") " 
pod="openstack/memcached-0" Jan 27 09:10:31 crc kubenswrapper[4985]: I0127 09:10:31.137211 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjmq6\" (UniqueName: \"kubernetes.io/projected/eb24812f-9480-484a-9f96-22d35c1a63d2-kube-api-access-wjmq6\") pod \"memcached-0\" (UID: \"eb24812f-9480-484a-9f96-22d35c1a63d2\") " pod="openstack/memcached-0" Jan 27 09:10:31 crc kubenswrapper[4985]: I0127 09:10:31.138383 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/eb24812f-9480-484a-9f96-22d35c1a63d2-config-data\") pod \"memcached-0\" (UID: \"eb24812f-9480-484a-9f96-22d35c1a63d2\") " pod="openstack/memcached-0" Jan 27 09:10:31 crc kubenswrapper[4985]: I0127 09:10:31.138738 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/eb24812f-9480-484a-9f96-22d35c1a63d2-kolla-config\") pod \"memcached-0\" (UID: \"eb24812f-9480-484a-9f96-22d35c1a63d2\") " pod="openstack/memcached-0" Jan 27 09:10:31 crc kubenswrapper[4985]: I0127 09:10:31.143052 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb24812f-9480-484a-9f96-22d35c1a63d2-memcached-tls-certs\") pod \"memcached-0\" (UID: \"eb24812f-9480-484a-9f96-22d35c1a63d2\") " pod="openstack/memcached-0" Jan 27 09:10:31 crc kubenswrapper[4985]: I0127 09:10:31.159133 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb24812f-9480-484a-9f96-22d35c1a63d2-combined-ca-bundle\") pod \"memcached-0\" (UID: \"eb24812f-9480-484a-9f96-22d35c1a63d2\") " pod="openstack/memcached-0" Jan 27 09:10:31 crc kubenswrapper[4985]: I0127 09:10:31.163594 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 27 09:10:31 crc kubenswrapper[4985]: I0127 09:10:31.186197 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjmq6\" (UniqueName: \"kubernetes.io/projected/eb24812f-9480-484a-9f96-22d35c1a63d2-kube-api-access-wjmq6\") pod \"memcached-0\" (UID: \"eb24812f-9480-484a-9f96-22d35c1a63d2\") " pod="openstack/memcached-0" Jan 27 09:10:31 crc kubenswrapper[4985]: I0127 09:10:31.244130 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 27 09:10:32 crc kubenswrapper[4985]: I0127 09:10:32.577878 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 09:10:32 crc kubenswrapper[4985]: I0127 09:10:32.579577 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 27 09:10:32 crc kubenswrapper[4985]: I0127 09:10:32.583757 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-jtm6n" Jan 27 09:10:32 crc kubenswrapper[4985]: I0127 09:10:32.600721 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 09:10:32 crc kubenswrapper[4985]: I0127 09:10:32.786796 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfr28\" (UniqueName: \"kubernetes.io/projected/92c780bc-e214-4b55-9c3e-2a09b962ac83-kube-api-access-cfr28\") pod \"kube-state-metrics-0\" (UID: \"92c780bc-e214-4b55-9c3e-2a09b962ac83\") " pod="openstack/kube-state-metrics-0" Jan 27 09:10:32 crc kubenswrapper[4985]: I0127 09:10:32.889171 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfr28\" (UniqueName: \"kubernetes.io/projected/92c780bc-e214-4b55-9c3e-2a09b962ac83-kube-api-access-cfr28\") pod \"kube-state-metrics-0\" (UID: 
\"92c780bc-e214-4b55-9c3e-2a09b962ac83\") " pod="openstack/kube-state-metrics-0" Jan 27 09:10:32 crc kubenswrapper[4985]: I0127 09:10:32.907225 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfr28\" (UniqueName: \"kubernetes.io/projected/92c780bc-e214-4b55-9c3e-2a09b962ac83-kube-api-access-cfr28\") pod \"kube-state-metrics-0\" (UID: \"92c780bc-e214-4b55-9c3e-2a09b962ac83\") " pod="openstack/kube-state-metrics-0" Jan 27 09:10:33 crc kubenswrapper[4985]: I0127 09:10:33.203914 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 27 09:10:35 crc kubenswrapper[4985]: I0127 09:10:35.386328 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1c3a6629-6ee9-4274-aa58-1880fd4ae268","Type":"ContainerStarted","Data":"47b2bc497829bd544ccc25547098168477ad896767fd14e6a4ee5d37df163666"} Jan 27 09:10:35 crc kubenswrapper[4985]: I0127 09:10:35.388719 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6c6ceb6e-86fb-4658-93ed-8e66302f6396","Type":"ContainerStarted","Data":"af2849bdb115a5c32fd04d8be049596ef021d8ea9fa777707f2e97c0c0cc7363"} Jan 27 09:10:36 crc kubenswrapper[4985]: I0127 09:10:36.625117 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 27 09:10:36 crc kubenswrapper[4985]: I0127 09:10:36.626392 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 27 09:10:36 crc kubenswrapper[4985]: I0127 09:10:36.630093 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 27 09:10:36 crc kubenswrapper[4985]: I0127 09:10:36.630306 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 27 09:10:36 crc kubenswrapper[4985]: I0127 09:10:36.630493 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 27 09:10:36 crc kubenswrapper[4985]: I0127 09:10:36.630652 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-gldt7" Jan 27 09:10:36 crc kubenswrapper[4985]: I0127 09:10:36.630782 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 27 09:10:36 crc kubenswrapper[4985]: I0127 09:10:36.651438 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 27 09:10:36 crc kubenswrapper[4985]: I0127 09:10:36.756243 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"17e67cba-f1d0-4144-ace7-49373081babb\") " pod="openstack/ovsdbserver-nb-0" Jan 27 09:10:36 crc kubenswrapper[4985]: I0127 09:10:36.756331 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/17e67cba-f1d0-4144-ace7-49373081babb-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"17e67cba-f1d0-4144-ace7-49373081babb\") " pod="openstack/ovsdbserver-nb-0" Jan 27 09:10:36 crc kubenswrapper[4985]: I0127 09:10:36.756357 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/17e67cba-f1d0-4144-ace7-49373081babb-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"17e67cba-f1d0-4144-ace7-49373081babb\") " pod="openstack/ovsdbserver-nb-0" Jan 27 09:10:36 crc kubenswrapper[4985]: I0127 09:10:36.756380 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-842ct\" (UniqueName: \"kubernetes.io/projected/17e67cba-f1d0-4144-ace7-49373081babb-kube-api-access-842ct\") pod \"ovsdbserver-nb-0\" (UID: \"17e67cba-f1d0-4144-ace7-49373081babb\") " pod="openstack/ovsdbserver-nb-0" Jan 27 09:10:36 crc kubenswrapper[4985]: I0127 09:10:36.756398 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/17e67cba-f1d0-4144-ace7-49373081babb-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"17e67cba-f1d0-4144-ace7-49373081babb\") " pod="openstack/ovsdbserver-nb-0" Jan 27 09:10:36 crc kubenswrapper[4985]: I0127 09:10:36.756417 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17e67cba-f1d0-4144-ace7-49373081babb-config\") pod \"ovsdbserver-nb-0\" (UID: \"17e67cba-f1d0-4144-ace7-49373081babb\") " pod="openstack/ovsdbserver-nb-0" Jan 27 09:10:36 crc kubenswrapper[4985]: I0127 09:10:36.756433 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/17e67cba-f1d0-4144-ace7-49373081babb-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"17e67cba-f1d0-4144-ace7-49373081babb\") " pod="openstack/ovsdbserver-nb-0" Jan 27 09:10:36 crc kubenswrapper[4985]: I0127 09:10:36.756482 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/17e67cba-f1d0-4144-ace7-49373081babb-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"17e67cba-f1d0-4144-ace7-49373081babb\") " pod="openstack/ovsdbserver-nb-0" Jan 27 09:10:36 crc kubenswrapper[4985]: I0127 09:10:36.858791 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"17e67cba-f1d0-4144-ace7-49373081babb\") " pod="openstack/ovsdbserver-nb-0" Jan 27 09:10:36 crc kubenswrapper[4985]: I0127 09:10:36.858973 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/17e67cba-f1d0-4144-ace7-49373081babb-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"17e67cba-f1d0-4144-ace7-49373081babb\") " pod="openstack/ovsdbserver-nb-0" Jan 27 09:10:36 crc kubenswrapper[4985]: I0127 09:10:36.859067 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17e67cba-f1d0-4144-ace7-49373081babb-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"17e67cba-f1d0-4144-ace7-49373081babb\") " pod="openstack/ovsdbserver-nb-0" Jan 27 09:10:36 crc kubenswrapper[4985]: I0127 09:10:36.859129 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-842ct\" (UniqueName: \"kubernetes.io/projected/17e67cba-f1d0-4144-ace7-49373081babb-kube-api-access-842ct\") pod \"ovsdbserver-nb-0\" (UID: \"17e67cba-f1d0-4144-ace7-49373081babb\") " pod="openstack/ovsdbserver-nb-0" Jan 27 09:10:36 crc kubenswrapper[4985]: I0127 09:10:36.859160 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/17e67cba-f1d0-4144-ace7-49373081babb-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"17e67cba-f1d0-4144-ace7-49373081babb\") " 
pod="openstack/ovsdbserver-nb-0" Jan 27 09:10:36 crc kubenswrapper[4985]: I0127 09:10:36.859207 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17e67cba-f1d0-4144-ace7-49373081babb-config\") pod \"ovsdbserver-nb-0\" (UID: \"17e67cba-f1d0-4144-ace7-49373081babb\") " pod="openstack/ovsdbserver-nb-0" Jan 27 09:10:36 crc kubenswrapper[4985]: I0127 09:10:36.859238 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/17e67cba-f1d0-4144-ace7-49373081babb-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"17e67cba-f1d0-4144-ace7-49373081babb\") " pod="openstack/ovsdbserver-nb-0" Jan 27 09:10:36 crc kubenswrapper[4985]: I0127 09:10:36.859339 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/17e67cba-f1d0-4144-ace7-49373081babb-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"17e67cba-f1d0-4144-ace7-49373081babb\") " pod="openstack/ovsdbserver-nb-0" Jan 27 09:10:36 crc kubenswrapper[4985]: I0127 09:10:36.859349 4985 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"17e67cba-f1d0-4144-ace7-49373081babb\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/ovsdbserver-nb-0" Jan 27 09:10:36 crc kubenswrapper[4985]: I0127 09:10:36.860754 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/17e67cba-f1d0-4144-ace7-49373081babb-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"17e67cba-f1d0-4144-ace7-49373081babb\") " pod="openstack/ovsdbserver-nb-0" Jan 27 09:10:36 crc kubenswrapper[4985]: I0127 09:10:36.861797 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/17e67cba-f1d0-4144-ace7-49373081babb-config\") pod \"ovsdbserver-nb-0\" (UID: \"17e67cba-f1d0-4144-ace7-49373081babb\") " pod="openstack/ovsdbserver-nb-0" Jan 27 09:10:36 crc kubenswrapper[4985]: I0127 09:10:36.861954 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/17e67cba-f1d0-4144-ace7-49373081babb-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"17e67cba-f1d0-4144-ace7-49373081babb\") " pod="openstack/ovsdbserver-nb-0" Jan 27 09:10:36 crc kubenswrapper[4985]: I0127 09:10:36.867150 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/17e67cba-f1d0-4144-ace7-49373081babb-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"17e67cba-f1d0-4144-ace7-49373081babb\") " pod="openstack/ovsdbserver-nb-0" Jan 27 09:10:36 crc kubenswrapper[4985]: I0127 09:10:36.869174 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17e67cba-f1d0-4144-ace7-49373081babb-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"17e67cba-f1d0-4144-ace7-49373081babb\") " pod="openstack/ovsdbserver-nb-0" Jan 27 09:10:36 crc kubenswrapper[4985]: I0127 09:10:36.878957 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/17e67cba-f1d0-4144-ace7-49373081babb-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"17e67cba-f1d0-4144-ace7-49373081babb\") " pod="openstack/ovsdbserver-nb-0" Jan 27 09:10:36 crc kubenswrapper[4985]: I0127 09:10:36.888824 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-842ct\" (UniqueName: \"kubernetes.io/projected/17e67cba-f1d0-4144-ace7-49373081babb-kube-api-access-842ct\") pod \"ovsdbserver-nb-0\" (UID: \"17e67cba-f1d0-4144-ace7-49373081babb\") " 
pod="openstack/ovsdbserver-nb-0" Jan 27 09:10:36 crc kubenswrapper[4985]: I0127 09:10:36.891761 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"17e67cba-f1d0-4144-ace7-49373081babb\") " pod="openstack/ovsdbserver-nb-0" Jan 27 09:10:36 crc kubenswrapper[4985]: I0127 09:10:36.962407 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 27 09:10:37 crc kubenswrapper[4985]: I0127 09:10:37.261883 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-2zjxh"] Jan 27 09:10:37 crc kubenswrapper[4985]: I0127 09:10:37.262849 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-2zjxh" Jan 27 09:10:37 crc kubenswrapper[4985]: I0127 09:10:37.264731 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-595ss" Jan 27 09:10:37 crc kubenswrapper[4985]: I0127 09:10:37.267100 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 27 09:10:37 crc kubenswrapper[4985]: I0127 09:10:37.274000 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 27 09:10:37 crc kubenswrapper[4985]: I0127 09:10:37.276016 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-2zjxh"] Jan 27 09:10:37 crc kubenswrapper[4985]: I0127 09:10:37.368275 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2f52eee-3926-4ed2-9058-4e159f11a6cf-ovn-controller-tls-certs\") pod \"ovn-controller-2zjxh\" (UID: \"d2f52eee-3926-4ed2-9058-4e159f11a6cf\") " pod="openstack/ovn-controller-2zjxh" Jan 27 09:10:37 crc kubenswrapper[4985]: 
I0127 09:10:37.368363 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d2f52eee-3926-4ed2-9058-4e159f11a6cf-var-run-ovn\") pod \"ovn-controller-2zjxh\" (UID: \"d2f52eee-3926-4ed2-9058-4e159f11a6cf\") " pod="openstack/ovn-controller-2zjxh" Jan 27 09:10:37 crc kubenswrapper[4985]: I0127 09:10:37.368410 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d2f52eee-3926-4ed2-9058-4e159f11a6cf-var-run\") pod \"ovn-controller-2zjxh\" (UID: \"d2f52eee-3926-4ed2-9058-4e159f11a6cf\") " pod="openstack/ovn-controller-2zjxh" Jan 27 09:10:37 crc kubenswrapper[4985]: I0127 09:10:37.368460 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d2f52eee-3926-4ed2-9058-4e159f11a6cf-var-log-ovn\") pod \"ovn-controller-2zjxh\" (UID: \"d2f52eee-3926-4ed2-9058-4e159f11a6cf\") " pod="openstack/ovn-controller-2zjxh" Jan 27 09:10:37 crc kubenswrapper[4985]: I0127 09:10:37.368540 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwqq2\" (UniqueName: \"kubernetes.io/projected/d2f52eee-3926-4ed2-9058-4e159f11a6cf-kube-api-access-bwqq2\") pod \"ovn-controller-2zjxh\" (UID: \"d2f52eee-3926-4ed2-9058-4e159f11a6cf\") " pod="openstack/ovn-controller-2zjxh" Jan 27 09:10:37 crc kubenswrapper[4985]: I0127 09:10:37.368590 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d2f52eee-3926-4ed2-9058-4e159f11a6cf-scripts\") pod \"ovn-controller-2zjxh\" (UID: \"d2f52eee-3926-4ed2-9058-4e159f11a6cf\") " pod="openstack/ovn-controller-2zjxh" Jan 27 09:10:37 crc kubenswrapper[4985]: I0127 09:10:37.368620 4985 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2f52eee-3926-4ed2-9058-4e159f11a6cf-combined-ca-bundle\") pod \"ovn-controller-2zjxh\" (UID: \"d2f52eee-3926-4ed2-9058-4e159f11a6cf\") " pod="openstack/ovn-controller-2zjxh" Jan 27 09:10:37 crc kubenswrapper[4985]: I0127 09:10:37.396588 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-m82tc"] Jan 27 09:10:37 crc kubenswrapper[4985]: I0127 09:10:37.400130 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-m82tc" Jan 27 09:10:37 crc kubenswrapper[4985]: I0127 09:10:37.412581 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-m82tc"] Jan 27 09:10:37 crc kubenswrapper[4985]: I0127 09:10:37.469699 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d2f52eee-3926-4ed2-9058-4e159f11a6cf-var-log-ovn\") pod \"ovn-controller-2zjxh\" (UID: \"d2f52eee-3926-4ed2-9058-4e159f11a6cf\") " pod="openstack/ovn-controller-2zjxh" Jan 27 09:10:37 crc kubenswrapper[4985]: I0127 09:10:37.469798 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwqq2\" (UniqueName: \"kubernetes.io/projected/d2f52eee-3926-4ed2-9058-4e159f11a6cf-kube-api-access-bwqq2\") pod \"ovn-controller-2zjxh\" (UID: \"d2f52eee-3926-4ed2-9058-4e159f11a6cf\") " pod="openstack/ovn-controller-2zjxh" Jan 27 09:10:37 crc kubenswrapper[4985]: I0127 09:10:37.469844 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d2f52eee-3926-4ed2-9058-4e159f11a6cf-scripts\") pod \"ovn-controller-2zjxh\" (UID: \"d2f52eee-3926-4ed2-9058-4e159f11a6cf\") " pod="openstack/ovn-controller-2zjxh" Jan 27 09:10:37 crc kubenswrapper[4985]: I0127 09:10:37.469871 4985 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2f52eee-3926-4ed2-9058-4e159f11a6cf-combined-ca-bundle\") pod \"ovn-controller-2zjxh\" (UID: \"d2f52eee-3926-4ed2-9058-4e159f11a6cf\") " pod="openstack/ovn-controller-2zjxh" Jan 27 09:10:37 crc kubenswrapper[4985]: I0127 09:10:37.469946 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2f52eee-3926-4ed2-9058-4e159f11a6cf-ovn-controller-tls-certs\") pod \"ovn-controller-2zjxh\" (UID: \"d2f52eee-3926-4ed2-9058-4e159f11a6cf\") " pod="openstack/ovn-controller-2zjxh" Jan 27 09:10:37 crc kubenswrapper[4985]: I0127 09:10:37.469994 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d2f52eee-3926-4ed2-9058-4e159f11a6cf-var-run-ovn\") pod \"ovn-controller-2zjxh\" (UID: \"d2f52eee-3926-4ed2-9058-4e159f11a6cf\") " pod="openstack/ovn-controller-2zjxh" Jan 27 09:10:37 crc kubenswrapper[4985]: I0127 09:10:37.470023 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d2f52eee-3926-4ed2-9058-4e159f11a6cf-var-run\") pod \"ovn-controller-2zjxh\" (UID: \"d2f52eee-3926-4ed2-9058-4e159f11a6cf\") " pod="openstack/ovn-controller-2zjxh" Jan 27 09:10:37 crc kubenswrapper[4985]: I0127 09:10:37.470725 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d2f52eee-3926-4ed2-9058-4e159f11a6cf-var-run\") pod \"ovn-controller-2zjxh\" (UID: \"d2f52eee-3926-4ed2-9058-4e159f11a6cf\") " pod="openstack/ovn-controller-2zjxh" Jan 27 09:10:37 crc kubenswrapper[4985]: I0127 09:10:37.470860 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/d2f52eee-3926-4ed2-9058-4e159f11a6cf-var-run-ovn\") pod \"ovn-controller-2zjxh\" (UID: \"d2f52eee-3926-4ed2-9058-4e159f11a6cf\") " pod="openstack/ovn-controller-2zjxh" Jan 27 09:10:37 crc kubenswrapper[4985]: I0127 09:10:37.472889 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d2f52eee-3926-4ed2-9058-4e159f11a6cf-scripts\") pod \"ovn-controller-2zjxh\" (UID: \"d2f52eee-3926-4ed2-9058-4e159f11a6cf\") " pod="openstack/ovn-controller-2zjxh" Jan 27 09:10:37 crc kubenswrapper[4985]: I0127 09:10:37.472982 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d2f52eee-3926-4ed2-9058-4e159f11a6cf-var-log-ovn\") pod \"ovn-controller-2zjxh\" (UID: \"d2f52eee-3926-4ed2-9058-4e159f11a6cf\") " pod="openstack/ovn-controller-2zjxh" Jan 27 09:10:37 crc kubenswrapper[4985]: I0127 09:10:37.477869 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2f52eee-3926-4ed2-9058-4e159f11a6cf-combined-ca-bundle\") pod \"ovn-controller-2zjxh\" (UID: \"d2f52eee-3926-4ed2-9058-4e159f11a6cf\") " pod="openstack/ovn-controller-2zjxh" Jan 27 09:10:37 crc kubenswrapper[4985]: I0127 09:10:37.481367 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2f52eee-3926-4ed2-9058-4e159f11a6cf-ovn-controller-tls-certs\") pod \"ovn-controller-2zjxh\" (UID: \"d2f52eee-3926-4ed2-9058-4e159f11a6cf\") " pod="openstack/ovn-controller-2zjxh" Jan 27 09:10:37 crc kubenswrapper[4985]: I0127 09:10:37.490306 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwqq2\" (UniqueName: \"kubernetes.io/projected/d2f52eee-3926-4ed2-9058-4e159f11a6cf-kube-api-access-bwqq2\") pod \"ovn-controller-2zjxh\" (UID: \"d2f52eee-3926-4ed2-9058-4e159f11a6cf\") " 
pod="openstack/ovn-controller-2zjxh" Jan 27 09:10:37 crc kubenswrapper[4985]: I0127 09:10:37.571533 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/48fb2403-f98b-4166-9470-f95e5002e52d-var-lib\") pod \"ovn-controller-ovs-m82tc\" (UID: \"48fb2403-f98b-4166-9470-f95e5002e52d\") " pod="openstack/ovn-controller-ovs-m82tc" Jan 27 09:10:37 crc kubenswrapper[4985]: I0127 09:10:37.571906 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/48fb2403-f98b-4166-9470-f95e5002e52d-var-run\") pod \"ovn-controller-ovs-m82tc\" (UID: \"48fb2403-f98b-4166-9470-f95e5002e52d\") " pod="openstack/ovn-controller-ovs-m82tc" Jan 27 09:10:37 crc kubenswrapper[4985]: I0127 09:10:37.571936 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/48fb2403-f98b-4166-9470-f95e5002e52d-etc-ovs\") pod \"ovn-controller-ovs-m82tc\" (UID: \"48fb2403-f98b-4166-9470-f95e5002e52d\") " pod="openstack/ovn-controller-ovs-m82tc" Jan 27 09:10:37 crc kubenswrapper[4985]: I0127 09:10:37.571964 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/48fb2403-f98b-4166-9470-f95e5002e52d-var-log\") pod \"ovn-controller-ovs-m82tc\" (UID: \"48fb2403-f98b-4166-9470-f95e5002e52d\") " pod="openstack/ovn-controller-ovs-m82tc" Jan 27 09:10:37 crc kubenswrapper[4985]: I0127 09:10:37.571986 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfwmz\" (UniqueName: \"kubernetes.io/projected/48fb2403-f98b-4166-9470-f95e5002e52d-kube-api-access-rfwmz\") pod \"ovn-controller-ovs-m82tc\" (UID: \"48fb2403-f98b-4166-9470-f95e5002e52d\") " pod="openstack/ovn-controller-ovs-m82tc" Jan 
27 09:10:37 crc kubenswrapper[4985]: I0127 09:10:37.572171 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/48fb2403-f98b-4166-9470-f95e5002e52d-scripts\") pod \"ovn-controller-ovs-m82tc\" (UID: \"48fb2403-f98b-4166-9470-f95e5002e52d\") " pod="openstack/ovn-controller-ovs-m82tc" Jan 27 09:10:37 crc kubenswrapper[4985]: I0127 09:10:37.643575 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-2zjxh" Jan 27 09:10:37 crc kubenswrapper[4985]: I0127 09:10:37.673759 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/48fb2403-f98b-4166-9470-f95e5002e52d-var-lib\") pod \"ovn-controller-ovs-m82tc\" (UID: \"48fb2403-f98b-4166-9470-f95e5002e52d\") " pod="openstack/ovn-controller-ovs-m82tc" Jan 27 09:10:37 crc kubenswrapper[4985]: I0127 09:10:37.673829 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/48fb2403-f98b-4166-9470-f95e5002e52d-var-run\") pod \"ovn-controller-ovs-m82tc\" (UID: \"48fb2403-f98b-4166-9470-f95e5002e52d\") " pod="openstack/ovn-controller-ovs-m82tc" Jan 27 09:10:37 crc kubenswrapper[4985]: I0127 09:10:37.673858 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/48fb2403-f98b-4166-9470-f95e5002e52d-etc-ovs\") pod \"ovn-controller-ovs-m82tc\" (UID: \"48fb2403-f98b-4166-9470-f95e5002e52d\") " pod="openstack/ovn-controller-ovs-m82tc" Jan 27 09:10:37 crc kubenswrapper[4985]: I0127 09:10:37.673886 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/48fb2403-f98b-4166-9470-f95e5002e52d-var-log\") pod \"ovn-controller-ovs-m82tc\" (UID: \"48fb2403-f98b-4166-9470-f95e5002e52d\") " 
pod="openstack/ovn-controller-ovs-m82tc" Jan 27 09:10:37 crc kubenswrapper[4985]: I0127 09:10:37.673911 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfwmz\" (UniqueName: \"kubernetes.io/projected/48fb2403-f98b-4166-9470-f95e5002e52d-kube-api-access-rfwmz\") pod \"ovn-controller-ovs-m82tc\" (UID: \"48fb2403-f98b-4166-9470-f95e5002e52d\") " pod="openstack/ovn-controller-ovs-m82tc" Jan 27 09:10:37 crc kubenswrapper[4985]: I0127 09:10:37.673951 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/48fb2403-f98b-4166-9470-f95e5002e52d-scripts\") pod \"ovn-controller-ovs-m82tc\" (UID: \"48fb2403-f98b-4166-9470-f95e5002e52d\") " pod="openstack/ovn-controller-ovs-m82tc" Jan 27 09:10:37 crc kubenswrapper[4985]: I0127 09:10:37.673970 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/48fb2403-f98b-4166-9470-f95e5002e52d-var-run\") pod \"ovn-controller-ovs-m82tc\" (UID: \"48fb2403-f98b-4166-9470-f95e5002e52d\") " pod="openstack/ovn-controller-ovs-m82tc" Jan 27 09:10:37 crc kubenswrapper[4985]: I0127 09:10:37.674026 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/48fb2403-f98b-4166-9470-f95e5002e52d-var-lib\") pod \"ovn-controller-ovs-m82tc\" (UID: \"48fb2403-f98b-4166-9470-f95e5002e52d\") " pod="openstack/ovn-controller-ovs-m82tc" Jan 27 09:10:37 crc kubenswrapper[4985]: I0127 09:10:37.674131 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/48fb2403-f98b-4166-9470-f95e5002e52d-var-log\") pod \"ovn-controller-ovs-m82tc\" (UID: \"48fb2403-f98b-4166-9470-f95e5002e52d\") " pod="openstack/ovn-controller-ovs-m82tc" Jan 27 09:10:37 crc kubenswrapper[4985]: I0127 09:10:37.674197 4985 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/48fb2403-f98b-4166-9470-f95e5002e52d-etc-ovs\") pod \"ovn-controller-ovs-m82tc\" (UID: \"48fb2403-f98b-4166-9470-f95e5002e52d\") " pod="openstack/ovn-controller-ovs-m82tc" Jan 27 09:10:37 crc kubenswrapper[4985]: I0127 09:10:37.676032 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/48fb2403-f98b-4166-9470-f95e5002e52d-scripts\") pod \"ovn-controller-ovs-m82tc\" (UID: \"48fb2403-f98b-4166-9470-f95e5002e52d\") " pod="openstack/ovn-controller-ovs-m82tc" Jan 27 09:10:37 crc kubenswrapper[4985]: I0127 09:10:37.703219 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfwmz\" (UniqueName: \"kubernetes.io/projected/48fb2403-f98b-4166-9470-f95e5002e52d-kube-api-access-rfwmz\") pod \"ovn-controller-ovs-m82tc\" (UID: \"48fb2403-f98b-4166-9470-f95e5002e52d\") " pod="openstack/ovn-controller-ovs-m82tc" Jan 27 09:10:37 crc kubenswrapper[4985]: I0127 09:10:37.724363 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-m82tc" Jan 27 09:10:38 crc kubenswrapper[4985]: I0127 09:10:38.076453 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 09:10:40 crc kubenswrapper[4985]: I0127 09:10:40.019230 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 27 09:10:40 crc kubenswrapper[4985]: I0127 09:10:40.022029 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 27 09:10:40 crc kubenswrapper[4985]: I0127 09:10:40.025116 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 27 09:10:40 crc kubenswrapper[4985]: I0127 09:10:40.025396 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 27 09:10:40 crc kubenswrapper[4985]: I0127 09:10:40.025585 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 27 09:10:40 crc kubenswrapper[4985]: I0127 09:10:40.025772 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-l7t6m" Jan 27 09:10:40 crc kubenswrapper[4985]: I0127 09:10:40.045760 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 27 09:10:40 crc kubenswrapper[4985]: I0127 09:10:40.121925 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42f570db-f357-4a94-8895-25d887fc8d3c-config\") pod \"ovsdbserver-sb-0\" (UID: \"42f570db-f357-4a94-8895-25d887fc8d3c\") " pod="openstack/ovsdbserver-sb-0" Jan 27 09:10:40 crc kubenswrapper[4985]: I0127 09:10:40.122103 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42f570db-f357-4a94-8895-25d887fc8d3c-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"42f570db-f357-4a94-8895-25d887fc8d3c\") " pod="openstack/ovsdbserver-sb-0" Jan 27 09:10:40 crc kubenswrapper[4985]: I0127 09:10:40.122146 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/42f570db-f357-4a94-8895-25d887fc8d3c-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: 
\"42f570db-f357-4a94-8895-25d887fc8d3c\") " pod="openstack/ovsdbserver-sb-0" Jan 27 09:10:40 crc kubenswrapper[4985]: I0127 09:10:40.122230 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/42f570db-f357-4a94-8895-25d887fc8d3c-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"42f570db-f357-4a94-8895-25d887fc8d3c\") " pod="openstack/ovsdbserver-sb-0" Jan 27 09:10:40 crc kubenswrapper[4985]: I0127 09:10:40.122253 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/42f570db-f357-4a94-8895-25d887fc8d3c-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"42f570db-f357-4a94-8895-25d887fc8d3c\") " pod="openstack/ovsdbserver-sb-0" Jan 27 09:10:40 crc kubenswrapper[4985]: I0127 09:10:40.122287 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/42f570db-f357-4a94-8895-25d887fc8d3c-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"42f570db-f357-4a94-8895-25d887fc8d3c\") " pod="openstack/ovsdbserver-sb-0" Jan 27 09:10:40 crc kubenswrapper[4985]: I0127 09:10:40.122395 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s85g6\" (UniqueName: \"kubernetes.io/projected/42f570db-f357-4a94-8895-25d887fc8d3c-kube-api-access-s85g6\") pod \"ovsdbserver-sb-0\" (UID: \"42f570db-f357-4a94-8895-25d887fc8d3c\") " pod="openstack/ovsdbserver-sb-0" Jan 27 09:10:40 crc kubenswrapper[4985]: I0127 09:10:40.122444 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"42f570db-f357-4a94-8895-25d887fc8d3c\") " pod="openstack/ovsdbserver-sb-0" Jan 27 
09:10:40 crc kubenswrapper[4985]: I0127 09:10:40.223390 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42f570db-f357-4a94-8895-25d887fc8d3c-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"42f570db-f357-4a94-8895-25d887fc8d3c\") " pod="openstack/ovsdbserver-sb-0" Jan 27 09:10:40 crc kubenswrapper[4985]: I0127 09:10:40.223434 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/42f570db-f357-4a94-8895-25d887fc8d3c-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"42f570db-f357-4a94-8895-25d887fc8d3c\") " pod="openstack/ovsdbserver-sb-0" Jan 27 09:10:40 crc kubenswrapper[4985]: I0127 09:10:40.223480 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/42f570db-f357-4a94-8895-25d887fc8d3c-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"42f570db-f357-4a94-8895-25d887fc8d3c\") " pod="openstack/ovsdbserver-sb-0" Jan 27 09:10:40 crc kubenswrapper[4985]: I0127 09:10:40.223502 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/42f570db-f357-4a94-8895-25d887fc8d3c-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"42f570db-f357-4a94-8895-25d887fc8d3c\") " pod="openstack/ovsdbserver-sb-0" Jan 27 09:10:40 crc kubenswrapper[4985]: I0127 09:10:40.223538 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/42f570db-f357-4a94-8895-25d887fc8d3c-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"42f570db-f357-4a94-8895-25d887fc8d3c\") " pod="openstack/ovsdbserver-sb-0" Jan 27 09:10:40 crc kubenswrapper[4985]: I0127 09:10:40.223588 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-s85g6\" (UniqueName: \"kubernetes.io/projected/42f570db-f357-4a94-8895-25d887fc8d3c-kube-api-access-s85g6\") pod \"ovsdbserver-sb-0\" (UID: \"42f570db-f357-4a94-8895-25d887fc8d3c\") " pod="openstack/ovsdbserver-sb-0" Jan 27 09:10:40 crc kubenswrapper[4985]: I0127 09:10:40.223615 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"42f570db-f357-4a94-8895-25d887fc8d3c\") " pod="openstack/ovsdbserver-sb-0" Jan 27 09:10:40 crc kubenswrapper[4985]: I0127 09:10:40.223641 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42f570db-f357-4a94-8895-25d887fc8d3c-config\") pod \"ovsdbserver-sb-0\" (UID: \"42f570db-f357-4a94-8895-25d887fc8d3c\") " pod="openstack/ovsdbserver-sb-0" Jan 27 09:10:40 crc kubenswrapper[4985]: I0127 09:10:40.225298 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42f570db-f357-4a94-8895-25d887fc8d3c-config\") pod \"ovsdbserver-sb-0\" (UID: \"42f570db-f357-4a94-8895-25d887fc8d3c\") " pod="openstack/ovsdbserver-sb-0" Jan 27 09:10:40 crc kubenswrapper[4985]: I0127 09:10:40.225356 4985 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"42f570db-f357-4a94-8895-25d887fc8d3c\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/ovsdbserver-sb-0" Jan 27 09:10:40 crc kubenswrapper[4985]: I0127 09:10:40.225633 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/42f570db-f357-4a94-8895-25d887fc8d3c-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"42f570db-f357-4a94-8895-25d887fc8d3c\") " pod="openstack/ovsdbserver-sb-0" 
Jan 27 09:10:40 crc kubenswrapper[4985]: I0127 09:10:40.231272 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/42f570db-f357-4a94-8895-25d887fc8d3c-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"42f570db-f357-4a94-8895-25d887fc8d3c\") " pod="openstack/ovsdbserver-sb-0" Jan 27 09:10:40 crc kubenswrapper[4985]: I0127 09:10:40.231290 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/42f570db-f357-4a94-8895-25d887fc8d3c-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"42f570db-f357-4a94-8895-25d887fc8d3c\") " pod="openstack/ovsdbserver-sb-0" Jan 27 09:10:40 crc kubenswrapper[4985]: I0127 09:10:40.235488 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/42f570db-f357-4a94-8895-25d887fc8d3c-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"42f570db-f357-4a94-8895-25d887fc8d3c\") " pod="openstack/ovsdbserver-sb-0" Jan 27 09:10:40 crc kubenswrapper[4985]: I0127 09:10:40.238937 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42f570db-f357-4a94-8895-25d887fc8d3c-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"42f570db-f357-4a94-8895-25d887fc8d3c\") " pod="openstack/ovsdbserver-sb-0" Jan 27 09:10:40 crc kubenswrapper[4985]: I0127 09:10:40.246400 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s85g6\" (UniqueName: \"kubernetes.io/projected/42f570db-f357-4a94-8895-25d887fc8d3c-kube-api-access-s85g6\") pod \"ovsdbserver-sb-0\" (UID: \"42f570db-f357-4a94-8895-25d887fc8d3c\") " pod="openstack/ovsdbserver-sb-0" Jan 27 09:10:40 crc kubenswrapper[4985]: I0127 09:10:40.254293 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"42f570db-f357-4a94-8895-25d887fc8d3c\") " pod="openstack/ovsdbserver-sb-0" Jan 27 09:10:40 crc kubenswrapper[4985]: I0127 09:10:40.354736 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 27 09:10:41 crc kubenswrapper[4985]: I0127 09:10:41.828577 4985 patch_prober.go:28] interesting pod/machine-config-daemon-lp9n5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 09:10:41 crc kubenswrapper[4985]: I0127 09:10:41.828632 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 09:10:42 crc kubenswrapper[4985]: W0127 09:10:42.771783 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod92c780bc_e214_4b55_9c3e_2a09b962ac83.slice/crio-f38f14bacd59f039efbb7f85eddbe0923646ee9c98e9a9275433ea633025fb8e WatchSource:0}: Error finding container f38f14bacd59f039efbb7f85eddbe0923646ee9c98e9a9275433ea633025fb8e: Status 404 returned error can't find the container with id f38f14bacd59f039efbb7f85eddbe0923646ee9c98e9a9275433ea633025fb8e Jan 27 09:10:42 crc kubenswrapper[4985]: I0127 09:10:42.979153 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 27 09:10:43 crc kubenswrapper[4985]: I0127 09:10:43.467783 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" 
event={"ID":"92c780bc-e214-4b55-9c3e-2a09b962ac83","Type":"ContainerStarted","Data":"f38f14bacd59f039efbb7f85eddbe0923646ee9c98e9a9275433ea633025fb8e"} Jan 27 09:10:43 crc kubenswrapper[4985]: E0127 09:10:43.635466 4985 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Jan 27 09:10:43 crc kubenswrapper[4985]: E0127 09:10:43.635681 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mbfwl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-84bb9d8bd9-frk9p_openstack(4511a699-2594-42e1-a221-644ca46a947b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 09:10:43 crc kubenswrapper[4985]: E0127 09:10:43.637108 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-84bb9d8bd9-frk9p" podUID="4511a699-2594-42e1-a221-644ca46a947b" Jan 27 09:10:43 crc kubenswrapper[4985]: E0127 09:10:43.644430 4985 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Jan 27 09:10:43 crc kubenswrapper[4985]: E0127 09:10:43.644665 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kfvpt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},
},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-95f5f6995-svzqc_openstack(d99c676e-d269-4d8a-a67a-71d591467b7c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 09:10:43 crc kubenswrapper[4985]: E0127 09:10:43.646726 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-95f5f6995-svzqc" podUID="d99c676e-d269-4d8a-a67a-71d591467b7c" Jan 27 09:10:43 crc kubenswrapper[4985]: E0127 09:10:43.658539 4985 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Jan 27 09:10:43 crc kubenswrapper[4985]: E0127 09:10:43.658738 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug 
--bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q5dtw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5f854695bc-g64kg_openstack(77f0b8a0-1c09-4604-8516-d87c930e9b14): ErrImagePull: rpc error: code = Canceled desc = copying config: 
context canceled" logger="UnhandledError" Jan 27 09:10:43 crc kubenswrapper[4985]: E0127 09:10:43.660052 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-5f854695bc-g64kg" podUID="77f0b8a0-1c09-4604-8516-d87c930e9b14" Jan 27 09:10:43 crc kubenswrapper[4985]: E0127 09:10:43.665295 4985 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Jan 27 09:10:43 crc kubenswrapper[4985]: E0127 09:10:43.665448 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f9flj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-744ffd65bc-tdwsb_openstack(1aede4b4-2eaf-445c-9c8c-5cc8839d9609): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 09:10:43 crc kubenswrapper[4985]: E0127 09:10:43.667690 4985 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-744ffd65bc-tdwsb" podUID="1aede4b4-2eaf-445c-9c8c-5cc8839d9609" Jan 27 09:10:44 crc kubenswrapper[4985]: E0127 09:10:44.486635 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33\\\"\"" pod="openstack/dnsmasq-dns-744ffd65bc-tdwsb" podUID="1aede4b4-2eaf-445c-9c8c-5cc8839d9609" Jan 27 09:10:44 crc kubenswrapper[4985]: E0127 09:10:44.487584 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33\\\"\"" pod="openstack/dnsmasq-dns-95f5f6995-svzqc" podUID="d99c676e-d269-4d8a-a67a-71d591467b7c" Jan 27 09:10:44 crc kubenswrapper[4985]: W0127 09:10:44.915134 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podba28b990_460d_4a2c_b9b5_73f24d9b3f9e.slice/crio-50b49bf66480adfb2e0f33537d2611ecaf630927c28eba136a59fe94228bdbe0 WatchSource:0}: Error finding container 50b49bf66480adfb2e0f33537d2611ecaf630927c28eba136a59fe94228bdbe0: Status 404 returned error can't find the container with id 50b49bf66480adfb2e0f33537d2611ecaf630927c28eba136a59fe94228bdbe0 Jan 27 09:10:45 crc kubenswrapper[4985]: I0127 09:10:45.131620 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-frk9p" Jan 27 09:10:45 crc kubenswrapper[4985]: I0127 09:10:45.144499 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-g64kg" Jan 27 09:10:45 crc kubenswrapper[4985]: I0127 09:10:45.223432 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77f0b8a0-1c09-4604-8516-d87c930e9b14-config\") pod \"77f0b8a0-1c09-4604-8516-d87c930e9b14\" (UID: \"77f0b8a0-1c09-4604-8516-d87c930e9b14\") " Jan 27 09:10:45 crc kubenswrapper[4985]: I0127 09:10:45.223688 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5dtw\" (UniqueName: \"kubernetes.io/projected/77f0b8a0-1c09-4604-8516-d87c930e9b14-kube-api-access-q5dtw\") pod \"77f0b8a0-1c09-4604-8516-d87c930e9b14\" (UID: \"77f0b8a0-1c09-4604-8516-d87c930e9b14\") " Jan 27 09:10:45 crc kubenswrapper[4985]: I0127 09:10:45.223733 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/77f0b8a0-1c09-4604-8516-d87c930e9b14-dns-svc\") pod \"77f0b8a0-1c09-4604-8516-d87c930e9b14\" (UID: \"77f0b8a0-1c09-4604-8516-d87c930e9b14\") " Jan 27 09:10:45 crc kubenswrapper[4985]: I0127 09:10:45.223787 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4511a699-2594-42e1-a221-644ca46a947b-config\") pod \"4511a699-2594-42e1-a221-644ca46a947b\" (UID: \"4511a699-2594-42e1-a221-644ca46a947b\") " Jan 27 09:10:45 crc kubenswrapper[4985]: I0127 09:10:45.223829 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mbfwl\" (UniqueName: \"kubernetes.io/projected/4511a699-2594-42e1-a221-644ca46a947b-kube-api-access-mbfwl\") pod \"4511a699-2594-42e1-a221-644ca46a947b\" (UID: \"4511a699-2594-42e1-a221-644ca46a947b\") " Jan 27 09:10:45 crc kubenswrapper[4985]: I0127 09:10:45.226010 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/77f0b8a0-1c09-4604-8516-d87c930e9b14-config" (OuterVolumeSpecName: "config") pod "77f0b8a0-1c09-4604-8516-d87c930e9b14" (UID: "77f0b8a0-1c09-4604-8516-d87c930e9b14"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:10:45 crc kubenswrapper[4985]: I0127 09:10:45.226612 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77f0b8a0-1c09-4604-8516-d87c930e9b14-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "77f0b8a0-1c09-4604-8516-d87c930e9b14" (UID: "77f0b8a0-1c09-4604-8516-d87c930e9b14"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:10:45 crc kubenswrapper[4985]: I0127 09:10:45.227017 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4511a699-2594-42e1-a221-644ca46a947b-config" (OuterVolumeSpecName: "config") pod "4511a699-2594-42e1-a221-644ca46a947b" (UID: "4511a699-2594-42e1-a221-644ca46a947b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:10:45 crc kubenswrapper[4985]: I0127 09:10:45.232348 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4511a699-2594-42e1-a221-644ca46a947b-kube-api-access-mbfwl" (OuterVolumeSpecName: "kube-api-access-mbfwl") pod "4511a699-2594-42e1-a221-644ca46a947b" (UID: "4511a699-2594-42e1-a221-644ca46a947b"). InnerVolumeSpecName "kube-api-access-mbfwl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:10:45 crc kubenswrapper[4985]: I0127 09:10:45.235939 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77f0b8a0-1c09-4604-8516-d87c930e9b14-kube-api-access-q5dtw" (OuterVolumeSpecName: "kube-api-access-q5dtw") pod "77f0b8a0-1c09-4604-8516-d87c930e9b14" (UID: "77f0b8a0-1c09-4604-8516-d87c930e9b14"). InnerVolumeSpecName "kube-api-access-q5dtw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:10:45 crc kubenswrapper[4985]: I0127 09:10:45.288392 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 27 09:10:45 crc kubenswrapper[4985]: I0127 09:10:45.326445 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q5dtw\" (UniqueName: \"kubernetes.io/projected/77f0b8a0-1c09-4604-8516-d87c930e9b14-kube-api-access-q5dtw\") on node \"crc\" DevicePath \"\"" Jan 27 09:10:45 crc kubenswrapper[4985]: I0127 09:10:45.326489 4985 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/77f0b8a0-1c09-4604-8516-d87c930e9b14-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 09:10:45 crc kubenswrapper[4985]: I0127 09:10:45.326504 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4511a699-2594-42e1-a221-644ca46a947b-config\") on node \"crc\" DevicePath \"\"" Jan 27 09:10:45 crc kubenswrapper[4985]: I0127 09:10:45.326537 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mbfwl\" (UniqueName: \"kubernetes.io/projected/4511a699-2594-42e1-a221-644ca46a947b-kube-api-access-mbfwl\") on node \"crc\" DevicePath \"\"" Jan 27 09:10:45 crc kubenswrapper[4985]: I0127 09:10:45.326549 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77f0b8a0-1c09-4604-8516-d87c930e9b14-config\") on node \"crc\" DevicePath \"\"" Jan 27 09:10:45 crc kubenswrapper[4985]: I0127 09:10:45.490258 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84bb9d8bd9-frk9p" event={"ID":"4511a699-2594-42e1-a221-644ca46a947b","Type":"ContainerDied","Data":"e7d5bcde5d4cb94f91f425d6c7a2c88e513a1b55815b5bb9bb8e290b11f3cabd"} Jan 27 09:10:45 crc kubenswrapper[4985]: I0127 09:10:45.490293 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-frk9p" Jan 27 09:10:45 crc kubenswrapper[4985]: I0127 09:10:45.493503 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-g64kg" Jan 27 09:10:45 crc kubenswrapper[4985]: I0127 09:10:45.493538 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f854695bc-g64kg" event={"ID":"77f0b8a0-1c09-4604-8516-d87c930e9b14","Type":"ContainerDied","Data":"d63a2b96813d76db26ce52fb0ffa5dfeb990ac3011039cfd3a5301a17d2f6534"} Jan 27 09:10:45 crc kubenswrapper[4985]: I0127 09:10:45.504836 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"eb24812f-9480-484a-9f96-22d35c1a63d2","Type":"ContainerStarted","Data":"d5b34db19e661e7ecb557b5215745bd63701948bd2ac94cf9ed6d2f855222e20"} Jan 27 09:10:45 crc kubenswrapper[4985]: I0127 09:10:45.516892 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"ba28b990-460d-4a2c-b9b5-73f24d9b3f9e","Type":"ContainerStarted","Data":"50b49bf66480adfb2e0f33537d2611ecaf630927c28eba136a59fe94228bdbe0"} Jan 27 09:10:45 crc kubenswrapper[4985]: I0127 09:10:45.608638 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-frk9p"] Jan 27 09:10:45 crc kubenswrapper[4985]: I0127 09:10:45.671630 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-frk9p"] Jan 27 09:10:45 crc kubenswrapper[4985]: I0127 09:10:45.683580 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 27 09:10:45 crc kubenswrapper[4985]: I0127 09:10:45.746248 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-g64kg"] Jan 27 09:10:45 crc kubenswrapper[4985]: I0127 09:10:45.793113 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-g64kg"] Jan 27 
09:10:45 crc kubenswrapper[4985]: I0127 09:10:45.816646 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-2zjxh"] Jan 27 09:10:45 crc kubenswrapper[4985]: I0127 09:10:45.834283 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 27 09:10:45 crc kubenswrapper[4985]: W0127 09:10:45.897896 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42f570db_f357_4a94_8895_25d887fc8d3c.slice/crio-819a66326448864dc9efa196dcad82a0a22205db1bf26de65021354e7fc7499c WatchSource:0}: Error finding container 819a66326448864dc9efa196dcad82a0a22205db1bf26de65021354e7fc7499c: Status 404 returned error can't find the container with id 819a66326448864dc9efa196dcad82a0a22205db1bf26de65021354e7fc7499c Jan 27 09:10:45 crc kubenswrapper[4985]: I0127 09:10:45.904972 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 27 09:10:46 crc kubenswrapper[4985]: I0127 09:10:46.466719 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4511a699-2594-42e1-a221-644ca46a947b" path="/var/lib/kubelet/pods/4511a699-2594-42e1-a221-644ca46a947b/volumes" Jan 27 09:10:46 crc kubenswrapper[4985]: I0127 09:10:46.467899 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77f0b8a0-1c09-4604-8516-d87c930e9b14" path="/var/lib/kubelet/pods/77f0b8a0-1c09-4604-8516-d87c930e9b14/volumes" Jan 27 09:10:46 crc kubenswrapper[4985]: I0127 09:10:46.524694 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"42f570db-f357-4a94-8895-25d887fc8d3c","Type":"ContainerStarted","Data":"819a66326448864dc9efa196dcad82a0a22205db1bf26de65021354e7fc7499c"} Jan 27 09:10:46 crc kubenswrapper[4985]: I0127 09:10:46.526089 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-2zjxh" 
event={"ID":"d2f52eee-3926-4ed2-9058-4e159f11a6cf","Type":"ContainerStarted","Data":"1e064ec70f0fd809dc18bd010208f98777846af883728dfe04172852924136a5"} Jan 27 09:10:46 crc kubenswrapper[4985]: I0127 09:10:46.527058 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"17e67cba-f1d0-4144-ace7-49373081babb","Type":"ContainerStarted","Data":"93357884f9eb71b057497546e27efb6f16cbb289954ced53d505398c5f44a057"} Jan 27 09:10:46 crc kubenswrapper[4985]: I0127 09:10:46.529038 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a0bcbfae-acfe-4ef3-8b04-18f21c728fd6","Type":"ContainerStarted","Data":"bc14d349e73f59e01fbf4b19afa02a6e4f4d2b6aed63241416063cd6f2030f41"} Jan 27 09:10:46 crc kubenswrapper[4985]: I0127 09:10:46.687475 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-m82tc"] Jan 27 09:10:46 crc kubenswrapper[4985]: W0127 09:10:46.691778 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod48fb2403_f98b_4166_9470_f95e5002e52d.slice/crio-afbcce22f7c76fc8ed4562cfbf4da1bb8ff668263570c5a63d3e48f7bb2456cc WatchSource:0}: Error finding container afbcce22f7c76fc8ed4562cfbf4da1bb8ff668263570c5a63d3e48f7bb2456cc: Status 404 returned error can't find the container with id afbcce22f7c76fc8ed4562cfbf4da1bb8ff668263570c5a63d3e48f7bb2456cc Jan 27 09:10:47 crc kubenswrapper[4985]: I0127 09:10:47.538070 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-m82tc" event={"ID":"48fb2403-f98b-4166-9470-f95e5002e52d","Type":"ContainerStarted","Data":"afbcce22f7c76fc8ed4562cfbf4da1bb8ff668263570c5a63d3e48f7bb2456cc"} Jan 27 09:10:49 crc kubenswrapper[4985]: I0127 09:10:49.572948 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"1c3a6629-6ee9-4274-aa58-1880fd4ae268","Type":"ContainerStarted","Data":"b8cbb52e43286d41a8fc0f6dd52e4a0a4af64d7ac504aaa9ff6dd5929b0db17e"} Jan 27 09:10:49 crc kubenswrapper[4985]: I0127 09:10:49.576324 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6c6ceb6e-86fb-4658-93ed-8e66302f6396","Type":"ContainerStarted","Data":"cea1414e3344dd8ffd89d82148d82d04e5425f1dd069adc9bd7855c688b77608"} Jan 27 09:10:55 crc kubenswrapper[4985]: I0127 09:10:55.639498 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-2zjxh" event={"ID":"d2f52eee-3926-4ed2-9058-4e159f11a6cf","Type":"ContainerStarted","Data":"4e0978fee56891d5d287d1e2a6d94c712f25f92617c659b22087c32da499ce7d"} Jan 27 09:10:55 crc kubenswrapper[4985]: I0127 09:10:55.640119 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-2zjxh" Jan 27 09:10:55 crc kubenswrapper[4985]: I0127 09:10:55.640840 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"17e67cba-f1d0-4144-ace7-49373081babb","Type":"ContainerStarted","Data":"c17b6a167532a5344fedb9e38c91637b82edc3752d2f83680b1fa3b81f4e9e28"} Jan 27 09:10:55 crc kubenswrapper[4985]: I0127 09:10:55.642932 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"92c780bc-e214-4b55-9c3e-2a09b962ac83","Type":"ContainerStarted","Data":"8f5ced909e7f8f926c31ebaa5d6bb345bfb35177b7c6748db74dfcbb60700961"} Jan 27 09:10:55 crc kubenswrapper[4985]: I0127 09:10:55.643058 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 27 09:10:55 crc kubenswrapper[4985]: I0127 09:10:55.646373 4985 generic.go:334] "Generic (PLEG): container finished" podID="48fb2403-f98b-4166-9470-f95e5002e52d" containerID="8800d3216777717a7bf93a06721a8bcf6217833b989eada4027e3b07aeb41f3b" exitCode=0 Jan 27 09:10:55 crc 
kubenswrapper[4985]: I0127 09:10:55.646609 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-m82tc" event={"ID":"48fb2403-f98b-4166-9470-f95e5002e52d","Type":"ContainerDied","Data":"8800d3216777717a7bf93a06721a8bcf6217833b989eada4027e3b07aeb41f3b"} Jan 27 09:10:55 crc kubenswrapper[4985]: I0127 09:10:55.648171 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a0bcbfae-acfe-4ef3-8b04-18f21c728fd6","Type":"ContainerStarted","Data":"4f17a919d41a437b91d8d389eb673c17d513014a0164d50f5fd3bfcd3e98fc1c"} Jan 27 09:10:55 crc kubenswrapper[4985]: I0127 09:10:55.651186 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"eb24812f-9480-484a-9f96-22d35c1a63d2","Type":"ContainerStarted","Data":"a67597038e635060eb6beced4b3d0fe6475157e450bc1096e3a98ae08d36e96a"} Jan 27 09:10:55 crc kubenswrapper[4985]: I0127 09:10:55.651315 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 27 09:10:55 crc kubenswrapper[4985]: I0127 09:10:55.654163 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"ba28b990-460d-4a2c-b9b5-73f24d9b3f9e","Type":"ContainerStarted","Data":"a04f6e6c816b807655499e2900b04e46ccb32fa9d89332e4d6167b356bd289b4"} Jan 27 09:10:55 crc kubenswrapper[4985]: I0127 09:10:55.666380 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"42f570db-f357-4a94-8895-25d887fc8d3c","Type":"ContainerStarted","Data":"5c9146d943a0ad54b5a342002735c92aef15ae47447cfad24c15a0298b89975e"} Jan 27 09:10:55 crc kubenswrapper[4985]: I0127 09:10:55.678885 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-2zjxh" podStartSLOduration=10.202090827 podStartE2EDuration="18.678856927s" podCreationTimestamp="2026-01-27 09:10:37 +0000 UTC" firstStartedPulling="2026-01-27 
09:10:45.792832936 +0000 UTC m=+1030.083927777" lastFinishedPulling="2026-01-27 09:10:54.269599036 +0000 UTC m=+1038.560693877" observedRunningTime="2026-01-27 09:10:55.65887607 +0000 UTC m=+1039.949970921" watchObservedRunningTime="2026-01-27 09:10:55.678856927 +0000 UTC m=+1039.969951778" Jan 27 09:10:55 crc kubenswrapper[4985]: I0127 09:10:55.693037 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=17.018522247 podStartE2EDuration="25.693004245s" podCreationTimestamp="2026-01-27 09:10:30 +0000 UTC" firstStartedPulling="2026-01-27 09:10:45.307472385 +0000 UTC m=+1029.598567226" lastFinishedPulling="2026-01-27 09:10:53.981954383 +0000 UTC m=+1038.273049224" observedRunningTime="2026-01-27 09:10:55.677665394 +0000 UTC m=+1039.968760265" watchObservedRunningTime="2026-01-27 09:10:55.693004245 +0000 UTC m=+1039.984099086" Jan 27 09:10:55 crc kubenswrapper[4985]: I0127 09:10:55.778961 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=11.784753874 podStartE2EDuration="23.77892841s" podCreationTimestamp="2026-01-27 09:10:32 +0000 UTC" firstStartedPulling="2026-01-27 09:10:42.783783331 +0000 UTC m=+1027.074878182" lastFinishedPulling="2026-01-27 09:10:54.777957877 +0000 UTC m=+1039.069052718" observedRunningTime="2026-01-27 09:10:55.771019993 +0000 UTC m=+1040.062114844" watchObservedRunningTime="2026-01-27 09:10:55.77892841 +0000 UTC m=+1040.070023251" Jan 27 09:10:56 crc kubenswrapper[4985]: I0127 09:10:56.679047 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-m82tc" event={"ID":"48fb2403-f98b-4166-9470-f95e5002e52d","Type":"ContainerStarted","Data":"abefa1daaf654dd5d2b5197f5ee98ab5d1724d971ee80a7c9d34340bc4be54bf"} Jan 27 09:10:56 crc kubenswrapper[4985]: I0127 09:10:56.679386 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-m82tc" 
event={"ID":"48fb2403-f98b-4166-9470-f95e5002e52d","Type":"ContainerStarted","Data":"bd3558191ec572588e299a10478902af22dcf973b776162dbeb396d45e52aed9"} Jan 27 09:10:56 crc kubenswrapper[4985]: I0127 09:10:56.710127 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-m82tc" podStartSLOduration=12.231914596 podStartE2EDuration="19.710100049s" podCreationTimestamp="2026-01-27 09:10:37 +0000 UTC" firstStartedPulling="2026-01-27 09:10:46.694774824 +0000 UTC m=+1030.985869665" lastFinishedPulling="2026-01-27 09:10:54.172960277 +0000 UTC m=+1038.464055118" observedRunningTime="2026-01-27 09:10:56.701778481 +0000 UTC m=+1040.992873322" watchObservedRunningTime="2026-01-27 09:10:56.710100049 +0000 UTC m=+1041.001194900" Jan 27 09:10:57 crc kubenswrapper[4985]: I0127 09:10:57.701707 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-m82tc" Jan 27 09:10:57 crc kubenswrapper[4985]: I0127 09:10:57.702006 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-m82tc" Jan 27 09:10:59 crc kubenswrapper[4985]: I0127 09:10:59.734265 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"42f570db-f357-4a94-8895-25d887fc8d3c","Type":"ContainerStarted","Data":"58683fbc9eb61fdb24f29305a5f2674a65f86d6fa74b16cd3c0153b74546fe8c"} Jan 27 09:10:59 crc kubenswrapper[4985]: I0127 09:10:59.737369 4985 generic.go:334] "Generic (PLEG): container finished" podID="1aede4b4-2eaf-445c-9c8c-5cc8839d9609" containerID="2a06cd6a61bf01225cd70dfb61cec9ece4a8bf5fc27f003cca533eccd878ec8c" exitCode=0 Jan 27 09:10:59 crc kubenswrapper[4985]: I0127 09:10:59.737462 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-744ffd65bc-tdwsb" event={"ID":"1aede4b4-2eaf-445c-9c8c-5cc8839d9609","Type":"ContainerDied","Data":"2a06cd6a61bf01225cd70dfb61cec9ece4a8bf5fc27f003cca533eccd878ec8c"} Jan 27 
09:10:59 crc kubenswrapper[4985]: I0127 09:10:59.742106 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"17e67cba-f1d0-4144-ace7-49373081babb","Type":"ContainerStarted","Data":"9ee9ee7e4ebbb8154796eb930a2ac67f7981d5d92827c5a277011a2e54b98d59"} Jan 27 09:10:59 crc kubenswrapper[4985]: I0127 09:10:59.762895 4985 generic.go:334] "Generic (PLEG): container finished" podID="d99c676e-d269-4d8a-a67a-71d591467b7c" containerID="90e2ef79a122e12e80eeb94c0701f25faf9ec3f4ec503dc977c8f81e011c80bc" exitCode=0 Jan 27 09:10:59 crc kubenswrapper[4985]: I0127 09:10:59.763266 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-svzqc" event={"ID":"d99c676e-d269-4d8a-a67a-71d591467b7c","Type":"ContainerDied","Data":"90e2ef79a122e12e80eeb94c0701f25faf9ec3f4ec503dc977c8f81e011c80bc"} Jan 27 09:10:59 crc kubenswrapper[4985]: I0127 09:10:59.768618 4985 generic.go:334] "Generic (PLEG): container finished" podID="a0bcbfae-acfe-4ef3-8b04-18f21c728fd6" containerID="4f17a919d41a437b91d8d389eb673c17d513014a0164d50f5fd3bfcd3e98fc1c" exitCode=0 Jan 27 09:10:59 crc kubenswrapper[4985]: I0127 09:10:59.768883 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a0bcbfae-acfe-4ef3-8b04-18f21c728fd6","Type":"ContainerDied","Data":"4f17a919d41a437b91d8d389eb673c17d513014a0164d50f5fd3bfcd3e98fc1c"} Jan 27 09:10:59 crc kubenswrapper[4985]: I0127 09:10:59.774140 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=9.055095219 podStartE2EDuration="21.77411407s" podCreationTimestamp="2026-01-27 09:10:38 +0000 UTC" firstStartedPulling="2026-01-27 09:10:45.900814085 +0000 UTC m=+1030.191908926" lastFinishedPulling="2026-01-27 09:10:58.619832896 +0000 UTC m=+1042.910927777" observedRunningTime="2026-01-27 09:10:59.768956739 +0000 UTC m=+1044.060051590" watchObservedRunningTime="2026-01-27 
09:10:59.77411407 +0000 UTC m=+1044.065208911" Jan 27 09:10:59 crc kubenswrapper[4985]: I0127 09:10:59.780367 4985 generic.go:334] "Generic (PLEG): container finished" podID="ba28b990-460d-4a2c-b9b5-73f24d9b3f9e" containerID="a04f6e6c816b807655499e2900b04e46ccb32fa9d89332e4d6167b356bd289b4" exitCode=0 Jan 27 09:10:59 crc kubenswrapper[4985]: I0127 09:10:59.780405 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"ba28b990-460d-4a2c-b9b5-73f24d9b3f9e","Type":"ContainerDied","Data":"a04f6e6c816b807655499e2900b04e46ccb32fa9d89332e4d6167b356bd289b4"} Jan 27 09:10:59 crc kubenswrapper[4985]: I0127 09:10:59.830977 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=12.062568353 podStartE2EDuration="24.830744382s" podCreationTimestamp="2026-01-27 09:10:35 +0000 UTC" firstStartedPulling="2026-01-27 09:10:45.849898589 +0000 UTC m=+1030.140993430" lastFinishedPulling="2026-01-27 09:10:58.618074608 +0000 UTC m=+1042.909169459" observedRunningTime="2026-01-27 09:10:59.82301482 +0000 UTC m=+1044.114109681" watchObservedRunningTime="2026-01-27 09:10:59.830744382 +0000 UTC m=+1044.121839243" Jan 27 09:11:00 crc kubenswrapper[4985]: I0127 09:11:00.356025 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 27 09:11:00 crc kubenswrapper[4985]: I0127 09:11:00.796253 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-svzqc" event={"ID":"d99c676e-d269-4d8a-a67a-71d591467b7c","Type":"ContainerStarted","Data":"f6b2939f2240bc5fa4c229c1de71f67398a7bc094c65a5a8d9356c2adb79c216"} Jan 27 09:11:00 crc kubenswrapper[4985]: I0127 09:11:00.796991 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-95f5f6995-svzqc" Jan 27 09:11:00 crc kubenswrapper[4985]: I0127 09:11:00.804410 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/openstack-cell1-galera-0" event={"ID":"a0bcbfae-acfe-4ef3-8b04-18f21c728fd6","Type":"ContainerStarted","Data":"33d9408624e73e5da9c9a9409e434f4f122845208df0cc80c93d66d9f4411a99"} Jan 27 09:11:00 crc kubenswrapper[4985]: I0127 09:11:00.808839 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"ba28b990-460d-4a2c-b9b5-73f24d9b3f9e","Type":"ContainerStarted","Data":"d32fd9386706b474ab7596ce993a127c7e3606b050052375bdc6247a7b692e26"} Jan 27 09:11:00 crc kubenswrapper[4985]: I0127 09:11:00.811704 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-744ffd65bc-tdwsb" event={"ID":"1aede4b4-2eaf-445c-9c8c-5cc8839d9609","Type":"ContainerStarted","Data":"aa3d868c129fd3da222aa929be4d2dc43dcd3fde819f49282147e9099babcc96"} Jan 27 09:11:00 crc kubenswrapper[4985]: I0127 09:11:00.811807 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-68j75"] Jan 27 09:11:00 crc kubenswrapper[4985]: I0127 09:11:00.815606 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-68j75" Jan 27 09:11:00 crc kubenswrapper[4985]: I0127 09:11:00.817605 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 27 09:11:00 crc kubenswrapper[4985]: I0127 09:11:00.857464 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-68j75"] Jan 27 09:11:00 crc kubenswrapper[4985]: I0127 09:11:00.861353 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-95f5f6995-svzqc" podStartSLOduration=3.839461585 podStartE2EDuration="34.861326845s" podCreationTimestamp="2026-01-27 09:10:26 +0000 UTC" firstStartedPulling="2026-01-27 09:10:27.598017778 +0000 UTC m=+1011.889112619" lastFinishedPulling="2026-01-27 09:10:58.619883048 +0000 UTC m=+1042.910977879" observedRunningTime="2026-01-27 09:11:00.851209318 +0000 UTC m=+1045.142304169" watchObservedRunningTime="2026-01-27 09:11:00.861326845 +0000 UTC m=+1045.152421686" Jan 27 09:11:00 crc kubenswrapper[4985]: I0127 09:11:00.905738 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=23.487067719 podStartE2EDuration="32.905719112s" podCreationTimestamp="2026-01-27 09:10:28 +0000 UTC" firstStartedPulling="2026-01-27 09:10:44.935796138 +0000 UTC m=+1029.226890969" lastFinishedPulling="2026-01-27 09:10:54.354447521 +0000 UTC m=+1038.645542362" observedRunningTime="2026-01-27 09:11:00.900771166 +0000 UTC m=+1045.191866007" watchObservedRunningTime="2026-01-27 09:11:00.905719112 +0000 UTC m=+1045.196813953" Jan 27 09:11:00 crc kubenswrapper[4985]: I0127 09:11:00.942814 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/6e9e3bc1-c1a4-417b-bec2-d4c8fc9c09fd-ovn-rundir\") pod \"ovn-controller-metrics-68j75\" (UID: 
\"6e9e3bc1-c1a4-417b-bec2-d4c8fc9c09fd\") " pod="openstack/ovn-controller-metrics-68j75" Jan 27 09:11:00 crc kubenswrapper[4985]: I0127 09:11:00.943146 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e9e3bc1-c1a4-417b-bec2-d4c8fc9c09fd-combined-ca-bundle\") pod \"ovn-controller-metrics-68j75\" (UID: \"6e9e3bc1-c1a4-417b-bec2-d4c8fc9c09fd\") " pod="openstack/ovn-controller-metrics-68j75" Jan 27 09:11:00 crc kubenswrapper[4985]: I0127 09:11:00.943180 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zvvl\" (UniqueName: \"kubernetes.io/projected/6e9e3bc1-c1a4-417b-bec2-d4c8fc9c09fd-kube-api-access-2zvvl\") pod \"ovn-controller-metrics-68j75\" (UID: \"6e9e3bc1-c1a4-417b-bec2-d4c8fc9c09fd\") " pod="openstack/ovn-controller-metrics-68j75" Jan 27 09:11:00 crc kubenswrapper[4985]: I0127 09:11:00.943264 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/6e9e3bc1-c1a4-417b-bec2-d4c8fc9c09fd-ovs-rundir\") pod \"ovn-controller-metrics-68j75\" (UID: \"6e9e3bc1-c1a4-417b-bec2-d4c8fc9c09fd\") " pod="openstack/ovn-controller-metrics-68j75" Jan 27 09:11:00 crc kubenswrapper[4985]: I0127 09:11:00.943312 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e9e3bc1-c1a4-417b-bec2-d4c8fc9c09fd-config\") pod \"ovn-controller-metrics-68j75\" (UID: \"6e9e3bc1-c1a4-417b-bec2-d4c8fc9c09fd\") " pod="openstack/ovn-controller-metrics-68j75" Jan 27 09:11:00 crc kubenswrapper[4985]: I0127 09:11:00.943610 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/6e9e3bc1-c1a4-417b-bec2-d4c8fc9c09fd-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-68j75\" (UID: \"6e9e3bc1-c1a4-417b-bec2-d4c8fc9c09fd\") " pod="openstack/ovn-controller-metrics-68j75" Jan 27 09:11:00 crc kubenswrapper[4985]: I0127 09:11:00.950653 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=23.322118295 podStartE2EDuration="31.950623193s" podCreationTimestamp="2026-01-27 09:10:29 +0000 UTC" firstStartedPulling="2026-01-27 09:10:45.641258212 +0000 UTC m=+1029.932353053" lastFinishedPulling="2026-01-27 09:10:54.26976311 +0000 UTC m=+1038.560857951" observedRunningTime="2026-01-27 09:11:00.939324613 +0000 UTC m=+1045.230419474" watchObservedRunningTime="2026-01-27 09:11:00.950623193 +0000 UTC m=+1045.241718034" Jan 27 09:11:00 crc kubenswrapper[4985]: I0127 09:11:00.963427 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 27 09:11:01 crc kubenswrapper[4985]: I0127 09:11:01.030706 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 27 09:11:01 crc kubenswrapper[4985]: I0127 09:11:01.032113 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-744ffd65bc-tdwsb" podStartSLOduration=3.734731595 podStartE2EDuration="35.032090355s" podCreationTimestamp="2026-01-27 09:10:26 +0000 UTC" firstStartedPulling="2026-01-27 09:10:27.326108246 +0000 UTC m=+1011.617203087" lastFinishedPulling="2026-01-27 09:10:58.623466996 +0000 UTC m=+1042.914561847" observedRunningTime="2026-01-27 09:11:00.987851533 +0000 UTC m=+1045.278946374" watchObservedRunningTime="2026-01-27 09:11:01.032090355 +0000 UTC m=+1045.323185196" Jan 27 09:11:01 crc kubenswrapper[4985]: I0127 09:11:01.032628 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-tdwsb"] Jan 27 09:11:01 crc 
kubenswrapper[4985]: I0127 09:11:01.048433 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e9e3bc1-c1a4-417b-bec2-d4c8fc9c09fd-combined-ca-bundle\") pod \"ovn-controller-metrics-68j75\" (UID: \"6e9e3bc1-c1a4-417b-bec2-d4c8fc9c09fd\") " pod="openstack/ovn-controller-metrics-68j75" Jan 27 09:11:01 crc kubenswrapper[4985]: I0127 09:11:01.048480 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zvvl\" (UniqueName: \"kubernetes.io/projected/6e9e3bc1-c1a4-417b-bec2-d4c8fc9c09fd-kube-api-access-2zvvl\") pod \"ovn-controller-metrics-68j75\" (UID: \"6e9e3bc1-c1a4-417b-bec2-d4c8fc9c09fd\") " pod="openstack/ovn-controller-metrics-68j75" Jan 27 09:11:01 crc kubenswrapper[4985]: I0127 09:11:01.048530 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/6e9e3bc1-c1a4-417b-bec2-d4c8fc9c09fd-ovs-rundir\") pod \"ovn-controller-metrics-68j75\" (UID: \"6e9e3bc1-c1a4-417b-bec2-d4c8fc9c09fd\") " pod="openstack/ovn-controller-metrics-68j75" Jan 27 09:11:01 crc kubenswrapper[4985]: I0127 09:11:01.048554 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e9e3bc1-c1a4-417b-bec2-d4c8fc9c09fd-config\") pod \"ovn-controller-metrics-68j75\" (UID: \"6e9e3bc1-c1a4-417b-bec2-d4c8fc9c09fd\") " pod="openstack/ovn-controller-metrics-68j75" Jan 27 09:11:01 crc kubenswrapper[4985]: I0127 09:11:01.048603 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e9e3bc1-c1a4-417b-bec2-d4c8fc9c09fd-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-68j75\" (UID: \"6e9e3bc1-c1a4-417b-bec2-d4c8fc9c09fd\") " pod="openstack/ovn-controller-metrics-68j75" Jan 27 09:11:01 crc kubenswrapper[4985]: I0127 09:11:01.048623 
4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/6e9e3bc1-c1a4-417b-bec2-d4c8fc9c09fd-ovn-rundir\") pod \"ovn-controller-metrics-68j75\" (UID: \"6e9e3bc1-c1a4-417b-bec2-d4c8fc9c09fd\") " pod="openstack/ovn-controller-metrics-68j75" Jan 27 09:11:01 crc kubenswrapper[4985]: I0127 09:11:01.048933 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/6e9e3bc1-c1a4-417b-bec2-d4c8fc9c09fd-ovn-rundir\") pod \"ovn-controller-metrics-68j75\" (UID: \"6e9e3bc1-c1a4-417b-bec2-d4c8fc9c09fd\") " pod="openstack/ovn-controller-metrics-68j75" Jan 27 09:11:01 crc kubenswrapper[4985]: I0127 09:11:01.049211 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/6e9e3bc1-c1a4-417b-bec2-d4c8fc9c09fd-ovs-rundir\") pod \"ovn-controller-metrics-68j75\" (UID: \"6e9e3bc1-c1a4-417b-bec2-d4c8fc9c09fd\") " pod="openstack/ovn-controller-metrics-68j75" Jan 27 09:11:01 crc kubenswrapper[4985]: I0127 09:11:01.050099 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e9e3bc1-c1a4-417b-bec2-d4c8fc9c09fd-config\") pod \"ovn-controller-metrics-68j75\" (UID: \"6e9e3bc1-c1a4-417b-bec2-d4c8fc9c09fd\") " pod="openstack/ovn-controller-metrics-68j75" Jan 27 09:11:01 crc kubenswrapper[4985]: I0127 09:11:01.059319 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e9e3bc1-c1a4-417b-bec2-d4c8fc9c09fd-combined-ca-bundle\") pod \"ovn-controller-metrics-68j75\" (UID: \"6e9e3bc1-c1a4-417b-bec2-d4c8fc9c09fd\") " pod="openstack/ovn-controller-metrics-68j75" Jan 27 09:11:01 crc kubenswrapper[4985]: I0127 09:11:01.070055 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/6e9e3bc1-c1a4-417b-bec2-d4c8fc9c09fd-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-68j75\" (UID: \"6e9e3bc1-c1a4-417b-bec2-d4c8fc9c09fd\") " pod="openstack/ovn-controller-metrics-68j75" Jan 27 09:11:01 crc kubenswrapper[4985]: I0127 09:11:01.073203 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7878659675-7rrjk"] Jan 27 09:11:01 crc kubenswrapper[4985]: I0127 09:11:01.075115 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7878659675-7rrjk" Jan 27 09:11:01 crc kubenswrapper[4985]: I0127 09:11:01.085429 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 27 09:11:01 crc kubenswrapper[4985]: I0127 09:11:01.095998 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7878659675-7rrjk"] Jan 27 09:11:01 crc kubenswrapper[4985]: I0127 09:11:01.099054 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zvvl\" (UniqueName: \"kubernetes.io/projected/6e9e3bc1-c1a4-417b-bec2-d4c8fc9c09fd-kube-api-access-2zvvl\") pod \"ovn-controller-metrics-68j75\" (UID: \"6e9e3bc1-c1a4-417b-bec2-d4c8fc9c09fd\") " pod="openstack/ovn-controller-metrics-68j75" Jan 27 09:11:01 crc kubenswrapper[4985]: I0127 09:11:01.146914 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-68j75" Jan 27 09:11:01 crc kubenswrapper[4985]: I0127 09:11:01.150482 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zh457\" (UniqueName: \"kubernetes.io/projected/cf1cd7b0-ecf5-445e-8961-7e92b4130a28-kube-api-access-zh457\") pod \"dnsmasq-dns-7878659675-7rrjk\" (UID: \"cf1cd7b0-ecf5-445e-8961-7e92b4130a28\") " pod="openstack/dnsmasq-dns-7878659675-7rrjk" Jan 27 09:11:01 crc kubenswrapper[4985]: I0127 09:11:01.150642 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf1cd7b0-ecf5-445e-8961-7e92b4130a28-ovsdbserver-nb\") pod \"dnsmasq-dns-7878659675-7rrjk\" (UID: \"cf1cd7b0-ecf5-445e-8961-7e92b4130a28\") " pod="openstack/dnsmasq-dns-7878659675-7rrjk" Jan 27 09:11:01 crc kubenswrapper[4985]: I0127 09:11:01.150680 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf1cd7b0-ecf5-445e-8961-7e92b4130a28-config\") pod \"dnsmasq-dns-7878659675-7rrjk\" (UID: \"cf1cd7b0-ecf5-445e-8961-7e92b4130a28\") " pod="openstack/dnsmasq-dns-7878659675-7rrjk" Jan 27 09:11:01 crc kubenswrapper[4985]: I0127 09:11:01.150759 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf1cd7b0-ecf5-445e-8961-7e92b4130a28-dns-svc\") pod \"dnsmasq-dns-7878659675-7rrjk\" (UID: \"cf1cd7b0-ecf5-445e-8961-7e92b4130a28\") " pod="openstack/dnsmasq-dns-7878659675-7rrjk" Jan 27 09:11:01 crc kubenswrapper[4985]: I0127 09:11:01.167358 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 27 09:11:01 crc kubenswrapper[4985]: I0127 09:11:01.167579 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/openstack-cell1-galera-0" Jan 27 09:11:01 crc kubenswrapper[4985]: I0127 09:11:01.252637 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf1cd7b0-ecf5-445e-8961-7e92b4130a28-dns-svc\") pod \"dnsmasq-dns-7878659675-7rrjk\" (UID: \"cf1cd7b0-ecf5-445e-8961-7e92b4130a28\") " pod="openstack/dnsmasq-dns-7878659675-7rrjk" Jan 27 09:11:01 crc kubenswrapper[4985]: I0127 09:11:01.252687 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zh457\" (UniqueName: \"kubernetes.io/projected/cf1cd7b0-ecf5-445e-8961-7e92b4130a28-kube-api-access-zh457\") pod \"dnsmasq-dns-7878659675-7rrjk\" (UID: \"cf1cd7b0-ecf5-445e-8961-7e92b4130a28\") " pod="openstack/dnsmasq-dns-7878659675-7rrjk" Jan 27 09:11:01 crc kubenswrapper[4985]: I0127 09:11:01.252771 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf1cd7b0-ecf5-445e-8961-7e92b4130a28-ovsdbserver-nb\") pod \"dnsmasq-dns-7878659675-7rrjk\" (UID: \"cf1cd7b0-ecf5-445e-8961-7e92b4130a28\") " pod="openstack/dnsmasq-dns-7878659675-7rrjk" Jan 27 09:11:01 crc kubenswrapper[4985]: I0127 09:11:01.252798 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf1cd7b0-ecf5-445e-8961-7e92b4130a28-config\") pod \"dnsmasq-dns-7878659675-7rrjk\" (UID: \"cf1cd7b0-ecf5-445e-8961-7e92b4130a28\") " pod="openstack/dnsmasq-dns-7878659675-7rrjk" Jan 27 09:11:01 crc kubenswrapper[4985]: I0127 09:11:01.253680 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf1cd7b0-ecf5-445e-8961-7e92b4130a28-config\") pod \"dnsmasq-dns-7878659675-7rrjk\" (UID: \"cf1cd7b0-ecf5-445e-8961-7e92b4130a28\") " pod="openstack/dnsmasq-dns-7878659675-7rrjk" Jan 27 09:11:01 crc kubenswrapper[4985]: I0127 
09:11:01.254201 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf1cd7b0-ecf5-445e-8961-7e92b4130a28-dns-svc\") pod \"dnsmasq-dns-7878659675-7rrjk\" (UID: \"cf1cd7b0-ecf5-445e-8961-7e92b4130a28\") " pod="openstack/dnsmasq-dns-7878659675-7rrjk" Jan 27 09:11:01 crc kubenswrapper[4985]: I0127 09:11:01.255031 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf1cd7b0-ecf5-445e-8961-7e92b4130a28-ovsdbserver-nb\") pod \"dnsmasq-dns-7878659675-7rrjk\" (UID: \"cf1cd7b0-ecf5-445e-8961-7e92b4130a28\") " pod="openstack/dnsmasq-dns-7878659675-7rrjk" Jan 27 09:11:01 crc kubenswrapper[4985]: I0127 09:11:01.256661 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 27 09:11:01 crc kubenswrapper[4985]: I0127 09:11:01.278587 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zh457\" (UniqueName: \"kubernetes.io/projected/cf1cd7b0-ecf5-445e-8961-7e92b4130a28-kube-api-access-zh457\") pod \"dnsmasq-dns-7878659675-7rrjk\" (UID: \"cf1cd7b0-ecf5-445e-8961-7e92b4130a28\") " pod="openstack/dnsmasq-dns-7878659675-7rrjk" Jan 27 09:11:01 crc kubenswrapper[4985]: I0127 09:11:01.357639 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 27 09:11:01 crc kubenswrapper[4985]: I0127 09:11:01.360470 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-svzqc"] Jan 27 09:11:01 crc kubenswrapper[4985]: I0127 09:11:01.393010 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-586b989cdc-v2s76"] Jan 27 09:11:01 crc kubenswrapper[4985]: I0127 09:11:01.394397 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-586b989cdc-v2s76" Jan 27 09:11:01 crc kubenswrapper[4985]: I0127 09:11:01.405628 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 27 09:11:01 crc kubenswrapper[4985]: I0127 09:11:01.407576 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-586b989cdc-v2s76"] Jan 27 09:11:01 crc kubenswrapper[4985]: I0127 09:11:01.469772 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/617061c6-cd75-4259-979e-60b51d0de147-dns-svc\") pod \"dnsmasq-dns-586b989cdc-v2s76\" (UID: \"617061c6-cd75-4259-979e-60b51d0de147\") " pod="openstack/dnsmasq-dns-586b989cdc-v2s76" Jan 27 09:11:01 crc kubenswrapper[4985]: I0127 09:11:01.470125 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/617061c6-cd75-4259-979e-60b51d0de147-ovsdbserver-sb\") pod \"dnsmasq-dns-586b989cdc-v2s76\" (UID: \"617061c6-cd75-4259-979e-60b51d0de147\") " pod="openstack/dnsmasq-dns-586b989cdc-v2s76" Jan 27 09:11:01 crc kubenswrapper[4985]: I0127 09:11:01.470284 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/617061c6-cd75-4259-979e-60b51d0de147-ovsdbserver-nb\") pod \"dnsmasq-dns-586b989cdc-v2s76\" (UID: \"617061c6-cd75-4259-979e-60b51d0de147\") " pod="openstack/dnsmasq-dns-586b989cdc-v2s76" Jan 27 09:11:01 crc kubenswrapper[4985]: I0127 09:11:01.470448 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/617061c6-cd75-4259-979e-60b51d0de147-config\") pod \"dnsmasq-dns-586b989cdc-v2s76\" (UID: \"617061c6-cd75-4259-979e-60b51d0de147\") " pod="openstack/dnsmasq-dns-586b989cdc-v2s76" 
Jan 27 09:11:01 crc kubenswrapper[4985]: I0127 09:11:01.470774 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwst8\" (UniqueName: \"kubernetes.io/projected/617061c6-cd75-4259-979e-60b51d0de147-kube-api-access-bwst8\") pod \"dnsmasq-dns-586b989cdc-v2s76\" (UID: \"617061c6-cd75-4259-979e-60b51d0de147\") " pod="openstack/dnsmasq-dns-586b989cdc-v2s76" Jan 27 09:11:01 crc kubenswrapper[4985]: I0127 09:11:01.478593 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 27 09:11:01 crc kubenswrapper[4985]: I0127 09:11:01.538218 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7878659675-7rrjk" Jan 27 09:11:01 crc kubenswrapper[4985]: I0127 09:11:01.574015 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/617061c6-cd75-4259-979e-60b51d0de147-dns-svc\") pod \"dnsmasq-dns-586b989cdc-v2s76\" (UID: \"617061c6-cd75-4259-979e-60b51d0de147\") " pod="openstack/dnsmasq-dns-586b989cdc-v2s76" Jan 27 09:11:01 crc kubenswrapper[4985]: I0127 09:11:01.574126 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/617061c6-cd75-4259-979e-60b51d0de147-ovsdbserver-sb\") pod \"dnsmasq-dns-586b989cdc-v2s76\" (UID: \"617061c6-cd75-4259-979e-60b51d0de147\") " pod="openstack/dnsmasq-dns-586b989cdc-v2s76" Jan 27 09:11:01 crc kubenswrapper[4985]: I0127 09:11:01.574163 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/617061c6-cd75-4259-979e-60b51d0de147-ovsdbserver-nb\") pod \"dnsmasq-dns-586b989cdc-v2s76\" (UID: \"617061c6-cd75-4259-979e-60b51d0de147\") " pod="openstack/dnsmasq-dns-586b989cdc-v2s76" Jan 27 09:11:01 crc kubenswrapper[4985]: I0127 09:11:01.575703 4985 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/617061c6-cd75-4259-979e-60b51d0de147-config\") pod \"dnsmasq-dns-586b989cdc-v2s76\" (UID: \"617061c6-cd75-4259-979e-60b51d0de147\") " pod="openstack/dnsmasq-dns-586b989cdc-v2s76" Jan 27 09:11:01 crc kubenswrapper[4985]: I0127 09:11:01.575755 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwst8\" (UniqueName: \"kubernetes.io/projected/617061c6-cd75-4259-979e-60b51d0de147-kube-api-access-bwst8\") pod \"dnsmasq-dns-586b989cdc-v2s76\" (UID: \"617061c6-cd75-4259-979e-60b51d0de147\") " pod="openstack/dnsmasq-dns-586b989cdc-v2s76" Jan 27 09:11:01 crc kubenswrapper[4985]: I0127 09:11:01.575771 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/617061c6-cd75-4259-979e-60b51d0de147-ovsdbserver-nb\") pod \"dnsmasq-dns-586b989cdc-v2s76\" (UID: \"617061c6-cd75-4259-979e-60b51d0de147\") " pod="openstack/dnsmasq-dns-586b989cdc-v2s76" Jan 27 09:11:01 crc kubenswrapper[4985]: I0127 09:11:01.575723 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/617061c6-cd75-4259-979e-60b51d0de147-ovsdbserver-sb\") pod \"dnsmasq-dns-586b989cdc-v2s76\" (UID: \"617061c6-cd75-4259-979e-60b51d0de147\") " pod="openstack/dnsmasq-dns-586b989cdc-v2s76" Jan 27 09:11:01 crc kubenswrapper[4985]: I0127 09:11:01.576342 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/617061c6-cd75-4259-979e-60b51d0de147-config\") pod \"dnsmasq-dns-586b989cdc-v2s76\" (UID: \"617061c6-cd75-4259-979e-60b51d0de147\") " pod="openstack/dnsmasq-dns-586b989cdc-v2s76" Jan 27 09:11:01 crc kubenswrapper[4985]: I0127 09:11:01.576754 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/617061c6-cd75-4259-979e-60b51d0de147-dns-svc\") pod \"dnsmasq-dns-586b989cdc-v2s76\" (UID: \"617061c6-cd75-4259-979e-60b51d0de147\") " pod="openstack/dnsmasq-dns-586b989cdc-v2s76" Jan 27 09:11:01 crc kubenswrapper[4985]: I0127 09:11:01.595465 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwst8\" (UniqueName: \"kubernetes.io/projected/617061c6-cd75-4259-979e-60b51d0de147-kube-api-access-bwst8\") pod \"dnsmasq-dns-586b989cdc-v2s76\" (UID: \"617061c6-cd75-4259-979e-60b51d0de147\") " pod="openstack/dnsmasq-dns-586b989cdc-v2s76" Jan 27 09:11:01 crc kubenswrapper[4985]: I0127 09:11:01.713105 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-744ffd65bc-tdwsb" Jan 27 09:11:01 crc kubenswrapper[4985]: I0127 09:11:01.726815 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-586b989cdc-v2s76" Jan 27 09:11:01 crc kubenswrapper[4985]: I0127 09:11:01.827931 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-744ffd65bc-tdwsb" podUID="1aede4b4-2eaf-445c-9c8c-5cc8839d9609" containerName="dnsmasq-dns" containerID="cri-o://aa3d868c129fd3da222aa929be4d2dc43dcd3fde819f49282147e9099babcc96" gracePeriod=10 Jan 27 09:11:01 crc kubenswrapper[4985]: I0127 09:11:01.828837 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 27 09:11:01 crc kubenswrapper[4985]: I0127 09:11:01.865557 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-68j75"] Jan 27 09:11:01 crc kubenswrapper[4985]: I0127 09:11:01.893363 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 27 09:11:01 crc kubenswrapper[4985]: I0127 09:11:01.904051 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 27 
09:11:02 crc kubenswrapper[4985]: I0127 09:11:02.008451 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7878659675-7rrjk"] Jan 27 09:11:02 crc kubenswrapper[4985]: I0127 09:11:02.214701 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-586b989cdc-v2s76"] Jan 27 09:11:02 crc kubenswrapper[4985]: I0127 09:11:02.265901 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 27 09:11:02 crc kubenswrapper[4985]: I0127 09:11:02.267589 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 27 09:11:02 crc kubenswrapper[4985]: I0127 09:11:02.281175 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 27 09:11:02 crc kubenswrapper[4985]: I0127 09:11:02.281481 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 27 09:11:02 crc kubenswrapper[4985]: I0127 09:11:02.282305 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-9hvkn" Jan 27 09:11:02 crc kubenswrapper[4985]: I0127 09:11:02.283967 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 27 09:11:02 crc kubenswrapper[4985]: I0127 09:11:02.296251 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/49843c3e-2aeb-43e9-8041-6f212a7fcc7c-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"49843c3e-2aeb-43e9-8041-6f212a7fcc7c\") " pod="openstack/ovn-northd-0" Jan 27 09:11:02 crc kubenswrapper[4985]: I0127 09:11:02.296312 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/49843c3e-2aeb-43e9-8041-6f212a7fcc7c-scripts\") pod \"ovn-northd-0\" (UID: 
\"49843c3e-2aeb-43e9-8041-6f212a7fcc7c\") " pod="openstack/ovn-northd-0" Jan 27 09:11:02 crc kubenswrapper[4985]: I0127 09:11:02.297081 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49843c3e-2aeb-43e9-8041-6f212a7fcc7c-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"49843c3e-2aeb-43e9-8041-6f212a7fcc7c\") " pod="openstack/ovn-northd-0" Jan 27 09:11:02 crc kubenswrapper[4985]: I0127 09:11:02.297111 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/49843c3e-2aeb-43e9-8041-6f212a7fcc7c-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"49843c3e-2aeb-43e9-8041-6f212a7fcc7c\") " pod="openstack/ovn-northd-0" Jan 27 09:11:02 crc kubenswrapper[4985]: I0127 09:11:02.297131 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnh5r\" (UniqueName: \"kubernetes.io/projected/49843c3e-2aeb-43e9-8041-6f212a7fcc7c-kube-api-access-cnh5r\") pod \"ovn-northd-0\" (UID: \"49843c3e-2aeb-43e9-8041-6f212a7fcc7c\") " pod="openstack/ovn-northd-0" Jan 27 09:11:02 crc kubenswrapper[4985]: I0127 09:11:02.297170 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49843c3e-2aeb-43e9-8041-6f212a7fcc7c-config\") pod \"ovn-northd-0\" (UID: \"49843c3e-2aeb-43e9-8041-6f212a7fcc7c\") " pod="openstack/ovn-northd-0" Jan 27 09:11:02 crc kubenswrapper[4985]: I0127 09:11:02.297452 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/49843c3e-2aeb-43e9-8041-6f212a7fcc7c-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"49843c3e-2aeb-43e9-8041-6f212a7fcc7c\") " pod="openstack/ovn-northd-0" Jan 27 09:11:02 crc kubenswrapper[4985]: 
I0127 09:11:02.311001 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 27 09:11:02 crc kubenswrapper[4985]: I0127 09:11:02.379111 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-744ffd65bc-tdwsb" Jan 27 09:11:02 crc kubenswrapper[4985]: I0127 09:11:02.399140 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9flj\" (UniqueName: \"kubernetes.io/projected/1aede4b4-2eaf-445c-9c8c-5cc8839d9609-kube-api-access-f9flj\") pod \"1aede4b4-2eaf-445c-9c8c-5cc8839d9609\" (UID: \"1aede4b4-2eaf-445c-9c8c-5cc8839d9609\") " Jan 27 09:11:02 crc kubenswrapper[4985]: I0127 09:11:02.399242 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1aede4b4-2eaf-445c-9c8c-5cc8839d9609-dns-svc\") pod \"1aede4b4-2eaf-445c-9c8c-5cc8839d9609\" (UID: \"1aede4b4-2eaf-445c-9c8c-5cc8839d9609\") " Jan 27 09:11:02 crc kubenswrapper[4985]: I0127 09:11:02.399268 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1aede4b4-2eaf-445c-9c8c-5cc8839d9609-config\") pod \"1aede4b4-2eaf-445c-9c8c-5cc8839d9609\" (UID: \"1aede4b4-2eaf-445c-9c8c-5cc8839d9609\") " Jan 27 09:11:02 crc kubenswrapper[4985]: I0127 09:11:02.400744 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/49843c3e-2aeb-43e9-8041-6f212a7fcc7c-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"49843c3e-2aeb-43e9-8041-6f212a7fcc7c\") " pod="openstack/ovn-northd-0" Jan 27 09:11:02 crc kubenswrapper[4985]: I0127 09:11:02.400792 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/49843c3e-2aeb-43e9-8041-6f212a7fcc7c-scripts\") pod \"ovn-northd-0\" (UID: 
\"49843c3e-2aeb-43e9-8041-6f212a7fcc7c\") " pod="openstack/ovn-northd-0" Jan 27 09:11:02 crc kubenswrapper[4985]: I0127 09:11:02.400830 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49843c3e-2aeb-43e9-8041-6f212a7fcc7c-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"49843c3e-2aeb-43e9-8041-6f212a7fcc7c\") " pod="openstack/ovn-northd-0" Jan 27 09:11:02 crc kubenswrapper[4985]: I0127 09:11:02.400853 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/49843c3e-2aeb-43e9-8041-6f212a7fcc7c-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"49843c3e-2aeb-43e9-8041-6f212a7fcc7c\") " pod="openstack/ovn-northd-0" Jan 27 09:11:02 crc kubenswrapper[4985]: I0127 09:11:02.400867 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnh5r\" (UniqueName: \"kubernetes.io/projected/49843c3e-2aeb-43e9-8041-6f212a7fcc7c-kube-api-access-cnh5r\") pod \"ovn-northd-0\" (UID: \"49843c3e-2aeb-43e9-8041-6f212a7fcc7c\") " pod="openstack/ovn-northd-0" Jan 27 09:11:02 crc kubenswrapper[4985]: I0127 09:11:02.400910 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49843c3e-2aeb-43e9-8041-6f212a7fcc7c-config\") pod \"ovn-northd-0\" (UID: \"49843c3e-2aeb-43e9-8041-6f212a7fcc7c\") " pod="openstack/ovn-northd-0" Jan 27 09:11:02 crc kubenswrapper[4985]: I0127 09:11:02.400956 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/49843c3e-2aeb-43e9-8041-6f212a7fcc7c-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"49843c3e-2aeb-43e9-8041-6f212a7fcc7c\") " pod="openstack/ovn-northd-0" Jan 27 09:11:02 crc kubenswrapper[4985]: I0127 09:11:02.406691 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/49843c3e-2aeb-43e9-8041-6f212a7fcc7c-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"49843c3e-2aeb-43e9-8041-6f212a7fcc7c\") " pod="openstack/ovn-northd-0" Jan 27 09:11:02 crc kubenswrapper[4985]: I0127 09:11:02.407385 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/49843c3e-2aeb-43e9-8041-6f212a7fcc7c-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"49843c3e-2aeb-43e9-8041-6f212a7fcc7c\") " pod="openstack/ovn-northd-0" Jan 27 09:11:02 crc kubenswrapper[4985]: I0127 09:11:02.410402 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49843c3e-2aeb-43e9-8041-6f212a7fcc7c-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"49843c3e-2aeb-43e9-8041-6f212a7fcc7c\") " pod="openstack/ovn-northd-0" Jan 27 09:11:02 crc kubenswrapper[4985]: I0127 09:11:02.415058 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49843c3e-2aeb-43e9-8041-6f212a7fcc7c-config\") pod \"ovn-northd-0\" (UID: \"49843c3e-2aeb-43e9-8041-6f212a7fcc7c\") " pod="openstack/ovn-northd-0" Jan 27 09:11:02 crc kubenswrapper[4985]: I0127 09:11:02.415919 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/49843c3e-2aeb-43e9-8041-6f212a7fcc7c-scripts\") pod \"ovn-northd-0\" (UID: \"49843c3e-2aeb-43e9-8041-6f212a7fcc7c\") " pod="openstack/ovn-northd-0" Jan 27 09:11:02 crc kubenswrapper[4985]: I0127 09:11:02.416784 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/49843c3e-2aeb-43e9-8041-6f212a7fcc7c-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"49843c3e-2aeb-43e9-8041-6f212a7fcc7c\") " pod="openstack/ovn-northd-0" Jan 27 09:11:02 crc kubenswrapper[4985]: I0127 
09:11:02.429236 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnh5r\" (UniqueName: \"kubernetes.io/projected/49843c3e-2aeb-43e9-8041-6f212a7fcc7c-kube-api-access-cnh5r\") pod \"ovn-northd-0\" (UID: \"49843c3e-2aeb-43e9-8041-6f212a7fcc7c\") " pod="openstack/ovn-northd-0" Jan 27 09:11:02 crc kubenswrapper[4985]: I0127 09:11:02.437730 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1aede4b4-2eaf-445c-9c8c-5cc8839d9609-kube-api-access-f9flj" (OuterVolumeSpecName: "kube-api-access-f9flj") pod "1aede4b4-2eaf-445c-9c8c-5cc8839d9609" (UID: "1aede4b4-2eaf-445c-9c8c-5cc8839d9609"). InnerVolumeSpecName "kube-api-access-f9flj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:11:02 crc kubenswrapper[4985]: I0127 09:11:02.446131 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1aede4b4-2eaf-445c-9c8c-5cc8839d9609-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1aede4b4-2eaf-445c-9c8c-5cc8839d9609" (UID: "1aede4b4-2eaf-445c-9c8c-5cc8839d9609"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:11:02 crc kubenswrapper[4985]: I0127 09:11:02.452315 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1aede4b4-2eaf-445c-9c8c-5cc8839d9609-config" (OuterVolumeSpecName: "config") pod "1aede4b4-2eaf-445c-9c8c-5cc8839d9609" (UID: "1aede4b4-2eaf-445c-9c8c-5cc8839d9609"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:11:02 crc kubenswrapper[4985]: I0127 09:11:02.502733 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9flj\" (UniqueName: \"kubernetes.io/projected/1aede4b4-2eaf-445c-9c8c-5cc8839d9609-kube-api-access-f9flj\") on node \"crc\" DevicePath \"\"" Jan 27 09:11:02 crc kubenswrapper[4985]: I0127 09:11:02.502937 4985 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1aede4b4-2eaf-445c-9c8c-5cc8839d9609-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 09:11:02 crc kubenswrapper[4985]: I0127 09:11:02.503012 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1aede4b4-2eaf-445c-9c8c-5cc8839d9609-config\") on node \"crc\" DevicePath \"\"" Jan 27 09:11:02 crc kubenswrapper[4985]: I0127 09:11:02.666431 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 27 09:11:02 crc kubenswrapper[4985]: I0127 09:11:02.848065 4985 generic.go:334] "Generic (PLEG): container finished" podID="617061c6-cd75-4259-979e-60b51d0de147" containerID="0dc21ddec21af989064d8a3eded6bc55fcbd01ca13d901a4914c88f99f5b03eb" exitCode=0 Jan 27 09:11:02 crc kubenswrapper[4985]: I0127 09:11:02.848252 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586b989cdc-v2s76" event={"ID":"617061c6-cd75-4259-979e-60b51d0de147","Type":"ContainerDied","Data":"0dc21ddec21af989064d8a3eded6bc55fcbd01ca13d901a4914c88f99f5b03eb"} Jan 27 09:11:02 crc kubenswrapper[4985]: I0127 09:11:02.848693 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586b989cdc-v2s76" event={"ID":"617061c6-cd75-4259-979e-60b51d0de147","Type":"ContainerStarted","Data":"d1f86612d641f3867d569958ad0dc2993328734d4cd9b1a12f176270f9d254c3"} Jan 27 09:11:02 crc kubenswrapper[4985]: I0127 09:11:02.852714 4985 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/ovn-controller-metrics-68j75" event={"ID":"6e9e3bc1-c1a4-417b-bec2-d4c8fc9c09fd","Type":"ContainerStarted","Data":"a07ecca3b853f33b050430cc45cfdd822bdb911af69be22cc5973c1def6ef01f"} Jan 27 09:11:02 crc kubenswrapper[4985]: I0127 09:11:02.852762 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-68j75" event={"ID":"6e9e3bc1-c1a4-417b-bec2-d4c8fc9c09fd","Type":"ContainerStarted","Data":"9869bf4ddeead72dc2526d14c46afbef877b0d1df23b9058592b49f3ebd0f0e1"} Jan 27 09:11:02 crc kubenswrapper[4985]: I0127 09:11:02.860688 4985 generic.go:334] "Generic (PLEG): container finished" podID="cf1cd7b0-ecf5-445e-8961-7e92b4130a28" containerID="3a5d681a60ff376ad138e69e7b1ce49aa8bcc483bdd173c277e7559cfa08ace1" exitCode=0 Jan 27 09:11:02 crc kubenswrapper[4985]: I0127 09:11:02.860755 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7878659675-7rrjk" event={"ID":"cf1cd7b0-ecf5-445e-8961-7e92b4130a28","Type":"ContainerDied","Data":"3a5d681a60ff376ad138e69e7b1ce49aa8bcc483bdd173c277e7559cfa08ace1"} Jan 27 09:11:02 crc kubenswrapper[4985]: I0127 09:11:02.860780 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7878659675-7rrjk" event={"ID":"cf1cd7b0-ecf5-445e-8961-7e92b4130a28","Type":"ContainerStarted","Data":"9be95473755512e1f047c8b3b8aa0c26cd9212f53683cfe961673547c7041f32"} Jan 27 09:11:02 crc kubenswrapper[4985]: I0127 09:11:02.863066 4985 generic.go:334] "Generic (PLEG): container finished" podID="1aede4b4-2eaf-445c-9c8c-5cc8839d9609" containerID="aa3d868c129fd3da222aa929be4d2dc43dcd3fde819f49282147e9099babcc96" exitCode=0 Jan 27 09:11:02 crc kubenswrapper[4985]: I0127 09:11:02.863610 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-744ffd65bc-tdwsb" Jan 27 09:11:02 crc kubenswrapper[4985]: I0127 09:11:02.864024 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-744ffd65bc-tdwsb" event={"ID":"1aede4b4-2eaf-445c-9c8c-5cc8839d9609","Type":"ContainerDied","Data":"aa3d868c129fd3da222aa929be4d2dc43dcd3fde819f49282147e9099babcc96"} Jan 27 09:11:02 crc kubenswrapper[4985]: I0127 09:11:02.864044 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-744ffd65bc-tdwsb" event={"ID":"1aede4b4-2eaf-445c-9c8c-5cc8839d9609","Type":"ContainerDied","Data":"968fcaae34afdc0895ab7d8eada7897e858cac8dacb7008c2f5c0470f6723fa3"} Jan 27 09:11:02 crc kubenswrapper[4985]: I0127 09:11:02.864061 4985 scope.go:117] "RemoveContainer" containerID="aa3d868c129fd3da222aa929be4d2dc43dcd3fde819f49282147e9099babcc96" Jan 27 09:11:02 crc kubenswrapper[4985]: I0127 09:11:02.865219 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-95f5f6995-svzqc" podUID="d99c676e-d269-4d8a-a67a-71d591467b7c" containerName="dnsmasq-dns" containerID="cri-o://f6b2939f2240bc5fa4c229c1de71f67398a7bc094c65a5a8d9356c2adb79c216" gracePeriod=10 Jan 27 09:11:02 crc kubenswrapper[4985]: I0127 09:11:02.930921 4985 scope.go:117] "RemoveContainer" containerID="2a06cd6a61bf01225cd70dfb61cec9ece4a8bf5fc27f003cca533eccd878ec8c" Jan 27 09:11:02 crc kubenswrapper[4985]: I0127 09:11:02.945279 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-68j75" podStartSLOduration=2.945258157 podStartE2EDuration="2.945258157s" podCreationTimestamp="2026-01-27 09:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:11:02.888769019 +0000 UTC m=+1047.179863870" watchObservedRunningTime="2026-01-27 09:11:02.945258157 +0000 UTC m=+1047.236352998" Jan 27 09:11:02 crc 
kubenswrapper[4985]: I0127 09:11:02.962587 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-tdwsb"] Jan 27 09:11:02 crc kubenswrapper[4985]: I0127 09:11:02.969431 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-tdwsb"] Jan 27 09:11:03 crc kubenswrapper[4985]: I0127 09:11:03.013534 4985 scope.go:117] "RemoveContainer" containerID="aa3d868c129fd3da222aa929be4d2dc43dcd3fde819f49282147e9099babcc96" Jan 27 09:11:03 crc kubenswrapper[4985]: E0127 09:11:03.024687 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa3d868c129fd3da222aa929be4d2dc43dcd3fde819f49282147e9099babcc96\": container with ID starting with aa3d868c129fd3da222aa929be4d2dc43dcd3fde819f49282147e9099babcc96 not found: ID does not exist" containerID="aa3d868c129fd3da222aa929be4d2dc43dcd3fde819f49282147e9099babcc96" Jan 27 09:11:03 crc kubenswrapper[4985]: I0127 09:11:03.024747 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa3d868c129fd3da222aa929be4d2dc43dcd3fde819f49282147e9099babcc96"} err="failed to get container status \"aa3d868c129fd3da222aa929be4d2dc43dcd3fde819f49282147e9099babcc96\": rpc error: code = NotFound desc = could not find container \"aa3d868c129fd3da222aa929be4d2dc43dcd3fde819f49282147e9099babcc96\": container with ID starting with aa3d868c129fd3da222aa929be4d2dc43dcd3fde819f49282147e9099babcc96 not found: ID does not exist" Jan 27 09:11:03 crc kubenswrapper[4985]: I0127 09:11:03.024778 4985 scope.go:117] "RemoveContainer" containerID="2a06cd6a61bf01225cd70dfb61cec9ece4a8bf5fc27f003cca533eccd878ec8c" Jan 27 09:11:03 crc kubenswrapper[4985]: E0127 09:11:03.025174 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a06cd6a61bf01225cd70dfb61cec9ece4a8bf5fc27f003cca533eccd878ec8c\": container with 
ID starting with 2a06cd6a61bf01225cd70dfb61cec9ece4a8bf5fc27f003cca533eccd878ec8c not found: ID does not exist" containerID="2a06cd6a61bf01225cd70dfb61cec9ece4a8bf5fc27f003cca533eccd878ec8c" Jan 27 09:11:03 crc kubenswrapper[4985]: I0127 09:11:03.025212 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a06cd6a61bf01225cd70dfb61cec9ece4a8bf5fc27f003cca533eccd878ec8c"} err="failed to get container status \"2a06cd6a61bf01225cd70dfb61cec9ece4a8bf5fc27f003cca533eccd878ec8c\": rpc error: code = NotFound desc = could not find container \"2a06cd6a61bf01225cd70dfb61cec9ece4a8bf5fc27f003cca533eccd878ec8c\": container with ID starting with 2a06cd6a61bf01225cd70dfb61cec9ece4a8bf5fc27f003cca533eccd878ec8c not found: ID does not exist" Jan 27 09:11:03 crc kubenswrapper[4985]: I0127 09:11:03.069109 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7878659675-7rrjk"] Jan 27 09:11:03 crc kubenswrapper[4985]: I0127 09:11:03.125050 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-67fdf7998c-l6c45"] Jan 27 09:11:03 crc kubenswrapper[4985]: E0127 09:11:03.155822 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1aede4b4-2eaf-445c-9c8c-5cc8839d9609" containerName="dnsmasq-dns" Jan 27 09:11:03 crc kubenswrapper[4985]: I0127 09:11:03.156265 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="1aede4b4-2eaf-445c-9c8c-5cc8839d9609" containerName="dnsmasq-dns" Jan 27 09:11:03 crc kubenswrapper[4985]: E0127 09:11:03.156433 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1aede4b4-2eaf-445c-9c8c-5cc8839d9609" containerName="init" Jan 27 09:11:03 crc kubenswrapper[4985]: I0127 09:11:03.156812 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="1aede4b4-2eaf-445c-9c8c-5cc8839d9609" containerName="init" Jan 27 09:11:03 crc kubenswrapper[4985]: I0127 09:11:03.157198 4985 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="1aede4b4-2eaf-445c-9c8c-5cc8839d9609" containerName="dnsmasq-dns" Jan 27 09:11:03 crc kubenswrapper[4985]: I0127 09:11:03.163163 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67fdf7998c-l6c45"] Jan 27 09:11:03 crc kubenswrapper[4985]: I0127 09:11:03.163376 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67fdf7998c-l6c45" Jan 27 09:11:03 crc kubenswrapper[4985]: I0127 09:11:03.230407 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dd86fe7b-977d-481c-bf72-c651287d4ca9-ovsdbserver-nb\") pod \"dnsmasq-dns-67fdf7998c-l6c45\" (UID: \"dd86fe7b-977d-481c-bf72-c651287d4ca9\") " pod="openstack/dnsmasq-dns-67fdf7998c-l6c45" Jan 27 09:11:03 crc kubenswrapper[4985]: I0127 09:11:03.230447 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fp9hw\" (UniqueName: \"kubernetes.io/projected/dd86fe7b-977d-481c-bf72-c651287d4ca9-kube-api-access-fp9hw\") pod \"dnsmasq-dns-67fdf7998c-l6c45\" (UID: \"dd86fe7b-977d-481c-bf72-c651287d4ca9\") " pod="openstack/dnsmasq-dns-67fdf7998c-l6c45" Jan 27 09:11:03 crc kubenswrapper[4985]: I0127 09:11:03.230469 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dd86fe7b-977d-481c-bf72-c651287d4ca9-ovsdbserver-sb\") pod \"dnsmasq-dns-67fdf7998c-l6c45\" (UID: \"dd86fe7b-977d-481c-bf72-c651287d4ca9\") " pod="openstack/dnsmasq-dns-67fdf7998c-l6c45" Jan 27 09:11:03 crc kubenswrapper[4985]: I0127 09:11:03.230547 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd86fe7b-977d-481c-bf72-c651287d4ca9-config\") pod \"dnsmasq-dns-67fdf7998c-l6c45\" (UID: 
\"dd86fe7b-977d-481c-bf72-c651287d4ca9\") " pod="openstack/dnsmasq-dns-67fdf7998c-l6c45" Jan 27 09:11:03 crc kubenswrapper[4985]: I0127 09:11:03.230600 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dd86fe7b-977d-481c-bf72-c651287d4ca9-dns-svc\") pod \"dnsmasq-dns-67fdf7998c-l6c45\" (UID: \"dd86fe7b-977d-481c-bf72-c651287d4ca9\") " pod="openstack/dnsmasq-dns-67fdf7998c-l6c45" Jan 27 09:11:03 crc kubenswrapper[4985]: I0127 09:11:03.234850 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 27 09:11:03 crc kubenswrapper[4985]: I0127 09:11:03.333849 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dd86fe7b-977d-481c-bf72-c651287d4ca9-ovsdbserver-sb\") pod \"dnsmasq-dns-67fdf7998c-l6c45\" (UID: \"dd86fe7b-977d-481c-bf72-c651287d4ca9\") " pod="openstack/dnsmasq-dns-67fdf7998c-l6c45" Jan 27 09:11:03 crc kubenswrapper[4985]: I0127 09:11:03.334187 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd86fe7b-977d-481c-bf72-c651287d4ca9-config\") pod \"dnsmasq-dns-67fdf7998c-l6c45\" (UID: \"dd86fe7b-977d-481c-bf72-c651287d4ca9\") " pod="openstack/dnsmasq-dns-67fdf7998c-l6c45" Jan 27 09:11:03 crc kubenswrapper[4985]: I0127 09:11:03.334416 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dd86fe7b-977d-481c-bf72-c651287d4ca9-dns-svc\") pod \"dnsmasq-dns-67fdf7998c-l6c45\" (UID: \"dd86fe7b-977d-481c-bf72-c651287d4ca9\") " pod="openstack/dnsmasq-dns-67fdf7998c-l6c45" Jan 27 09:11:03 crc kubenswrapper[4985]: I0127 09:11:03.334614 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/dd86fe7b-977d-481c-bf72-c651287d4ca9-ovsdbserver-nb\") pod \"dnsmasq-dns-67fdf7998c-l6c45\" (UID: \"dd86fe7b-977d-481c-bf72-c651287d4ca9\") " pod="openstack/dnsmasq-dns-67fdf7998c-l6c45" Jan 27 09:11:03 crc kubenswrapper[4985]: I0127 09:11:03.334807 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fp9hw\" (UniqueName: \"kubernetes.io/projected/dd86fe7b-977d-481c-bf72-c651287d4ca9-kube-api-access-fp9hw\") pod \"dnsmasq-dns-67fdf7998c-l6c45\" (UID: \"dd86fe7b-977d-481c-bf72-c651287d4ca9\") " pod="openstack/dnsmasq-dns-67fdf7998c-l6c45" Jan 27 09:11:03 crc kubenswrapper[4985]: I0127 09:11:03.335803 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd86fe7b-977d-481c-bf72-c651287d4ca9-config\") pod \"dnsmasq-dns-67fdf7998c-l6c45\" (UID: \"dd86fe7b-977d-481c-bf72-c651287d4ca9\") " pod="openstack/dnsmasq-dns-67fdf7998c-l6c45" Jan 27 09:11:03 crc kubenswrapper[4985]: I0127 09:11:03.336090 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dd86fe7b-977d-481c-bf72-c651287d4ca9-dns-svc\") pod \"dnsmasq-dns-67fdf7998c-l6c45\" (UID: \"dd86fe7b-977d-481c-bf72-c651287d4ca9\") " pod="openstack/dnsmasq-dns-67fdf7998c-l6c45" Jan 27 09:11:03 crc kubenswrapper[4985]: I0127 09:11:03.336499 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dd86fe7b-977d-481c-bf72-c651287d4ca9-ovsdbserver-sb\") pod \"dnsmasq-dns-67fdf7998c-l6c45\" (UID: \"dd86fe7b-977d-481c-bf72-c651287d4ca9\") " pod="openstack/dnsmasq-dns-67fdf7998c-l6c45" Jan 27 09:11:03 crc kubenswrapper[4985]: I0127 09:11:03.336924 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dd86fe7b-977d-481c-bf72-c651287d4ca9-ovsdbserver-nb\") pod 
\"dnsmasq-dns-67fdf7998c-l6c45\" (UID: \"dd86fe7b-977d-481c-bf72-c651287d4ca9\") " pod="openstack/dnsmasq-dns-67fdf7998c-l6c45" Jan 27 09:11:03 crc kubenswrapper[4985]: I0127 09:11:03.344301 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 27 09:11:03 crc kubenswrapper[4985]: I0127 09:11:03.375691 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fp9hw\" (UniqueName: \"kubernetes.io/projected/dd86fe7b-977d-481c-bf72-c651287d4ca9-kube-api-access-fp9hw\") pod \"dnsmasq-dns-67fdf7998c-l6c45\" (UID: \"dd86fe7b-977d-481c-bf72-c651287d4ca9\") " pod="openstack/dnsmasq-dns-67fdf7998c-l6c45" Jan 27 09:11:03 crc kubenswrapper[4985]: E0127 09:11:03.568902 4985 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Jan 27 09:11:03 crc kubenswrapper[4985]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/cf1cd7b0-ecf5-445e-8961-7e92b4130a28/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 27 09:11:03 crc kubenswrapper[4985]: > podSandboxID="9be95473755512e1f047c8b3b8aa0c26cd9212f53683cfe961673547c7041f32" Jan 27 09:11:03 crc kubenswrapper[4985]: E0127 09:11:03.569165 4985 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 27 09:11:03 crc kubenswrapper[4985]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv 
--log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5bfh5d7h8hd8h664h564hfbh5d4h5f5h55h5fch66h675hb8h65bh64dhbh5dchc9h66fh5dbhf4h658h64ch55bhbh65h55dh597h68dh579hbdq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-nb,SubPath:ovsdbserver-nb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zh457,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 
},Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-7878659675-7rrjk_openstack(cf1cd7b0-ecf5-445e-8961-7e92b4130a28): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/cf1cd7b0-ecf5-445e-8961-7e92b4130a28/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 27 09:11:03 crc kubenswrapper[4985]: > logger="UnhandledError" Jan 27 09:11:03 crc kubenswrapper[4985]: E0127 09:11:03.570414 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/cf1cd7b0-ecf5-445e-8961-7e92b4130a28/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-7878659675-7rrjk" podUID="cf1cd7b0-ecf5-445e-8961-7e92b4130a28" Jan 27 09:11:03 crc kubenswrapper[4985]: I0127 09:11:03.593029 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67fdf7998c-l6c45" Jan 27 09:11:03 crc kubenswrapper[4985]: E0127 09:11:03.637200 4985 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Jan 27 09:11:03 crc kubenswrapper[4985]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/617061c6-cd75-4259-979e-60b51d0de147/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 27 09:11:03 crc kubenswrapper[4985]: > podSandboxID="d1f86612d641f3867d569958ad0dc2993328734d4cd9b1a12f176270f9d254c3" Jan 27 09:11:03 crc kubenswrapper[4985]: E0127 09:11:03.637414 4985 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 27 09:11:03 crc kubenswrapper[4985]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv 
--log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n599h5cbh7ch5d4h66fh676hdbh546h95h88h5ffh55ch7fhch57ch687hddhc7h5fdh57dh674h56fh64ch98h9bh557h55dh646h54ch54fh5c4h597q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-nb,SubPath:ovsdbserver-nb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-sb,SubPath:ovsdbserver-sb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bwst8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 
},Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-586b989cdc-v2s76_openstack(617061c6-cd75-4259-979e-60b51d0de147): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/617061c6-cd75-4259-979e-60b51d0de147/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 27 09:11:03 crc kubenswrapper[4985]: > logger="UnhandledError" Jan 27 09:11:03 crc kubenswrapper[4985]: E0127 09:11:03.638602 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/617061c6-cd75-4259-979e-60b51d0de147/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-586b989cdc-v2s76" podUID="617061c6-cd75-4259-979e-60b51d0de147" Jan 27 09:11:03 crc kubenswrapper[4985]: I0127 09:11:03.710366 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-svzqc" Jan 27 09:11:03 crc kubenswrapper[4985]: I0127 09:11:03.745328 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d99c676e-d269-4d8a-a67a-71d591467b7c-dns-svc\") pod \"d99c676e-d269-4d8a-a67a-71d591467b7c\" (UID: \"d99c676e-d269-4d8a-a67a-71d591467b7c\") " Jan 27 09:11:03 crc kubenswrapper[4985]: I0127 09:11:03.745373 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d99c676e-d269-4d8a-a67a-71d591467b7c-config\") pod \"d99c676e-d269-4d8a-a67a-71d591467b7c\" (UID: \"d99c676e-d269-4d8a-a67a-71d591467b7c\") " Jan 27 09:11:03 crc kubenswrapper[4985]: I0127 09:11:03.745412 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfvpt\" (UniqueName: \"kubernetes.io/projected/d99c676e-d269-4d8a-a67a-71d591467b7c-kube-api-access-kfvpt\") pod \"d99c676e-d269-4d8a-a67a-71d591467b7c\" (UID: \"d99c676e-d269-4d8a-a67a-71d591467b7c\") " Jan 27 09:11:03 crc kubenswrapper[4985]: I0127 09:11:03.751810 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d99c676e-d269-4d8a-a67a-71d591467b7c-kube-api-access-kfvpt" (OuterVolumeSpecName: "kube-api-access-kfvpt") pod "d99c676e-d269-4d8a-a67a-71d591467b7c" (UID: "d99c676e-d269-4d8a-a67a-71d591467b7c"). InnerVolumeSpecName "kube-api-access-kfvpt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:11:03 crc kubenswrapper[4985]: I0127 09:11:03.793350 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d99c676e-d269-4d8a-a67a-71d591467b7c-config" (OuterVolumeSpecName: "config") pod "d99c676e-d269-4d8a-a67a-71d591467b7c" (UID: "d99c676e-d269-4d8a-a67a-71d591467b7c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:11:03 crc kubenswrapper[4985]: I0127 09:11:03.814505 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d99c676e-d269-4d8a-a67a-71d591467b7c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d99c676e-d269-4d8a-a67a-71d591467b7c" (UID: "d99c676e-d269-4d8a-a67a-71d591467b7c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:11:03 crc kubenswrapper[4985]: I0127 09:11:03.848609 4985 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d99c676e-d269-4d8a-a67a-71d591467b7c-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 09:11:03 crc kubenswrapper[4985]: I0127 09:11:03.848648 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d99c676e-d269-4d8a-a67a-71d591467b7c-config\") on node \"crc\" DevicePath \"\"" Jan 27 09:11:03 crc kubenswrapper[4985]: I0127 09:11:03.848673 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfvpt\" (UniqueName: \"kubernetes.io/projected/d99c676e-d269-4d8a-a67a-71d591467b7c-kube-api-access-kfvpt\") on node \"crc\" DevicePath \"\"" Jan 27 09:11:03 crc kubenswrapper[4985]: I0127 09:11:03.873587 4985 generic.go:334] "Generic (PLEG): container finished" podID="d99c676e-d269-4d8a-a67a-71d591467b7c" containerID="f6b2939f2240bc5fa4c229c1de71f67398a7bc094c65a5a8d9356c2adb79c216" exitCode=0 Jan 27 09:11:03 crc kubenswrapper[4985]: I0127 09:11:03.873685 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-svzqc" event={"ID":"d99c676e-d269-4d8a-a67a-71d591467b7c","Type":"ContainerDied","Data":"f6b2939f2240bc5fa4c229c1de71f67398a7bc094c65a5a8d9356c2adb79c216"} Jan 27 09:11:03 crc kubenswrapper[4985]: I0127 09:11:03.873738 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-svzqc" 
event={"ID":"d99c676e-d269-4d8a-a67a-71d591467b7c","Type":"ContainerDied","Data":"95dca40623f40802187a4523ba4db1408b08b22f806f0a9510698a281cf542ab"} Jan 27 09:11:03 crc kubenswrapper[4985]: I0127 09:11:03.873793 4985 scope.go:117] "RemoveContainer" containerID="f6b2939f2240bc5fa4c229c1de71f67398a7bc094c65a5a8d9356c2adb79c216" Jan 27 09:11:03 crc kubenswrapper[4985]: I0127 09:11:03.873798 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-svzqc" Jan 27 09:11:03 crc kubenswrapper[4985]: I0127 09:11:03.875029 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"49843c3e-2aeb-43e9-8041-6f212a7fcc7c","Type":"ContainerStarted","Data":"484d9cb6e2d1d7fb1bb0552f96e3ca711ddc5bfdb3d0d9f74924af21414f9f0c"} Jan 27 09:11:03 crc kubenswrapper[4985]: I0127 09:11:03.892346 4985 scope.go:117] "RemoveContainer" containerID="90e2ef79a122e12e80eeb94c0701f25faf9ec3f4ec503dc977c8f81e011c80bc" Jan 27 09:11:03 crc kubenswrapper[4985]: I0127 09:11:03.922257 4985 scope.go:117] "RemoveContainer" containerID="f6b2939f2240bc5fa4c229c1de71f67398a7bc094c65a5a8d9356c2adb79c216" Jan 27 09:11:03 crc kubenswrapper[4985]: I0127 09:11:03.935690 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-svzqc"] Jan 27 09:11:03 crc kubenswrapper[4985]: E0127 09:11:03.936413 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6b2939f2240bc5fa4c229c1de71f67398a7bc094c65a5a8d9356c2adb79c216\": container with ID starting with f6b2939f2240bc5fa4c229c1de71f67398a7bc094c65a5a8d9356c2adb79c216 not found: ID does not exist" containerID="f6b2939f2240bc5fa4c229c1de71f67398a7bc094c65a5a8d9356c2adb79c216" Jan 27 09:11:03 crc kubenswrapper[4985]: I0127 09:11:03.936451 4985 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f6b2939f2240bc5fa4c229c1de71f67398a7bc094c65a5a8d9356c2adb79c216"} err="failed to get container status \"f6b2939f2240bc5fa4c229c1de71f67398a7bc094c65a5a8d9356c2adb79c216\": rpc error: code = NotFound desc = could not find container \"f6b2939f2240bc5fa4c229c1de71f67398a7bc094c65a5a8d9356c2adb79c216\": container with ID starting with f6b2939f2240bc5fa4c229c1de71f67398a7bc094c65a5a8d9356c2adb79c216 not found: ID does not exist" Jan 27 09:11:03 crc kubenswrapper[4985]: I0127 09:11:03.936666 4985 scope.go:117] "RemoveContainer" containerID="90e2ef79a122e12e80eeb94c0701f25faf9ec3f4ec503dc977c8f81e011c80bc" Jan 27 09:11:03 crc kubenswrapper[4985]: E0127 09:11:03.938851 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90e2ef79a122e12e80eeb94c0701f25faf9ec3f4ec503dc977c8f81e011c80bc\": container with ID starting with 90e2ef79a122e12e80eeb94c0701f25faf9ec3f4ec503dc977c8f81e011c80bc not found: ID does not exist" containerID="90e2ef79a122e12e80eeb94c0701f25faf9ec3f4ec503dc977c8f81e011c80bc" Jan 27 09:11:03 crc kubenswrapper[4985]: I0127 09:11:03.938926 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90e2ef79a122e12e80eeb94c0701f25faf9ec3f4ec503dc977c8f81e011c80bc"} err="failed to get container status \"90e2ef79a122e12e80eeb94c0701f25faf9ec3f4ec503dc977c8f81e011c80bc\": rpc error: code = NotFound desc = could not find container \"90e2ef79a122e12e80eeb94c0701f25faf9ec3f4ec503dc977c8f81e011c80bc\": container with ID starting with 90e2ef79a122e12e80eeb94c0701f25faf9ec3f4ec503dc977c8f81e011c80bc not found: ID does not exist" Jan 27 09:11:03 crc kubenswrapper[4985]: I0127 09:11:03.948119 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-svzqc"] Jan 27 09:11:04 crc kubenswrapper[4985]: I0127 09:11:04.104975 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/dnsmasq-dns-67fdf7998c-l6c45"] Jan 27 09:11:04 crc kubenswrapper[4985]: I0127 09:11:04.228123 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 27 09:11:04 crc kubenswrapper[4985]: E0127 09:11:04.228479 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d99c676e-d269-4d8a-a67a-71d591467b7c" containerName="init" Jan 27 09:11:04 crc kubenswrapper[4985]: I0127 09:11:04.228495 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="d99c676e-d269-4d8a-a67a-71d591467b7c" containerName="init" Jan 27 09:11:04 crc kubenswrapper[4985]: E0127 09:11:04.228525 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d99c676e-d269-4d8a-a67a-71d591467b7c" containerName="dnsmasq-dns" Jan 27 09:11:04 crc kubenswrapper[4985]: I0127 09:11:04.228532 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="d99c676e-d269-4d8a-a67a-71d591467b7c" containerName="dnsmasq-dns" Jan 27 09:11:04 crc kubenswrapper[4985]: I0127 09:11:04.228705 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="d99c676e-d269-4d8a-a67a-71d591467b7c" containerName="dnsmasq-dns" Jan 27 09:11:04 crc kubenswrapper[4985]: I0127 09:11:04.233684 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 27 09:11:04 crc kubenswrapper[4985]: I0127 09:11:04.237454 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 27 09:11:04 crc kubenswrapper[4985]: I0127 09:11:04.237736 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 27 09:11:04 crc kubenswrapper[4985]: I0127 09:11:04.237772 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 27 09:11:04 crc kubenswrapper[4985]: I0127 09:11:04.237894 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-gqfkh" Jan 27 09:11:04 crc kubenswrapper[4985]: I0127 09:11:04.251799 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 27 09:11:04 crc kubenswrapper[4985]: I0127 09:11:04.287343 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7878659675-7rrjk" Jan 27 09:11:04 crc kubenswrapper[4985]: I0127 09:11:04.356763 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7d4dd\" (UniqueName: \"kubernetes.io/projected/50364737-e2dc-4bd7-ba5a-97f39e232236-kube-api-access-7d4dd\") pod \"swift-storage-0\" (UID: \"50364737-e2dc-4bd7-ba5a-97f39e232236\") " pod="openstack/swift-storage-0" Jan 27 09:11:04 crc kubenswrapper[4985]: I0127 09:11:04.357132 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50364737-e2dc-4bd7-ba5a-97f39e232236-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"50364737-e2dc-4bd7-ba5a-97f39e232236\") " pod="openstack/swift-storage-0" Jan 27 09:11:04 crc kubenswrapper[4985]: I0127 09:11:04.357245 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/50364737-e2dc-4bd7-ba5a-97f39e232236-etc-swift\") pod \"swift-storage-0\" (UID: \"50364737-e2dc-4bd7-ba5a-97f39e232236\") " pod="openstack/swift-storage-0" Jan 27 09:11:04 crc kubenswrapper[4985]: I0127 09:11:04.357342 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"50364737-e2dc-4bd7-ba5a-97f39e232236\") " pod="openstack/swift-storage-0" Jan 27 09:11:04 crc kubenswrapper[4985]: I0127 09:11:04.357460 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/50364737-e2dc-4bd7-ba5a-97f39e232236-cache\") pod \"swift-storage-0\" (UID: \"50364737-e2dc-4bd7-ba5a-97f39e232236\") " pod="openstack/swift-storage-0" Jan 27 09:11:04 crc kubenswrapper[4985]: I0127 09:11:04.357588 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/50364737-e2dc-4bd7-ba5a-97f39e232236-lock\") pod \"swift-storage-0\" (UID: \"50364737-e2dc-4bd7-ba5a-97f39e232236\") " pod="openstack/swift-storage-0" Jan 27 09:11:04 crc kubenswrapper[4985]: I0127 09:11:04.458228 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zh457\" (UniqueName: \"kubernetes.io/projected/cf1cd7b0-ecf5-445e-8961-7e92b4130a28-kube-api-access-zh457\") pod \"cf1cd7b0-ecf5-445e-8961-7e92b4130a28\" (UID: \"cf1cd7b0-ecf5-445e-8961-7e92b4130a28\") " Jan 27 09:11:04 crc kubenswrapper[4985]: I0127 09:11:04.458280 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf1cd7b0-ecf5-445e-8961-7e92b4130a28-ovsdbserver-nb\") pod \"cf1cd7b0-ecf5-445e-8961-7e92b4130a28\" (UID: 
\"cf1cd7b0-ecf5-445e-8961-7e92b4130a28\") " Jan 27 09:11:04 crc kubenswrapper[4985]: I0127 09:11:04.458370 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf1cd7b0-ecf5-445e-8961-7e92b4130a28-config\") pod \"cf1cd7b0-ecf5-445e-8961-7e92b4130a28\" (UID: \"cf1cd7b0-ecf5-445e-8961-7e92b4130a28\") " Jan 27 09:11:04 crc kubenswrapper[4985]: I0127 09:11:04.458482 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf1cd7b0-ecf5-445e-8961-7e92b4130a28-dns-svc\") pod \"cf1cd7b0-ecf5-445e-8961-7e92b4130a28\" (UID: \"cf1cd7b0-ecf5-445e-8961-7e92b4130a28\") " Jan 27 09:11:04 crc kubenswrapper[4985]: I0127 09:11:04.458806 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7d4dd\" (UniqueName: \"kubernetes.io/projected/50364737-e2dc-4bd7-ba5a-97f39e232236-kube-api-access-7d4dd\") pod \"swift-storage-0\" (UID: \"50364737-e2dc-4bd7-ba5a-97f39e232236\") " pod="openstack/swift-storage-0" Jan 27 09:11:04 crc kubenswrapper[4985]: I0127 09:11:04.458867 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50364737-e2dc-4bd7-ba5a-97f39e232236-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"50364737-e2dc-4bd7-ba5a-97f39e232236\") " pod="openstack/swift-storage-0" Jan 27 09:11:04 crc kubenswrapper[4985]: I0127 09:11:04.458890 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/50364737-e2dc-4bd7-ba5a-97f39e232236-etc-swift\") pod \"swift-storage-0\" (UID: \"50364737-e2dc-4bd7-ba5a-97f39e232236\") " pod="openstack/swift-storage-0" Jan 27 09:11:04 crc kubenswrapper[4985]: I0127 09:11:04.458914 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"50364737-e2dc-4bd7-ba5a-97f39e232236\") " pod="openstack/swift-storage-0" Jan 27 09:11:04 crc kubenswrapper[4985]: I0127 09:11:04.458932 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/50364737-e2dc-4bd7-ba5a-97f39e232236-cache\") pod \"swift-storage-0\" (UID: \"50364737-e2dc-4bd7-ba5a-97f39e232236\") " pod="openstack/swift-storage-0" Jan 27 09:11:04 crc kubenswrapper[4985]: I0127 09:11:04.458952 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/50364737-e2dc-4bd7-ba5a-97f39e232236-lock\") pod \"swift-storage-0\" (UID: \"50364737-e2dc-4bd7-ba5a-97f39e232236\") " pod="openstack/swift-storage-0" Jan 27 09:11:04 crc kubenswrapper[4985]: I0127 09:11:04.459395 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/50364737-e2dc-4bd7-ba5a-97f39e232236-lock\") pod \"swift-storage-0\" (UID: \"50364737-e2dc-4bd7-ba5a-97f39e232236\") " pod="openstack/swift-storage-0" Jan 27 09:11:04 crc kubenswrapper[4985]: E0127 09:11:04.459502 4985 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 27 09:11:04 crc kubenswrapper[4985]: E0127 09:11:04.459533 4985 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 27 09:11:04 crc kubenswrapper[4985]: E0127 09:11:04.459573 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/50364737-e2dc-4bd7-ba5a-97f39e232236-etc-swift podName:50364737-e2dc-4bd7-ba5a-97f39e232236 nodeName:}" failed. No retries permitted until 2026-01-27 09:11:04.959556716 +0000 UTC m=+1049.250651557 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/50364737-e2dc-4bd7-ba5a-97f39e232236-etc-swift") pod "swift-storage-0" (UID: "50364737-e2dc-4bd7-ba5a-97f39e232236") : configmap "swift-ring-files" not found Jan 27 09:11:04 crc kubenswrapper[4985]: I0127 09:11:04.460097 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/50364737-e2dc-4bd7-ba5a-97f39e232236-cache\") pod \"swift-storage-0\" (UID: \"50364737-e2dc-4bd7-ba5a-97f39e232236\") " pod="openstack/swift-storage-0" Jan 27 09:11:04 crc kubenswrapper[4985]: I0127 09:11:04.460385 4985 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"50364737-e2dc-4bd7-ba5a-97f39e232236\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/swift-storage-0" Jan 27 09:11:04 crc kubenswrapper[4985]: I0127 09:11:04.464124 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50364737-e2dc-4bd7-ba5a-97f39e232236-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"50364737-e2dc-4bd7-ba5a-97f39e232236\") " pod="openstack/swift-storage-0" Jan 27 09:11:04 crc kubenswrapper[4985]: I0127 09:11:04.465765 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf1cd7b0-ecf5-445e-8961-7e92b4130a28-kube-api-access-zh457" (OuterVolumeSpecName: "kube-api-access-zh457") pod "cf1cd7b0-ecf5-445e-8961-7e92b4130a28" (UID: "cf1cd7b0-ecf5-445e-8961-7e92b4130a28"). InnerVolumeSpecName "kube-api-access-zh457". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:11:04 crc kubenswrapper[4985]: I0127 09:11:04.471328 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1aede4b4-2eaf-445c-9c8c-5cc8839d9609" path="/var/lib/kubelet/pods/1aede4b4-2eaf-445c-9c8c-5cc8839d9609/volumes" Jan 27 09:11:04 crc kubenswrapper[4985]: I0127 09:11:04.472292 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d99c676e-d269-4d8a-a67a-71d591467b7c" path="/var/lib/kubelet/pods/d99c676e-d269-4d8a-a67a-71d591467b7c/volumes" Jan 27 09:11:04 crc kubenswrapper[4985]: I0127 09:11:04.479604 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7d4dd\" (UniqueName: \"kubernetes.io/projected/50364737-e2dc-4bd7-ba5a-97f39e232236-kube-api-access-7d4dd\") pod \"swift-storage-0\" (UID: \"50364737-e2dc-4bd7-ba5a-97f39e232236\") " pod="openstack/swift-storage-0" Jan 27 09:11:04 crc kubenswrapper[4985]: I0127 09:11:04.489086 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"50364737-e2dc-4bd7-ba5a-97f39e232236\") " pod="openstack/swift-storage-0" Jan 27 09:11:04 crc kubenswrapper[4985]: I0127 09:11:04.501143 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf1cd7b0-ecf5-445e-8961-7e92b4130a28-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cf1cd7b0-ecf5-445e-8961-7e92b4130a28" (UID: "cf1cd7b0-ecf5-445e-8961-7e92b4130a28"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:11:04 crc kubenswrapper[4985]: I0127 09:11:04.508526 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf1cd7b0-ecf5-445e-8961-7e92b4130a28-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cf1cd7b0-ecf5-445e-8961-7e92b4130a28" (UID: "cf1cd7b0-ecf5-445e-8961-7e92b4130a28"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:11:04 crc kubenswrapper[4985]: I0127 09:11:04.509574 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf1cd7b0-ecf5-445e-8961-7e92b4130a28-config" (OuterVolumeSpecName: "config") pod "cf1cd7b0-ecf5-445e-8961-7e92b4130a28" (UID: "cf1cd7b0-ecf5-445e-8961-7e92b4130a28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:11:04 crc kubenswrapper[4985]: I0127 09:11:04.560990 4985 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf1cd7b0-ecf5-445e-8961-7e92b4130a28-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 09:11:04 crc kubenswrapper[4985]: I0127 09:11:04.561026 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zh457\" (UniqueName: \"kubernetes.io/projected/cf1cd7b0-ecf5-445e-8961-7e92b4130a28-kube-api-access-zh457\") on node \"crc\" DevicePath \"\"" Jan 27 09:11:04 crc kubenswrapper[4985]: I0127 09:11:04.561037 4985 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf1cd7b0-ecf5-445e-8961-7e92b4130a28-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 09:11:04 crc kubenswrapper[4985]: I0127 09:11:04.561049 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf1cd7b0-ecf5-445e-8961-7e92b4130a28-config\") on node \"crc\" DevicePath \"\"" Jan 27 09:11:04 crc kubenswrapper[4985]: I0127 
09:11:04.890014 4985 generic.go:334] "Generic (PLEG): container finished" podID="dd86fe7b-977d-481c-bf72-c651287d4ca9" containerID="7b484b12b47ba2a609743adbb990faeb6d438b9b3504da74864105562d6d2e13" exitCode=0 Jan 27 09:11:04 crc kubenswrapper[4985]: I0127 09:11:04.890095 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67fdf7998c-l6c45" event={"ID":"dd86fe7b-977d-481c-bf72-c651287d4ca9","Type":"ContainerDied","Data":"7b484b12b47ba2a609743adbb990faeb6d438b9b3504da74864105562d6d2e13"} Jan 27 09:11:04 crc kubenswrapper[4985]: I0127 09:11:04.890122 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67fdf7998c-l6c45" event={"ID":"dd86fe7b-977d-481c-bf72-c651287d4ca9","Type":"ContainerStarted","Data":"899651292b256eb0062ee10727f28805fc7ef2aee8915c6906d4080aa03c5e1b"} Jan 27 09:11:04 crc kubenswrapper[4985]: I0127 09:11:04.894112 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586b989cdc-v2s76" event={"ID":"617061c6-cd75-4259-979e-60b51d0de147","Type":"ContainerStarted","Data":"19298a5f384ad4d8104f24dd23f6e5242ddfbea565a2720ddfad1d6921de8bc4"} Jan 27 09:11:04 crc kubenswrapper[4985]: I0127 09:11:04.894326 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-586b989cdc-v2s76" Jan 27 09:11:04 crc kubenswrapper[4985]: I0127 09:11:04.895686 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7878659675-7rrjk" event={"ID":"cf1cd7b0-ecf5-445e-8961-7e92b4130a28","Type":"ContainerDied","Data":"9be95473755512e1f047c8b3b8aa0c26cd9212f53683cfe961673547c7041f32"} Jan 27 09:11:04 crc kubenswrapper[4985]: I0127 09:11:04.895727 4985 scope.go:117] "RemoveContainer" containerID="3a5d681a60ff376ad138e69e7b1ce49aa8bcc483bdd173c277e7559cfa08ace1" Jan 27 09:11:04 crc kubenswrapper[4985]: I0127 09:11:04.895790 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7878659675-7rrjk" Jan 27 09:11:04 crc kubenswrapper[4985]: I0127 09:11:04.931679 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-586b989cdc-v2s76" podStartSLOduration=3.931658925 podStartE2EDuration="3.931658925s" podCreationTimestamp="2026-01-27 09:11:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:11:04.930922585 +0000 UTC m=+1049.222017426" watchObservedRunningTime="2026-01-27 09:11:04.931658925 +0000 UTC m=+1049.222753776" Jan 27 09:11:04 crc kubenswrapper[4985]: I0127 09:11:04.967769 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/50364737-e2dc-4bd7-ba5a-97f39e232236-etc-swift\") pod \"swift-storage-0\" (UID: \"50364737-e2dc-4bd7-ba5a-97f39e232236\") " pod="openstack/swift-storage-0" Jan 27 09:11:04 crc kubenswrapper[4985]: E0127 09:11:04.968427 4985 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 27 09:11:04 crc kubenswrapper[4985]: E0127 09:11:04.968463 4985 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 27 09:11:04 crc kubenswrapper[4985]: E0127 09:11:04.968505 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/50364737-e2dc-4bd7-ba5a-97f39e232236-etc-swift podName:50364737-e2dc-4bd7-ba5a-97f39e232236 nodeName:}" failed. No retries permitted until 2026-01-27 09:11:05.968489374 +0000 UTC m=+1050.259584205 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/50364737-e2dc-4bd7-ba5a-97f39e232236-etc-swift") pod "swift-storage-0" (UID: "50364737-e2dc-4bd7-ba5a-97f39e232236") : configmap "swift-ring-files" not found Jan 27 09:11:04 crc kubenswrapper[4985]: I0127 09:11:04.995433 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7878659675-7rrjk"] Jan 27 09:11:05 crc kubenswrapper[4985]: I0127 09:11:05.004020 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7878659675-7rrjk"] Jan 27 09:11:05 crc kubenswrapper[4985]: I0127 09:11:05.904111 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"49843c3e-2aeb-43e9-8041-6f212a7fcc7c","Type":"ContainerStarted","Data":"17921a3379fca3b18165d9c3059013a43ddc351ddec6c23e786b036053d7f0a3"} Jan 27 09:11:05 crc kubenswrapper[4985]: I0127 09:11:05.904741 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"49843c3e-2aeb-43e9-8041-6f212a7fcc7c","Type":"ContainerStarted","Data":"a55d060a915287cb2635a7f5219cfa2d23c9696d05a371ddfdcd087c89fcb987"} Jan 27 09:11:05 crc kubenswrapper[4985]: I0127 09:11:05.904788 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 27 09:11:05 crc kubenswrapper[4985]: I0127 09:11:05.908968 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67fdf7998c-l6c45" event={"ID":"dd86fe7b-977d-481c-bf72-c651287d4ca9","Type":"ContainerStarted","Data":"4aa54a66c75ee82da6dcdbdbf76f293799fd6714929d63fbd73fc6d57a111a10"} Jan 27 09:11:05 crc kubenswrapper[4985]: I0127 09:11:05.961484 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.352996216 podStartE2EDuration="3.961468187s" podCreationTimestamp="2026-01-27 09:11:02 +0000 UTC" firstStartedPulling="2026-01-27 09:11:03.34130113 +0000 UTC 
m=+1047.632395971" lastFinishedPulling="2026-01-27 09:11:04.949773111 +0000 UTC m=+1049.240867942" observedRunningTime="2026-01-27 09:11:05.937526171 +0000 UTC m=+1050.228621012" watchObservedRunningTime="2026-01-27 09:11:05.961468187 +0000 UTC m=+1050.252563028" Jan 27 09:11:05 crc kubenswrapper[4985]: I0127 09:11:05.963267 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-67fdf7998c-l6c45" podStartSLOduration=2.963260966 podStartE2EDuration="2.963260966s" podCreationTimestamp="2026-01-27 09:11:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:11:05.959446062 +0000 UTC m=+1050.250540903" watchObservedRunningTime="2026-01-27 09:11:05.963260966 +0000 UTC m=+1050.254355797" Jan 27 09:11:05 crc kubenswrapper[4985]: I0127 09:11:05.985412 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/50364737-e2dc-4bd7-ba5a-97f39e232236-etc-swift\") pod \"swift-storage-0\" (UID: \"50364737-e2dc-4bd7-ba5a-97f39e232236\") " pod="openstack/swift-storage-0" Jan 27 09:11:05 crc kubenswrapper[4985]: E0127 09:11:05.985636 4985 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 27 09:11:05 crc kubenswrapper[4985]: E0127 09:11:05.985669 4985 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 27 09:11:05 crc kubenswrapper[4985]: E0127 09:11:05.985727 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/50364737-e2dc-4bd7-ba5a-97f39e232236-etc-swift podName:50364737-e2dc-4bd7-ba5a-97f39e232236 nodeName:}" failed. No retries permitted until 2026-01-27 09:11:07.985709171 +0000 UTC m=+1052.276804012 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/50364737-e2dc-4bd7-ba5a-97f39e232236-etc-swift") pod "swift-storage-0" (UID: "50364737-e2dc-4bd7-ba5a-97f39e232236") : configmap "swift-ring-files" not found Jan 27 09:11:06 crc kubenswrapper[4985]: I0127 09:11:06.462140 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf1cd7b0-ecf5-445e-8961-7e92b4130a28" path="/var/lib/kubelet/pods/cf1cd7b0-ecf5-445e-8961-7e92b4130a28/volumes" Jan 27 09:11:06 crc kubenswrapper[4985]: I0127 09:11:06.918030 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-67fdf7998c-l6c45" Jan 27 09:11:07 crc kubenswrapper[4985]: I0127 09:11:07.542598 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 27 09:11:07 crc kubenswrapper[4985]: I0127 09:11:07.608481 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 27 09:11:08 crc kubenswrapper[4985]: I0127 09:11:08.022743 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/50364737-e2dc-4bd7-ba5a-97f39e232236-etc-swift\") pod \"swift-storage-0\" (UID: \"50364737-e2dc-4bd7-ba5a-97f39e232236\") " pod="openstack/swift-storage-0" Jan 27 09:11:08 crc kubenswrapper[4985]: E0127 09:11:08.022911 4985 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 27 09:11:08 crc kubenswrapper[4985]: E0127 09:11:08.023112 4985 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 27 09:11:08 crc kubenswrapper[4985]: E0127 09:11:08.023162 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/50364737-e2dc-4bd7-ba5a-97f39e232236-etc-swift 
podName:50364737-e2dc-4bd7-ba5a-97f39e232236 nodeName:}" failed. No retries permitted until 2026-01-27 09:11:12.023144578 +0000 UTC m=+1056.314239419 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/50364737-e2dc-4bd7-ba5a-97f39e232236-etc-swift") pod "swift-storage-0" (UID: "50364737-e2dc-4bd7-ba5a-97f39e232236") : configmap "swift-ring-files" not found Jan 27 09:11:08 crc kubenswrapper[4985]: I0127 09:11:08.086883 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-gbqhb"] Jan 27 09:11:08 crc kubenswrapper[4985]: E0127 09:11:08.087238 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf1cd7b0-ecf5-445e-8961-7e92b4130a28" containerName="init" Jan 27 09:11:08 crc kubenswrapper[4985]: I0127 09:11:08.087256 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf1cd7b0-ecf5-445e-8961-7e92b4130a28" containerName="init" Jan 27 09:11:08 crc kubenswrapper[4985]: I0127 09:11:08.087396 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf1cd7b0-ecf5-445e-8961-7e92b4130a28" containerName="init" Jan 27 09:11:08 crc kubenswrapper[4985]: I0127 09:11:08.087916 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-gbqhb" Jan 27 09:11:08 crc kubenswrapper[4985]: I0127 09:11:08.090892 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 27 09:11:08 crc kubenswrapper[4985]: I0127 09:11:08.090963 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 27 09:11:08 crc kubenswrapper[4985]: I0127 09:11:08.112459 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 27 09:11:08 crc kubenswrapper[4985]: I0127 09:11:08.115163 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-gbqhb"] Jan 27 09:11:08 crc kubenswrapper[4985]: E0127 09:11:08.116753 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-7psgz ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-7psgz ring-data-devices scripts swiftconf]: context canceled" pod="openstack/swift-ring-rebalance-gbqhb" podUID="f220105a-0f88-4598-828f-a49942bef9dc" Jan 27 09:11:08 crc kubenswrapper[4985]: I0127 09:11:08.142689 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-nzqqd"] Jan 27 09:11:08 crc kubenswrapper[4985]: I0127 09:11:08.144054 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-nzqqd" Jan 27 09:11:08 crc kubenswrapper[4985]: I0127 09:11:08.158114 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-nzqqd"] Jan 27 09:11:08 crc kubenswrapper[4985]: I0127 09:11:08.164829 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-gbqhb"] Jan 27 09:11:08 crc kubenswrapper[4985]: I0127 09:11:08.227791 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/f220105a-0f88-4598-828f-a49942bef9dc-swiftconf\") pod \"swift-ring-rebalance-gbqhb\" (UID: \"f220105a-0f88-4598-828f-a49942bef9dc\") " pod="openstack/swift-ring-rebalance-gbqhb" Jan 27 09:11:08 crc kubenswrapper[4985]: I0127 09:11:08.227860 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/f220105a-0f88-4598-828f-a49942bef9dc-etc-swift\") pod \"swift-ring-rebalance-gbqhb\" (UID: \"f220105a-0f88-4598-828f-a49942bef9dc\") " pod="openstack/swift-ring-rebalance-gbqhb" Jan 27 09:11:08 crc kubenswrapper[4985]: I0127 09:11:08.227885 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/f220105a-0f88-4598-828f-a49942bef9dc-dispersionconf\") pod \"swift-ring-rebalance-gbqhb\" (UID: \"f220105a-0f88-4598-828f-a49942bef9dc\") " pod="openstack/swift-ring-rebalance-gbqhb" Jan 27 09:11:08 crc kubenswrapper[4985]: I0127 09:11:08.227915 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/f220105a-0f88-4598-828f-a49942bef9dc-ring-data-devices\") pod \"swift-ring-rebalance-gbqhb\" (UID: \"f220105a-0f88-4598-828f-a49942bef9dc\") " pod="openstack/swift-ring-rebalance-gbqhb" Jan 27 
09:11:08 crc kubenswrapper[4985]: I0127 09:11:08.227948 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7psgz\" (UniqueName: \"kubernetes.io/projected/f220105a-0f88-4598-828f-a49942bef9dc-kube-api-access-7psgz\") pod \"swift-ring-rebalance-gbqhb\" (UID: \"f220105a-0f88-4598-828f-a49942bef9dc\") " pod="openstack/swift-ring-rebalance-gbqhb" Jan 27 09:11:08 crc kubenswrapper[4985]: I0127 09:11:08.227981 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f220105a-0f88-4598-828f-a49942bef9dc-scripts\") pod \"swift-ring-rebalance-gbqhb\" (UID: \"f220105a-0f88-4598-828f-a49942bef9dc\") " pod="openstack/swift-ring-rebalance-gbqhb" Jan 27 09:11:08 crc kubenswrapper[4985]: I0127 09:11:08.228017 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/0c0c0d06-870e-469e-bacf-dc5aa8af9d3b-dispersionconf\") pod \"swift-ring-rebalance-nzqqd\" (UID: \"0c0c0d06-870e-469e-bacf-dc5aa8af9d3b\") " pod="openstack/swift-ring-rebalance-nzqqd" Jan 27 09:11:08 crc kubenswrapper[4985]: I0127 09:11:08.228035 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/0c0c0d06-870e-469e-bacf-dc5aa8af9d3b-etc-swift\") pod \"swift-ring-rebalance-nzqqd\" (UID: \"0c0c0d06-870e-469e-bacf-dc5aa8af9d3b\") " pod="openstack/swift-ring-rebalance-nzqqd" Jan 27 09:11:08 crc kubenswrapper[4985]: I0127 09:11:08.228072 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c0c0d06-870e-469e-bacf-dc5aa8af9d3b-combined-ca-bundle\") pod \"swift-ring-rebalance-nzqqd\" (UID: \"0c0c0d06-870e-469e-bacf-dc5aa8af9d3b\") " 
pod="openstack/swift-ring-rebalance-nzqqd" Jan 27 09:11:08 crc kubenswrapper[4985]: I0127 09:11:08.228093 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/0c0c0d06-870e-469e-bacf-dc5aa8af9d3b-swiftconf\") pod \"swift-ring-rebalance-nzqqd\" (UID: \"0c0c0d06-870e-469e-bacf-dc5aa8af9d3b\") " pod="openstack/swift-ring-rebalance-nzqqd" Jan 27 09:11:08 crc kubenswrapper[4985]: I0127 09:11:08.228117 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0c0c0d06-870e-469e-bacf-dc5aa8af9d3b-scripts\") pod \"swift-ring-rebalance-nzqqd\" (UID: \"0c0c0d06-870e-469e-bacf-dc5aa8af9d3b\") " pod="openstack/swift-ring-rebalance-nzqqd" Jan 27 09:11:08 crc kubenswrapper[4985]: I0127 09:11:08.228161 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlqlx\" (UniqueName: \"kubernetes.io/projected/0c0c0d06-870e-469e-bacf-dc5aa8af9d3b-kube-api-access-qlqlx\") pod \"swift-ring-rebalance-nzqqd\" (UID: \"0c0c0d06-870e-469e-bacf-dc5aa8af9d3b\") " pod="openstack/swift-ring-rebalance-nzqqd" Jan 27 09:11:08 crc kubenswrapper[4985]: I0127 09:11:08.228185 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f220105a-0f88-4598-828f-a49942bef9dc-combined-ca-bundle\") pod \"swift-ring-rebalance-gbqhb\" (UID: \"f220105a-0f88-4598-828f-a49942bef9dc\") " pod="openstack/swift-ring-rebalance-gbqhb" Jan 27 09:11:08 crc kubenswrapper[4985]: I0127 09:11:08.228232 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/0c0c0d06-870e-469e-bacf-dc5aa8af9d3b-ring-data-devices\") pod \"swift-ring-rebalance-nzqqd\" (UID: 
\"0c0c0d06-870e-469e-bacf-dc5aa8af9d3b\") " pod="openstack/swift-ring-rebalance-nzqqd" Jan 27 09:11:08 crc kubenswrapper[4985]: I0127 09:11:08.329874 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/0c0c0d06-870e-469e-bacf-dc5aa8af9d3b-ring-data-devices\") pod \"swift-ring-rebalance-nzqqd\" (UID: \"0c0c0d06-870e-469e-bacf-dc5aa8af9d3b\") " pod="openstack/swift-ring-rebalance-nzqqd" Jan 27 09:11:08 crc kubenswrapper[4985]: I0127 09:11:08.329967 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/f220105a-0f88-4598-828f-a49942bef9dc-swiftconf\") pod \"swift-ring-rebalance-gbqhb\" (UID: \"f220105a-0f88-4598-828f-a49942bef9dc\") " pod="openstack/swift-ring-rebalance-gbqhb" Jan 27 09:11:08 crc kubenswrapper[4985]: I0127 09:11:08.330007 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/f220105a-0f88-4598-828f-a49942bef9dc-etc-swift\") pod \"swift-ring-rebalance-gbqhb\" (UID: \"f220105a-0f88-4598-828f-a49942bef9dc\") " pod="openstack/swift-ring-rebalance-gbqhb" Jan 27 09:11:08 crc kubenswrapper[4985]: I0127 09:11:08.330029 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/f220105a-0f88-4598-828f-a49942bef9dc-dispersionconf\") pod \"swift-ring-rebalance-gbqhb\" (UID: \"f220105a-0f88-4598-828f-a49942bef9dc\") " pod="openstack/swift-ring-rebalance-gbqhb" Jan 27 09:11:08 crc kubenswrapper[4985]: I0127 09:11:08.330067 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/f220105a-0f88-4598-828f-a49942bef9dc-ring-data-devices\") pod \"swift-ring-rebalance-gbqhb\" (UID: \"f220105a-0f88-4598-828f-a49942bef9dc\") " pod="openstack/swift-ring-rebalance-gbqhb" 
Jan 27 09:11:08 crc kubenswrapper[4985]: I0127 09:11:08.330099 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7psgz\" (UniqueName: \"kubernetes.io/projected/f220105a-0f88-4598-828f-a49942bef9dc-kube-api-access-7psgz\") pod \"swift-ring-rebalance-gbqhb\" (UID: \"f220105a-0f88-4598-828f-a49942bef9dc\") " pod="openstack/swift-ring-rebalance-gbqhb" Jan 27 09:11:08 crc kubenswrapper[4985]: I0127 09:11:08.330138 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f220105a-0f88-4598-828f-a49942bef9dc-scripts\") pod \"swift-ring-rebalance-gbqhb\" (UID: \"f220105a-0f88-4598-828f-a49942bef9dc\") " pod="openstack/swift-ring-rebalance-gbqhb" Jan 27 09:11:08 crc kubenswrapper[4985]: I0127 09:11:08.330183 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/0c0c0d06-870e-469e-bacf-dc5aa8af9d3b-dispersionconf\") pod \"swift-ring-rebalance-nzqqd\" (UID: \"0c0c0d06-870e-469e-bacf-dc5aa8af9d3b\") " pod="openstack/swift-ring-rebalance-nzqqd" Jan 27 09:11:08 crc kubenswrapper[4985]: I0127 09:11:08.330208 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/0c0c0d06-870e-469e-bacf-dc5aa8af9d3b-etc-swift\") pod \"swift-ring-rebalance-nzqqd\" (UID: \"0c0c0d06-870e-469e-bacf-dc5aa8af9d3b\") " pod="openstack/swift-ring-rebalance-nzqqd" Jan 27 09:11:08 crc kubenswrapper[4985]: I0127 09:11:08.330250 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c0c0d06-870e-469e-bacf-dc5aa8af9d3b-combined-ca-bundle\") pod \"swift-ring-rebalance-nzqqd\" (UID: \"0c0c0d06-870e-469e-bacf-dc5aa8af9d3b\") " pod="openstack/swift-ring-rebalance-nzqqd" Jan 27 09:11:08 crc kubenswrapper[4985]: I0127 09:11:08.330276 4985 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/0c0c0d06-870e-469e-bacf-dc5aa8af9d3b-swiftconf\") pod \"swift-ring-rebalance-nzqqd\" (UID: \"0c0c0d06-870e-469e-bacf-dc5aa8af9d3b\") " pod="openstack/swift-ring-rebalance-nzqqd" Jan 27 09:11:08 crc kubenswrapper[4985]: I0127 09:11:08.330304 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0c0c0d06-870e-469e-bacf-dc5aa8af9d3b-scripts\") pod \"swift-ring-rebalance-nzqqd\" (UID: \"0c0c0d06-870e-469e-bacf-dc5aa8af9d3b\") " pod="openstack/swift-ring-rebalance-nzqqd" Jan 27 09:11:08 crc kubenswrapper[4985]: I0127 09:11:08.330358 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlqlx\" (UniqueName: \"kubernetes.io/projected/0c0c0d06-870e-469e-bacf-dc5aa8af9d3b-kube-api-access-qlqlx\") pod \"swift-ring-rebalance-nzqqd\" (UID: \"0c0c0d06-870e-469e-bacf-dc5aa8af9d3b\") " pod="openstack/swift-ring-rebalance-nzqqd" Jan 27 09:11:08 crc kubenswrapper[4985]: I0127 09:11:08.330383 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f220105a-0f88-4598-828f-a49942bef9dc-combined-ca-bundle\") pod \"swift-ring-rebalance-gbqhb\" (UID: \"f220105a-0f88-4598-828f-a49942bef9dc\") " pod="openstack/swift-ring-rebalance-gbqhb" Jan 27 09:11:08 crc kubenswrapper[4985]: I0127 09:11:08.331358 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f220105a-0f88-4598-828f-a49942bef9dc-scripts\") pod \"swift-ring-rebalance-gbqhb\" (UID: \"f220105a-0f88-4598-828f-a49942bef9dc\") " pod="openstack/swift-ring-rebalance-gbqhb" Jan 27 09:11:08 crc kubenswrapper[4985]: I0127 09:11:08.331888 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: 
\"kubernetes.io/configmap/0c0c0d06-870e-469e-bacf-dc5aa8af9d3b-ring-data-devices\") pod \"swift-ring-rebalance-nzqqd\" (UID: \"0c0c0d06-870e-469e-bacf-dc5aa8af9d3b\") " pod="openstack/swift-ring-rebalance-nzqqd" Jan 27 09:11:08 crc kubenswrapper[4985]: I0127 09:11:08.332301 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/f220105a-0f88-4598-828f-a49942bef9dc-ring-data-devices\") pod \"swift-ring-rebalance-gbqhb\" (UID: \"f220105a-0f88-4598-828f-a49942bef9dc\") " pod="openstack/swift-ring-rebalance-gbqhb" Jan 27 09:11:08 crc kubenswrapper[4985]: I0127 09:11:08.333069 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0c0c0d06-870e-469e-bacf-dc5aa8af9d3b-scripts\") pod \"swift-ring-rebalance-nzqqd\" (UID: \"0c0c0d06-870e-469e-bacf-dc5aa8af9d3b\") " pod="openstack/swift-ring-rebalance-nzqqd" Jan 27 09:11:08 crc kubenswrapper[4985]: I0127 09:11:08.333831 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/f220105a-0f88-4598-828f-a49942bef9dc-etc-swift\") pod \"swift-ring-rebalance-gbqhb\" (UID: \"f220105a-0f88-4598-828f-a49942bef9dc\") " pod="openstack/swift-ring-rebalance-gbqhb" Jan 27 09:11:08 crc kubenswrapper[4985]: I0127 09:11:08.334344 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/0c0c0d06-870e-469e-bacf-dc5aa8af9d3b-etc-swift\") pod \"swift-ring-rebalance-nzqqd\" (UID: \"0c0c0d06-870e-469e-bacf-dc5aa8af9d3b\") " pod="openstack/swift-ring-rebalance-nzqqd" Jan 27 09:11:08 crc kubenswrapper[4985]: I0127 09:11:08.340582 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/f220105a-0f88-4598-828f-a49942bef9dc-swiftconf\") pod \"swift-ring-rebalance-gbqhb\" (UID: 
\"f220105a-0f88-4598-828f-a49942bef9dc\") " pod="openstack/swift-ring-rebalance-gbqhb" Jan 27 09:11:08 crc kubenswrapper[4985]: I0127 09:11:08.344260 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c0c0d06-870e-469e-bacf-dc5aa8af9d3b-combined-ca-bundle\") pod \"swift-ring-rebalance-nzqqd\" (UID: \"0c0c0d06-870e-469e-bacf-dc5aa8af9d3b\") " pod="openstack/swift-ring-rebalance-nzqqd" Jan 27 09:11:08 crc kubenswrapper[4985]: I0127 09:11:08.347504 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f220105a-0f88-4598-828f-a49942bef9dc-combined-ca-bundle\") pod \"swift-ring-rebalance-gbqhb\" (UID: \"f220105a-0f88-4598-828f-a49942bef9dc\") " pod="openstack/swift-ring-rebalance-gbqhb" Jan 27 09:11:08 crc kubenswrapper[4985]: I0127 09:11:08.350077 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/0c0c0d06-870e-469e-bacf-dc5aa8af9d3b-swiftconf\") pod \"swift-ring-rebalance-nzqqd\" (UID: \"0c0c0d06-870e-469e-bacf-dc5aa8af9d3b\") " pod="openstack/swift-ring-rebalance-nzqqd" Jan 27 09:11:08 crc kubenswrapper[4985]: I0127 09:11:08.350108 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/f220105a-0f88-4598-828f-a49942bef9dc-dispersionconf\") pod \"swift-ring-rebalance-gbqhb\" (UID: \"f220105a-0f88-4598-828f-a49942bef9dc\") " pod="openstack/swift-ring-rebalance-gbqhb" Jan 27 09:11:08 crc kubenswrapper[4985]: I0127 09:11:08.364095 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/0c0c0d06-870e-469e-bacf-dc5aa8af9d3b-dispersionconf\") pod \"swift-ring-rebalance-nzqqd\" (UID: \"0c0c0d06-870e-469e-bacf-dc5aa8af9d3b\") " pod="openstack/swift-ring-rebalance-nzqqd" Jan 27 09:11:08 crc 
kubenswrapper[4985]: I0127 09:11:08.364804 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7psgz\" (UniqueName: \"kubernetes.io/projected/f220105a-0f88-4598-828f-a49942bef9dc-kube-api-access-7psgz\") pod \"swift-ring-rebalance-gbqhb\" (UID: \"f220105a-0f88-4598-828f-a49942bef9dc\") " pod="openstack/swift-ring-rebalance-gbqhb" Jan 27 09:11:08 crc kubenswrapper[4985]: I0127 09:11:08.374210 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlqlx\" (UniqueName: \"kubernetes.io/projected/0c0c0d06-870e-469e-bacf-dc5aa8af9d3b-kube-api-access-qlqlx\") pod \"swift-ring-rebalance-nzqqd\" (UID: \"0c0c0d06-870e-469e-bacf-dc5aa8af9d3b\") " pod="openstack/swift-ring-rebalance-nzqqd" Jan 27 09:11:08 crc kubenswrapper[4985]: I0127 09:11:08.503280 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-nzqqd" Jan 27 09:11:08 crc kubenswrapper[4985]: I0127 09:11:08.932412 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-gbqhb" Jan 27 09:11:08 crc kubenswrapper[4985]: I0127 09:11:08.945164 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-gbqhb" Jan 27 09:11:08 crc kubenswrapper[4985]: I0127 09:11:08.957918 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-nzqqd"] Jan 27 09:11:08 crc kubenswrapper[4985]: W0127 09:11:08.966922 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c0c0d06_870e_469e_bacf_dc5aa8af9d3b.slice/crio-c388601c58c9df1a2760fff827ae1ea876abab135df300117e5c6fcd7d1f275a WatchSource:0}: Error finding container c388601c58c9df1a2760fff827ae1ea876abab135df300117e5c6fcd7d1f275a: Status 404 returned error can't find the container with id c388601c58c9df1a2760fff827ae1ea876abab135df300117e5c6fcd7d1f275a Jan 27 09:11:09 crc kubenswrapper[4985]: I0127 09:11:09.047342 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/f220105a-0f88-4598-828f-a49942bef9dc-dispersionconf\") pod \"f220105a-0f88-4598-828f-a49942bef9dc\" (UID: \"f220105a-0f88-4598-828f-a49942bef9dc\") " Jan 27 09:11:09 crc kubenswrapper[4985]: I0127 09:11:09.048908 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/f220105a-0f88-4598-828f-a49942bef9dc-swiftconf\") pod \"f220105a-0f88-4598-828f-a49942bef9dc\" (UID: \"f220105a-0f88-4598-828f-a49942bef9dc\") " Jan 27 09:11:09 crc kubenswrapper[4985]: I0127 09:11:09.049017 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7psgz\" (UniqueName: \"kubernetes.io/projected/f220105a-0f88-4598-828f-a49942bef9dc-kube-api-access-7psgz\") pod \"f220105a-0f88-4598-828f-a49942bef9dc\" (UID: \"f220105a-0f88-4598-828f-a49942bef9dc\") " Jan 27 09:11:09 crc kubenswrapper[4985]: I0127 09:11:09.049075 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" 
(UniqueName: \"kubernetes.io/empty-dir/f220105a-0f88-4598-828f-a49942bef9dc-etc-swift\") pod \"f220105a-0f88-4598-828f-a49942bef9dc\" (UID: \"f220105a-0f88-4598-828f-a49942bef9dc\") " Jan 27 09:11:09 crc kubenswrapper[4985]: I0127 09:11:09.049125 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f220105a-0f88-4598-828f-a49942bef9dc-combined-ca-bundle\") pod \"f220105a-0f88-4598-828f-a49942bef9dc\" (UID: \"f220105a-0f88-4598-828f-a49942bef9dc\") " Jan 27 09:11:09 crc kubenswrapper[4985]: I0127 09:11:09.049201 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f220105a-0f88-4598-828f-a49942bef9dc-scripts\") pod \"f220105a-0f88-4598-828f-a49942bef9dc\" (UID: \"f220105a-0f88-4598-828f-a49942bef9dc\") " Jan 27 09:11:09 crc kubenswrapper[4985]: I0127 09:11:09.049227 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/f220105a-0f88-4598-828f-a49942bef9dc-ring-data-devices\") pod \"f220105a-0f88-4598-828f-a49942bef9dc\" (UID: \"f220105a-0f88-4598-828f-a49942bef9dc\") " Jan 27 09:11:09 crc kubenswrapper[4985]: I0127 09:11:09.050854 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f220105a-0f88-4598-828f-a49942bef9dc-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "f220105a-0f88-4598-828f-a49942bef9dc" (UID: "f220105a-0f88-4598-828f-a49942bef9dc"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:11:09 crc kubenswrapper[4985]: I0127 09:11:09.050874 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f220105a-0f88-4598-828f-a49942bef9dc-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "f220105a-0f88-4598-828f-a49942bef9dc" (UID: "f220105a-0f88-4598-828f-a49942bef9dc"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:11:09 crc kubenswrapper[4985]: I0127 09:11:09.051021 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f220105a-0f88-4598-828f-a49942bef9dc-scripts" (OuterVolumeSpecName: "scripts") pod "f220105a-0f88-4598-828f-a49942bef9dc" (UID: "f220105a-0f88-4598-828f-a49942bef9dc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:11:09 crc kubenswrapper[4985]: I0127 09:11:09.055758 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f220105a-0f88-4598-828f-a49942bef9dc-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "f220105a-0f88-4598-828f-a49942bef9dc" (UID: "f220105a-0f88-4598-828f-a49942bef9dc"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:11:09 crc kubenswrapper[4985]: I0127 09:11:09.056329 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f220105a-0f88-4598-828f-a49942bef9dc-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "f220105a-0f88-4598-828f-a49942bef9dc" (UID: "f220105a-0f88-4598-828f-a49942bef9dc"). InnerVolumeSpecName "dispersionconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:11:09 crc kubenswrapper[4985]: I0127 09:11:09.057048 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f220105a-0f88-4598-828f-a49942bef9dc-kube-api-access-7psgz" (OuterVolumeSpecName: "kube-api-access-7psgz") pod "f220105a-0f88-4598-828f-a49942bef9dc" (UID: "f220105a-0f88-4598-828f-a49942bef9dc"). InnerVolumeSpecName "kube-api-access-7psgz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:11:09 crc kubenswrapper[4985]: I0127 09:11:09.058033 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f220105a-0f88-4598-828f-a49942bef9dc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f220105a-0f88-4598-828f-a49942bef9dc" (UID: "f220105a-0f88-4598-828f-a49942bef9dc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:11:09 crc kubenswrapper[4985]: I0127 09:11:09.151864 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7psgz\" (UniqueName: \"kubernetes.io/projected/f220105a-0f88-4598-828f-a49942bef9dc-kube-api-access-7psgz\") on node \"crc\" DevicePath \"\"" Jan 27 09:11:09 crc kubenswrapper[4985]: I0127 09:11:09.151913 4985 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/f220105a-0f88-4598-828f-a49942bef9dc-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 27 09:11:09 crc kubenswrapper[4985]: I0127 09:11:09.151924 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f220105a-0f88-4598-828f-a49942bef9dc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:11:09 crc kubenswrapper[4985]: I0127 09:11:09.151936 4985 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f220105a-0f88-4598-828f-a49942bef9dc-scripts\") 
on node \"crc\" DevicePath \"\"" Jan 27 09:11:09 crc kubenswrapper[4985]: I0127 09:11:09.151945 4985 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/f220105a-0f88-4598-828f-a49942bef9dc-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 27 09:11:09 crc kubenswrapper[4985]: I0127 09:11:09.151955 4985 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/f220105a-0f88-4598-828f-a49942bef9dc-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 27 09:11:09 crc kubenswrapper[4985]: I0127 09:11:09.151965 4985 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/f220105a-0f88-4598-828f-a49942bef9dc-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 27 09:11:09 crc kubenswrapper[4985]: I0127 09:11:09.395009 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 27 09:11:09 crc kubenswrapper[4985]: I0127 09:11:09.395489 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 27 09:11:09 crc kubenswrapper[4985]: I0127 09:11:09.484273 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 27 09:11:09 crc kubenswrapper[4985]: I0127 09:11:09.610074 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-j5r52"] Jan 27 09:11:09 crc kubenswrapper[4985]: I0127 09:11:09.611723 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-j5r52" Jan 27 09:11:09 crc kubenswrapper[4985]: I0127 09:11:09.614130 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 27 09:11:09 crc kubenswrapper[4985]: I0127 09:11:09.624529 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-j5r52"] Jan 27 09:11:09 crc kubenswrapper[4985]: I0127 09:11:09.762849 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3fe7c41c-72d2-4f20-9098-c0c7722a8ef8-operator-scripts\") pod \"root-account-create-update-j5r52\" (UID: \"3fe7c41c-72d2-4f20-9098-c0c7722a8ef8\") " pod="openstack/root-account-create-update-j5r52" Jan 27 09:11:09 crc kubenswrapper[4985]: I0127 09:11:09.762940 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5d2kt\" (UniqueName: \"kubernetes.io/projected/3fe7c41c-72d2-4f20-9098-c0c7722a8ef8-kube-api-access-5d2kt\") pod \"root-account-create-update-j5r52\" (UID: \"3fe7c41c-72d2-4f20-9098-c0c7722a8ef8\") " pod="openstack/root-account-create-update-j5r52" Jan 27 09:11:09 crc kubenswrapper[4985]: I0127 09:11:09.864497 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3fe7c41c-72d2-4f20-9098-c0c7722a8ef8-operator-scripts\") pod \"root-account-create-update-j5r52\" (UID: \"3fe7c41c-72d2-4f20-9098-c0c7722a8ef8\") " pod="openstack/root-account-create-update-j5r52" Jan 27 09:11:09 crc kubenswrapper[4985]: I0127 09:11:09.864601 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5d2kt\" (UniqueName: \"kubernetes.io/projected/3fe7c41c-72d2-4f20-9098-c0c7722a8ef8-kube-api-access-5d2kt\") pod \"root-account-create-update-j5r52\" (UID: 
\"3fe7c41c-72d2-4f20-9098-c0c7722a8ef8\") " pod="openstack/root-account-create-update-j5r52" Jan 27 09:11:09 crc kubenswrapper[4985]: I0127 09:11:09.865789 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3fe7c41c-72d2-4f20-9098-c0c7722a8ef8-operator-scripts\") pod \"root-account-create-update-j5r52\" (UID: \"3fe7c41c-72d2-4f20-9098-c0c7722a8ef8\") " pod="openstack/root-account-create-update-j5r52" Jan 27 09:11:09 crc kubenswrapper[4985]: I0127 09:11:09.903705 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5d2kt\" (UniqueName: \"kubernetes.io/projected/3fe7c41c-72d2-4f20-9098-c0c7722a8ef8-kube-api-access-5d2kt\") pod \"root-account-create-update-j5r52\" (UID: \"3fe7c41c-72d2-4f20-9098-c0c7722a8ef8\") " pod="openstack/root-account-create-update-j5r52" Jan 27 09:11:09 crc kubenswrapper[4985]: I0127 09:11:09.936595 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-j5r52" Jan 27 09:11:09 crc kubenswrapper[4985]: I0127 09:11:09.944230 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-nzqqd" event={"ID":"0c0c0d06-870e-469e-bacf-dc5aa8af9d3b","Type":"ContainerStarted","Data":"c388601c58c9df1a2760fff827ae1ea876abab135df300117e5c6fcd7d1f275a"} Jan 27 09:11:09 crc kubenswrapper[4985]: I0127 09:11:09.944296 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-gbqhb" Jan 27 09:11:09 crc kubenswrapper[4985]: I0127 09:11:09.997152 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-gbqhb"] Jan 27 09:11:10 crc kubenswrapper[4985]: I0127 09:11:10.005997 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-gbqhb"] Jan 27 09:11:10 crc kubenswrapper[4985]: I0127 09:11:10.042338 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 27 09:11:10 crc kubenswrapper[4985]: I0127 09:11:10.469050 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f220105a-0f88-4598-828f-a49942bef9dc" path="/var/lib/kubelet/pods/f220105a-0f88-4598-828f-a49942bef9dc/volumes" Jan 27 09:11:10 crc kubenswrapper[4985]: I0127 09:11:10.476272 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-j5r52"] Jan 27 09:11:10 crc kubenswrapper[4985]: I0127 09:11:10.798230 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-zb58b"] Jan 27 09:11:10 crc kubenswrapper[4985]: I0127 09:11:10.800058 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-zb58b" Jan 27 09:11:10 crc kubenswrapper[4985]: I0127 09:11:10.807558 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-zb58b"] Jan 27 09:11:10 crc kubenswrapper[4985]: I0127 09:11:10.891562 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkkgv\" (UniqueName: \"kubernetes.io/projected/04009233-f269-4fda-b5ba-4a806e56b4ea-kube-api-access-pkkgv\") pod \"keystone-db-create-zb58b\" (UID: \"04009233-f269-4fda-b5ba-4a806e56b4ea\") " pod="openstack/keystone-db-create-zb58b" Jan 27 09:11:10 crc kubenswrapper[4985]: I0127 09:11:10.893072 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/04009233-f269-4fda-b5ba-4a806e56b4ea-operator-scripts\") pod \"keystone-db-create-zb58b\" (UID: \"04009233-f269-4fda-b5ba-4a806e56b4ea\") " pod="openstack/keystone-db-create-zb58b" Jan 27 09:11:10 crc kubenswrapper[4985]: I0127 09:11:10.923486 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-81c0-account-create-update-p7n55"] Jan 27 09:11:10 crc kubenswrapper[4985]: I0127 09:11:10.924761 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-81c0-account-create-update-p7n55" Jan 27 09:11:10 crc kubenswrapper[4985]: I0127 09:11:10.927617 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 27 09:11:10 crc kubenswrapper[4985]: I0127 09:11:10.942328 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-81c0-account-create-update-p7n55"] Jan 27 09:11:10 crc kubenswrapper[4985]: I0127 09:11:10.994857 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c5fea9a-94ec-4d40-a9ce-e8245a49f14e-operator-scripts\") pod \"keystone-81c0-account-create-update-p7n55\" (UID: \"0c5fea9a-94ec-4d40-a9ce-e8245a49f14e\") " pod="openstack/keystone-81c0-account-create-update-p7n55" Jan 27 09:11:10 crc kubenswrapper[4985]: I0127 09:11:10.995471 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/04009233-f269-4fda-b5ba-4a806e56b4ea-operator-scripts\") pod \"keystone-db-create-zb58b\" (UID: \"04009233-f269-4fda-b5ba-4a806e56b4ea\") " pod="openstack/keystone-db-create-zb58b" Jan 27 09:11:10 crc kubenswrapper[4985]: I0127 09:11:10.995536 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddtqf\" (UniqueName: \"kubernetes.io/projected/0c5fea9a-94ec-4d40-a9ce-e8245a49f14e-kube-api-access-ddtqf\") pod \"keystone-81c0-account-create-update-p7n55\" (UID: \"0c5fea9a-94ec-4d40-a9ce-e8245a49f14e\") " pod="openstack/keystone-81c0-account-create-update-p7n55" Jan 27 09:11:10 crc kubenswrapper[4985]: I0127 09:11:10.995669 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkkgv\" (UniqueName: \"kubernetes.io/projected/04009233-f269-4fda-b5ba-4a806e56b4ea-kube-api-access-pkkgv\") pod \"keystone-db-create-zb58b\" 
(UID: \"04009233-f269-4fda-b5ba-4a806e56b4ea\") " pod="openstack/keystone-db-create-zb58b" Jan 27 09:11:10 crc kubenswrapper[4985]: I0127 09:11:10.996525 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/04009233-f269-4fda-b5ba-4a806e56b4ea-operator-scripts\") pod \"keystone-db-create-zb58b\" (UID: \"04009233-f269-4fda-b5ba-4a806e56b4ea\") " pod="openstack/keystone-db-create-zb58b" Jan 27 09:11:11 crc kubenswrapper[4985]: I0127 09:11:11.025927 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkkgv\" (UniqueName: \"kubernetes.io/projected/04009233-f269-4fda-b5ba-4a806e56b4ea-kube-api-access-pkkgv\") pod \"keystone-db-create-zb58b\" (UID: \"04009233-f269-4fda-b5ba-4a806e56b4ea\") " pod="openstack/keystone-db-create-zb58b" Jan 27 09:11:11 crc kubenswrapper[4985]: I0127 09:11:11.097182 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c5fea9a-94ec-4d40-a9ce-e8245a49f14e-operator-scripts\") pod \"keystone-81c0-account-create-update-p7n55\" (UID: \"0c5fea9a-94ec-4d40-a9ce-e8245a49f14e\") " pod="openstack/keystone-81c0-account-create-update-p7n55" Jan 27 09:11:11 crc kubenswrapper[4985]: I0127 09:11:11.097235 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddtqf\" (UniqueName: \"kubernetes.io/projected/0c5fea9a-94ec-4d40-a9ce-e8245a49f14e-kube-api-access-ddtqf\") pod \"keystone-81c0-account-create-update-p7n55\" (UID: \"0c5fea9a-94ec-4d40-a9ce-e8245a49f14e\") " pod="openstack/keystone-81c0-account-create-update-p7n55" Jan 27 09:11:11 crc kubenswrapper[4985]: I0127 09:11:11.098941 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c5fea9a-94ec-4d40-a9ce-e8245a49f14e-operator-scripts\") pod 
\"keystone-81c0-account-create-update-p7n55\" (UID: \"0c5fea9a-94ec-4d40-a9ce-e8245a49f14e\") " pod="openstack/keystone-81c0-account-create-update-p7n55" Jan 27 09:11:11 crc kubenswrapper[4985]: I0127 09:11:11.128037 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-zb58b" Jan 27 09:11:11 crc kubenswrapper[4985]: I0127 09:11:11.136531 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddtqf\" (UniqueName: \"kubernetes.io/projected/0c5fea9a-94ec-4d40-a9ce-e8245a49f14e-kube-api-access-ddtqf\") pod \"keystone-81c0-account-create-update-p7n55\" (UID: \"0c5fea9a-94ec-4d40-a9ce-e8245a49f14e\") " pod="openstack/keystone-81c0-account-create-update-p7n55" Jan 27 09:11:11 crc kubenswrapper[4985]: I0127 09:11:11.148129 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-7h2cs"] Jan 27 09:11:11 crc kubenswrapper[4985]: I0127 09:11:11.149284 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-7h2cs" Jan 27 09:11:11 crc kubenswrapper[4985]: I0127 09:11:11.171986 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-7h2cs"] Jan 27 09:11:11 crc kubenswrapper[4985]: I0127 09:11:11.199585 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca4dfbd8-ed4c-403a-a8dd-8cfc8ee54f0e-operator-scripts\") pod \"placement-db-create-7h2cs\" (UID: \"ca4dfbd8-ed4c-403a-a8dd-8cfc8ee54f0e\") " pod="openstack/placement-db-create-7h2cs" Jan 27 09:11:11 crc kubenswrapper[4985]: I0127 09:11:11.199856 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npzms\" (UniqueName: \"kubernetes.io/projected/ca4dfbd8-ed4c-403a-a8dd-8cfc8ee54f0e-kube-api-access-npzms\") pod \"placement-db-create-7h2cs\" (UID: \"ca4dfbd8-ed4c-403a-a8dd-8cfc8ee54f0e\") " pod="openstack/placement-db-create-7h2cs" Jan 27 09:11:11 crc kubenswrapper[4985]: I0127 09:11:11.239966 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-81c0-account-create-update-p7n55" Jan 27 09:11:11 crc kubenswrapper[4985]: I0127 09:11:11.294487 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-107f-account-create-update-jv8z8"] Jan 27 09:11:11 crc kubenswrapper[4985]: I0127 09:11:11.295867 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-107f-account-create-update-jv8z8" Jan 27 09:11:11 crc kubenswrapper[4985]: I0127 09:11:11.301870 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 27 09:11:11 crc kubenswrapper[4985]: I0127 09:11:11.303122 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca4dfbd8-ed4c-403a-a8dd-8cfc8ee54f0e-operator-scripts\") pod \"placement-db-create-7h2cs\" (UID: \"ca4dfbd8-ed4c-403a-a8dd-8cfc8ee54f0e\") " pod="openstack/placement-db-create-7h2cs" Jan 27 09:11:11 crc kubenswrapper[4985]: I0127 09:11:11.303206 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npzms\" (UniqueName: \"kubernetes.io/projected/ca4dfbd8-ed4c-403a-a8dd-8cfc8ee54f0e-kube-api-access-npzms\") pod \"placement-db-create-7h2cs\" (UID: \"ca4dfbd8-ed4c-403a-a8dd-8cfc8ee54f0e\") " pod="openstack/placement-db-create-7h2cs" Jan 27 09:11:11 crc kubenswrapper[4985]: I0127 09:11:11.303972 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca4dfbd8-ed4c-403a-a8dd-8cfc8ee54f0e-operator-scripts\") pod \"placement-db-create-7h2cs\" (UID: \"ca4dfbd8-ed4c-403a-a8dd-8cfc8ee54f0e\") " pod="openstack/placement-db-create-7h2cs" Jan 27 09:11:11 crc kubenswrapper[4985]: I0127 09:11:11.319234 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-107f-account-create-update-jv8z8"] Jan 27 09:11:11 crc kubenswrapper[4985]: I0127 09:11:11.383445 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npzms\" (UniqueName: \"kubernetes.io/projected/ca4dfbd8-ed4c-403a-a8dd-8cfc8ee54f0e-kube-api-access-npzms\") pod \"placement-db-create-7h2cs\" (UID: \"ca4dfbd8-ed4c-403a-a8dd-8cfc8ee54f0e\") " pod="openstack/placement-db-create-7h2cs" Jan 27 09:11:11 crc 
kubenswrapper[4985]: I0127 09:11:11.404741 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkvgb\" (UniqueName: \"kubernetes.io/projected/1107e86e-6b40-4c4a-94bb-c478cf5954c8-kube-api-access-nkvgb\") pod \"placement-107f-account-create-update-jv8z8\" (UID: \"1107e86e-6b40-4c4a-94bb-c478cf5954c8\") " pod="openstack/placement-107f-account-create-update-jv8z8" Jan 27 09:11:11 crc kubenswrapper[4985]: I0127 09:11:11.404882 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1107e86e-6b40-4c4a-94bb-c478cf5954c8-operator-scripts\") pod \"placement-107f-account-create-update-jv8z8\" (UID: \"1107e86e-6b40-4c4a-94bb-c478cf5954c8\") " pod="openstack/placement-107f-account-create-update-jv8z8" Jan 27 09:11:11 crc kubenswrapper[4985]: I0127 09:11:11.499475 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-7h2cs" Jan 27 09:11:11 crc kubenswrapper[4985]: I0127 09:11:11.506439 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1107e86e-6b40-4c4a-94bb-c478cf5954c8-operator-scripts\") pod \"placement-107f-account-create-update-jv8z8\" (UID: \"1107e86e-6b40-4c4a-94bb-c478cf5954c8\") " pod="openstack/placement-107f-account-create-update-jv8z8" Jan 27 09:11:11 crc kubenswrapper[4985]: I0127 09:11:11.506555 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkvgb\" (UniqueName: \"kubernetes.io/projected/1107e86e-6b40-4c4a-94bb-c478cf5954c8-kube-api-access-nkvgb\") pod \"placement-107f-account-create-update-jv8z8\" (UID: \"1107e86e-6b40-4c4a-94bb-c478cf5954c8\") " pod="openstack/placement-107f-account-create-update-jv8z8" Jan 27 09:11:11 crc kubenswrapper[4985]: I0127 09:11:11.507782 4985 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1107e86e-6b40-4c4a-94bb-c478cf5954c8-operator-scripts\") pod \"placement-107f-account-create-update-jv8z8\" (UID: \"1107e86e-6b40-4c4a-94bb-c478cf5954c8\") " pod="openstack/placement-107f-account-create-update-jv8z8" Jan 27 09:11:11 crc kubenswrapper[4985]: I0127 09:11:11.526687 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-mhl9w"] Jan 27 09:11:11 crc kubenswrapper[4985]: I0127 09:11:11.542923 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-b30d-account-create-update-4h6gn"] Jan 27 09:11:11 crc kubenswrapper[4985]: I0127 09:11:11.543903 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-mhl9w" Jan 27 09:11:11 crc kubenswrapper[4985]: I0127 09:11:11.545130 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkvgb\" (UniqueName: \"kubernetes.io/projected/1107e86e-6b40-4c4a-94bb-c478cf5954c8-kube-api-access-nkvgb\") pod \"placement-107f-account-create-update-jv8z8\" (UID: \"1107e86e-6b40-4c4a-94bb-c478cf5954c8\") " pod="openstack/placement-107f-account-create-update-jv8z8" Jan 27 09:11:11 crc kubenswrapper[4985]: I0127 09:11:11.545462 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-b30d-account-create-update-4h6gn" Jan 27 09:11:11 crc kubenswrapper[4985]: I0127 09:11:11.547483 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 27 09:11:11 crc kubenswrapper[4985]: I0127 09:11:11.550630 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-mhl9w"] Jan 27 09:11:11 crc kubenswrapper[4985]: I0127 09:11:11.560775 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-b30d-account-create-update-4h6gn"] Jan 27 09:11:11 crc kubenswrapper[4985]: I0127 09:11:11.607941 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/744384a9-ac5a-46ef-a549-37046198fecf-operator-scripts\") pod \"glance-b30d-account-create-update-4h6gn\" (UID: \"744384a9-ac5a-46ef-a549-37046198fecf\") " pod="openstack/glance-b30d-account-create-update-4h6gn" Jan 27 09:11:11 crc kubenswrapper[4985]: I0127 09:11:11.608038 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ff42621-0c5e-44bb-ba09-a536658065b8-operator-scripts\") pod \"glance-db-create-mhl9w\" (UID: \"7ff42621-0c5e-44bb-ba09-a536658065b8\") " pod="openstack/glance-db-create-mhl9w" Jan 27 09:11:11 crc kubenswrapper[4985]: I0127 09:11:11.608230 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwlbv\" (UniqueName: \"kubernetes.io/projected/7ff42621-0c5e-44bb-ba09-a536658065b8-kube-api-access-nwlbv\") pod \"glance-db-create-mhl9w\" (UID: \"7ff42621-0c5e-44bb-ba09-a536658065b8\") " pod="openstack/glance-db-create-mhl9w" Jan 27 09:11:11 crc kubenswrapper[4985]: I0127 09:11:11.608367 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-xbsh2\" (UniqueName: \"kubernetes.io/projected/744384a9-ac5a-46ef-a549-37046198fecf-kube-api-access-xbsh2\") pod \"glance-b30d-account-create-update-4h6gn\" (UID: \"744384a9-ac5a-46ef-a549-37046198fecf\") " pod="openstack/glance-b30d-account-create-update-4h6gn" Jan 27 09:11:11 crc kubenswrapper[4985]: I0127 09:11:11.633374 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-107f-account-create-update-jv8z8" Jan 27 09:11:11 crc kubenswrapper[4985]: I0127 09:11:11.710911 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/744384a9-ac5a-46ef-a549-37046198fecf-operator-scripts\") pod \"glance-b30d-account-create-update-4h6gn\" (UID: \"744384a9-ac5a-46ef-a549-37046198fecf\") " pod="openstack/glance-b30d-account-create-update-4h6gn" Jan 27 09:11:11 crc kubenswrapper[4985]: I0127 09:11:11.711032 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ff42621-0c5e-44bb-ba09-a536658065b8-operator-scripts\") pod \"glance-db-create-mhl9w\" (UID: \"7ff42621-0c5e-44bb-ba09-a536658065b8\") " pod="openstack/glance-db-create-mhl9w" Jan 27 09:11:11 crc kubenswrapper[4985]: I0127 09:11:11.711095 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwlbv\" (UniqueName: \"kubernetes.io/projected/7ff42621-0c5e-44bb-ba09-a536658065b8-kube-api-access-nwlbv\") pod \"glance-db-create-mhl9w\" (UID: \"7ff42621-0c5e-44bb-ba09-a536658065b8\") " pod="openstack/glance-db-create-mhl9w" Jan 27 09:11:11 crc kubenswrapper[4985]: I0127 09:11:11.711191 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbsh2\" (UniqueName: \"kubernetes.io/projected/744384a9-ac5a-46ef-a549-37046198fecf-kube-api-access-xbsh2\") pod \"glance-b30d-account-create-update-4h6gn\" (UID: 
\"744384a9-ac5a-46ef-a549-37046198fecf\") " pod="openstack/glance-b30d-account-create-update-4h6gn" Jan 27 09:11:11 crc kubenswrapper[4985]: I0127 09:11:11.712363 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/744384a9-ac5a-46ef-a549-37046198fecf-operator-scripts\") pod \"glance-b30d-account-create-update-4h6gn\" (UID: \"744384a9-ac5a-46ef-a549-37046198fecf\") " pod="openstack/glance-b30d-account-create-update-4h6gn" Jan 27 09:11:11 crc kubenswrapper[4985]: I0127 09:11:11.712491 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ff42621-0c5e-44bb-ba09-a536658065b8-operator-scripts\") pod \"glance-db-create-mhl9w\" (UID: \"7ff42621-0c5e-44bb-ba09-a536658065b8\") " pod="openstack/glance-db-create-mhl9w" Jan 27 09:11:11 crc kubenswrapper[4985]: I0127 09:11:11.728158 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbsh2\" (UniqueName: \"kubernetes.io/projected/744384a9-ac5a-46ef-a549-37046198fecf-kube-api-access-xbsh2\") pod \"glance-b30d-account-create-update-4h6gn\" (UID: \"744384a9-ac5a-46ef-a549-37046198fecf\") " pod="openstack/glance-b30d-account-create-update-4h6gn" Jan 27 09:11:11 crc kubenswrapper[4985]: I0127 09:11:11.731768 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-586b989cdc-v2s76" Jan 27 09:11:11 crc kubenswrapper[4985]: I0127 09:11:11.735558 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwlbv\" (UniqueName: \"kubernetes.io/projected/7ff42621-0c5e-44bb-ba09-a536658065b8-kube-api-access-nwlbv\") pod \"glance-db-create-mhl9w\" (UID: \"7ff42621-0c5e-44bb-ba09-a536658065b8\") " pod="openstack/glance-db-create-mhl9w" Jan 27 09:11:11 crc kubenswrapper[4985]: I0127 09:11:11.827875 4985 patch_prober.go:28] interesting pod/machine-config-daemon-lp9n5 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 09:11:11 crc kubenswrapper[4985]: I0127 09:11:11.827945 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 09:11:11 crc kubenswrapper[4985]: I0127 09:11:11.827993 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" Jan 27 09:11:11 crc kubenswrapper[4985]: I0127 09:11:11.828738 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b9c506eebfd71669bdc5889fb3856b5801f49a73fb4a1c7c6112e1365072bb8b"} pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 09:11:11 crc kubenswrapper[4985]: I0127 09:11:11.828801 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" containerName="machine-config-daemon" containerID="cri-o://b9c506eebfd71669bdc5889fb3856b5801f49a73fb4a1c7c6112e1365072bb8b" gracePeriod=600 Jan 27 09:11:11 crc kubenswrapper[4985]: I0127 09:11:11.905087 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-mhl9w" Jan 27 09:11:11 crc kubenswrapper[4985]: I0127 09:11:11.913133 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-b30d-account-create-update-4h6gn" Jan 27 09:11:12 crc kubenswrapper[4985]: W0127 09:11:12.047159 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3fe7c41c_72d2_4f20_9098_c0c7722a8ef8.slice/crio-6b37a2ee31e7378849909bc167da237fc169c5af99e6f1cf2b8773350f4b7f44 WatchSource:0}: Error finding container 6b37a2ee31e7378849909bc167da237fc169c5af99e6f1cf2b8773350f4b7f44: Status 404 returned error can't find the container with id 6b37a2ee31e7378849909bc167da237fc169c5af99e6f1cf2b8773350f4b7f44 Jan 27 09:11:12 crc kubenswrapper[4985]: I0127 09:11:12.117571 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/50364737-e2dc-4bd7-ba5a-97f39e232236-etc-swift\") pod \"swift-storage-0\" (UID: \"50364737-e2dc-4bd7-ba5a-97f39e232236\") " pod="openstack/swift-storage-0" Jan 27 09:11:12 crc kubenswrapper[4985]: E0127 09:11:12.117841 4985 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 27 09:11:12 crc kubenswrapper[4985]: E0127 09:11:12.117870 4985 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 27 09:11:12 crc kubenswrapper[4985]: E0127 09:11:12.117923 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/50364737-e2dc-4bd7-ba5a-97f39e232236-etc-swift podName:50364737-e2dc-4bd7-ba5a-97f39e232236 nodeName:}" failed. No retries permitted until 2026-01-27 09:11:20.117904827 +0000 UTC m=+1064.408999668 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/50364737-e2dc-4bd7-ba5a-97f39e232236-etc-swift") pod "swift-storage-0" (UID: "50364737-e2dc-4bd7-ba5a-97f39e232236") : configmap "swift-ring-files" not found Jan 27 09:11:12 crc kubenswrapper[4985]: I0127 09:11:12.990863 4985 generic.go:334] "Generic (PLEG): container finished" podID="c066dd2f-48d4-4f4f-935d-0e772678e610" containerID="b9c506eebfd71669bdc5889fb3856b5801f49a73fb4a1c7c6112e1365072bb8b" exitCode=0 Jan 27 09:11:12 crc kubenswrapper[4985]: I0127 09:11:12.990950 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" event={"ID":"c066dd2f-48d4-4f4f-935d-0e772678e610","Type":"ContainerDied","Data":"b9c506eebfd71669bdc5889fb3856b5801f49a73fb4a1c7c6112e1365072bb8b"} Jan 27 09:11:12 crc kubenswrapper[4985]: I0127 09:11:12.991013 4985 scope.go:117] "RemoveContainer" containerID="0e4881b17c436c59c3960f9c1b311810a8744ae3641df94bf63c98dbfa41b302" Jan 27 09:11:12 crc kubenswrapper[4985]: I0127 09:11:12.995084 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-j5r52" event={"ID":"3fe7c41c-72d2-4f20-9098-c0c7722a8ef8","Type":"ContainerStarted","Data":"6b37a2ee31e7378849909bc167da237fc169c5af99e6f1cf2b8773350f4b7f44"} Jan 27 09:11:13 crc kubenswrapper[4985]: I0127 09:11:13.595669 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-67fdf7998c-l6c45" Jan 27 09:11:13 crc kubenswrapper[4985]: I0127 09:11:13.645377 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-586b989cdc-v2s76"] Jan 27 09:11:13 crc kubenswrapper[4985]: I0127 09:11:13.645633 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-586b989cdc-v2s76" podUID="617061c6-cd75-4259-979e-60b51d0de147" containerName="dnsmasq-dns" 
containerID="cri-o://19298a5f384ad4d8104f24dd23f6e5242ddfbea565a2720ddfad1d6921de8bc4" gracePeriod=10 Jan 27 09:11:13 crc kubenswrapper[4985]: I0127 09:11:13.855677 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-107f-account-create-update-jv8z8"] Jan 27 09:11:14 crc kubenswrapper[4985]: I0127 09:11:14.006124 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-j5r52" event={"ID":"3fe7c41c-72d2-4f20-9098-c0c7722a8ef8","Type":"ContainerStarted","Data":"5df0690db1a644bbc456cc4dac180784a7c72caeaa472edbdd12688c98a8acfc"} Jan 27 09:11:14 crc kubenswrapper[4985]: I0127 09:11:14.012989 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" event={"ID":"c066dd2f-48d4-4f4f-935d-0e772678e610","Type":"ContainerStarted","Data":"c0c7e1753712389ebd0528734323af45a2441fb966cbcf871cf1260ca96d824f"} Jan 27 09:11:14 crc kubenswrapper[4985]: I0127 09:11:14.016365 4985 generic.go:334] "Generic (PLEG): container finished" podID="617061c6-cd75-4259-979e-60b51d0de147" containerID="19298a5f384ad4d8104f24dd23f6e5242ddfbea565a2720ddfad1d6921de8bc4" exitCode=0 Jan 27 09:11:14 crc kubenswrapper[4985]: I0127 09:11:14.016446 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586b989cdc-v2s76" event={"ID":"617061c6-cd75-4259-979e-60b51d0de147","Type":"ContainerDied","Data":"19298a5f384ad4d8104f24dd23f6e5242ddfbea565a2720ddfad1d6921de8bc4"} Jan 27 09:11:14 crc kubenswrapper[4985]: I0127 09:11:14.021086 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-107f-account-create-update-jv8z8" event={"ID":"1107e86e-6b40-4c4a-94bb-c478cf5954c8","Type":"ContainerStarted","Data":"c16e3148dc3e98183abf9fc44b58975d05dad75d09058e36cc741372301e651b"} Jan 27 09:11:14 crc kubenswrapper[4985]: I0127 09:11:14.023471 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-nzqqd" 
event={"ID":"0c0c0d06-870e-469e-bacf-dc5aa8af9d3b","Type":"ContainerStarted","Data":"9b826e821127e87b8d91b69dbdf3452f8028331d5abb5d94fb4d8a90057f1715"} Jan 27 09:11:14 crc kubenswrapper[4985]: I0127 09:11:14.027643 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-j5r52" podStartSLOduration=5.027627494 podStartE2EDuration="5.027627494s" podCreationTimestamp="2026-01-27 09:11:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:11:14.026207155 +0000 UTC m=+1058.317301996" watchObservedRunningTime="2026-01-27 09:11:14.027627494 +0000 UTC m=+1058.318722345" Jan 27 09:11:14 crc kubenswrapper[4985]: I0127 09:11:14.055781 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-nzqqd" podStartSLOduration=1.653769827 podStartE2EDuration="6.055758815s" podCreationTimestamp="2026-01-27 09:11:08 +0000 UTC" firstStartedPulling="2026-01-27 09:11:08.968488476 +0000 UTC m=+1053.259583327" lastFinishedPulling="2026-01-27 09:11:13.370477474 +0000 UTC m=+1057.661572315" observedRunningTime="2026-01-27 09:11:14.050582513 +0000 UTC m=+1058.341677364" watchObservedRunningTime="2026-01-27 09:11:14.055758815 +0000 UTC m=+1058.346853656" Jan 27 09:11:14 crc kubenswrapper[4985]: I0127 09:11:14.347614 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-zb58b"] Jan 27 09:11:14 crc kubenswrapper[4985]: I0127 09:11:14.419034 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-81c0-account-create-update-p7n55"] Jan 27 09:11:14 crc kubenswrapper[4985]: I0127 09:11:14.494012 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-b30d-account-create-update-4h6gn"] Jan 27 09:11:14 crc kubenswrapper[4985]: I0127 09:11:14.494048 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/glance-db-create-mhl9w"] Jan 27 09:11:14 crc kubenswrapper[4985]: W0127 09:11:14.541731 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ff42621_0c5e_44bb_ba09_a536658065b8.slice/crio-39987c4510a371992f06643125a640adcdb6030d3f73bbbaa8a6e40eb571bf36 WatchSource:0}: Error finding container 39987c4510a371992f06643125a640adcdb6030d3f73bbbaa8a6e40eb571bf36: Status 404 returned error can't find the container with id 39987c4510a371992f06643125a640adcdb6030d3f73bbbaa8a6e40eb571bf36 Jan 27 09:11:14 crc kubenswrapper[4985]: I0127 09:11:14.586535 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-7h2cs"] Jan 27 09:11:14 crc kubenswrapper[4985]: I0127 09:11:14.611105 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-586b989cdc-v2s76" Jan 27 09:11:14 crc kubenswrapper[4985]: I0127 09:11:14.686919 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/617061c6-cd75-4259-979e-60b51d0de147-ovsdbserver-nb\") pod \"617061c6-cd75-4259-979e-60b51d0de147\" (UID: \"617061c6-cd75-4259-979e-60b51d0de147\") " Jan 27 09:11:14 crc kubenswrapper[4985]: I0127 09:11:14.687823 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/617061c6-cd75-4259-979e-60b51d0de147-ovsdbserver-sb\") pod \"617061c6-cd75-4259-979e-60b51d0de147\" (UID: \"617061c6-cd75-4259-979e-60b51d0de147\") " Jan 27 09:11:14 crc kubenswrapper[4985]: I0127 09:11:14.687980 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/617061c6-cd75-4259-979e-60b51d0de147-dns-svc\") pod \"617061c6-cd75-4259-979e-60b51d0de147\" (UID: \"617061c6-cd75-4259-979e-60b51d0de147\") " Jan 27 09:11:14 
crc kubenswrapper[4985]: I0127 09:11:14.688069 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwst8\" (UniqueName: \"kubernetes.io/projected/617061c6-cd75-4259-979e-60b51d0de147-kube-api-access-bwst8\") pod \"617061c6-cd75-4259-979e-60b51d0de147\" (UID: \"617061c6-cd75-4259-979e-60b51d0de147\") " Jan 27 09:11:14 crc kubenswrapper[4985]: I0127 09:11:14.688147 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/617061c6-cd75-4259-979e-60b51d0de147-config\") pod \"617061c6-cd75-4259-979e-60b51d0de147\" (UID: \"617061c6-cd75-4259-979e-60b51d0de147\") " Jan 27 09:11:14 crc kubenswrapper[4985]: I0127 09:11:14.705815 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/617061c6-cd75-4259-979e-60b51d0de147-kube-api-access-bwst8" (OuterVolumeSpecName: "kube-api-access-bwst8") pod "617061c6-cd75-4259-979e-60b51d0de147" (UID: "617061c6-cd75-4259-979e-60b51d0de147"). InnerVolumeSpecName "kube-api-access-bwst8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:11:14 crc kubenswrapper[4985]: I0127 09:11:14.790042 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bwst8\" (UniqueName: \"kubernetes.io/projected/617061c6-cd75-4259-979e-60b51d0de147-kube-api-access-bwst8\") on node \"crc\" DevicePath \"\"" Jan 27 09:11:14 crc kubenswrapper[4985]: I0127 09:11:14.971756 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/617061c6-cd75-4259-979e-60b51d0de147-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "617061c6-cd75-4259-979e-60b51d0de147" (UID: "617061c6-cd75-4259-979e-60b51d0de147"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:11:14 crc kubenswrapper[4985]: I0127 09:11:14.989942 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/617061c6-cd75-4259-979e-60b51d0de147-config" (OuterVolumeSpecName: "config") pod "617061c6-cd75-4259-979e-60b51d0de147" (UID: "617061c6-cd75-4259-979e-60b51d0de147"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:11:14 crc kubenswrapper[4985]: I0127 09:11:14.993064 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/617061c6-cd75-4259-979e-60b51d0de147-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "617061c6-cd75-4259-979e-60b51d0de147" (UID: "617061c6-cd75-4259-979e-60b51d0de147"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:11:14 crc kubenswrapper[4985]: I0127 09:11:14.994406 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/617061c6-cd75-4259-979e-60b51d0de147-config\") on node \"crc\" DevicePath \"\"" Jan 27 09:11:14 crc kubenswrapper[4985]: I0127 09:11:14.994599 4985 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/617061c6-cd75-4259-979e-60b51d0de147-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 09:11:14 crc kubenswrapper[4985]: I0127 09:11:14.994697 4985 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/617061c6-cd75-4259-979e-60b51d0de147-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 09:11:14 crc kubenswrapper[4985]: I0127 09:11:14.999578 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/617061c6-cd75-4259-979e-60b51d0de147-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "617061c6-cd75-4259-979e-60b51d0de147" (UID: 
"617061c6-cd75-4259-979e-60b51d0de147"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:11:15 crc kubenswrapper[4985]: I0127 09:11:15.047179 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-7h2cs" event={"ID":"ca4dfbd8-ed4c-403a-a8dd-8cfc8ee54f0e","Type":"ContainerStarted","Data":"e5f2fa483b21fcf5e4d3910af59a3e8b975ce375b10a09febf7d1bf1aa3f9cf3"} Jan 27 09:11:15 crc kubenswrapper[4985]: I0127 09:11:15.047242 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-7h2cs" event={"ID":"ca4dfbd8-ed4c-403a-a8dd-8cfc8ee54f0e","Type":"ContainerStarted","Data":"4c48c40919817b71e016bf00c3915abf2e41986b54117375f518c7d6fbb5ce68"} Jan 27 09:11:15 crc kubenswrapper[4985]: I0127 09:11:15.049717 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-mhl9w" event={"ID":"7ff42621-0c5e-44bb-ba09-a536658065b8","Type":"ContainerStarted","Data":"54d6cb23c01b8d44de0f3b97d42f8b10f6d796a3c4be553ec1bb39ca8b3d6422"} Jan 27 09:11:15 crc kubenswrapper[4985]: I0127 09:11:15.049758 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-mhl9w" event={"ID":"7ff42621-0c5e-44bb-ba09-a536658065b8","Type":"ContainerStarted","Data":"39987c4510a371992f06643125a640adcdb6030d3f73bbbaa8a6e40eb571bf36"} Jan 27 09:11:15 crc kubenswrapper[4985]: I0127 09:11:15.052611 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-zb58b" event={"ID":"04009233-f269-4fda-b5ba-4a806e56b4ea","Type":"ContainerStarted","Data":"bb6eb1b31ffe8ac30fa75e4e6e13ad5b27db9a8b70fbaad36cdbfa5f355fa90a"} Jan 27 09:11:15 crc kubenswrapper[4985]: I0127 09:11:15.052650 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-zb58b" event={"ID":"04009233-f269-4fda-b5ba-4a806e56b4ea","Type":"ContainerStarted","Data":"cdbd0120670ce7e7232fb830e25e948d46d9903732e7db5aa4facb24e49dd83a"} Jan 27 
09:11:15 crc kubenswrapper[4985]: I0127 09:11:15.055460 4985 generic.go:334] "Generic (PLEG): container finished" podID="1107e86e-6b40-4c4a-94bb-c478cf5954c8" containerID="f91828d746db2c81d42da01dd12cd5522be6c09307536f3aac8e6a6611a8cf2a" exitCode=0 Jan 27 09:11:15 crc kubenswrapper[4985]: I0127 09:11:15.055538 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-107f-account-create-update-jv8z8" event={"ID":"1107e86e-6b40-4c4a-94bb-c478cf5954c8","Type":"ContainerDied","Data":"f91828d746db2c81d42da01dd12cd5522be6c09307536f3aac8e6a6611a8cf2a"} Jan 27 09:11:15 crc kubenswrapper[4985]: I0127 09:11:15.057806 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586b989cdc-v2s76" event={"ID":"617061c6-cd75-4259-979e-60b51d0de147","Type":"ContainerDied","Data":"d1f86612d641f3867d569958ad0dc2993328734d4cd9b1a12f176270f9d254c3"} Jan 27 09:11:15 crc kubenswrapper[4985]: I0127 09:11:15.057840 4985 scope.go:117] "RemoveContainer" containerID="19298a5f384ad4d8104f24dd23f6e5242ddfbea565a2720ddfad1d6921de8bc4" Jan 27 09:11:15 crc kubenswrapper[4985]: I0127 09:11:15.057913 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-586b989cdc-v2s76" Jan 27 09:11:15 crc kubenswrapper[4985]: I0127 09:11:15.061490 4985 generic.go:334] "Generic (PLEG): container finished" podID="3fe7c41c-72d2-4f20-9098-c0c7722a8ef8" containerID="5df0690db1a644bbc456cc4dac180784a7c72caeaa472edbdd12688c98a8acfc" exitCode=0 Jan 27 09:11:15 crc kubenswrapper[4985]: I0127 09:11:15.061590 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-j5r52" event={"ID":"3fe7c41c-72d2-4f20-9098-c0c7722a8ef8","Type":"ContainerDied","Data":"5df0690db1a644bbc456cc4dac180784a7c72caeaa472edbdd12688c98a8acfc"} Jan 27 09:11:15 crc kubenswrapper[4985]: I0127 09:11:15.064006 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b30d-account-create-update-4h6gn" event={"ID":"744384a9-ac5a-46ef-a549-37046198fecf","Type":"ContainerStarted","Data":"6c600e08e3a754feac595e009de9a80fa06eb08943cd147f17ba979e374cee78"} Jan 27 09:11:15 crc kubenswrapper[4985]: I0127 09:11:15.064054 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b30d-account-create-update-4h6gn" event={"ID":"744384a9-ac5a-46ef-a549-37046198fecf","Type":"ContainerStarted","Data":"1dcdf49ec8b9bdb9ae381526aa1b44e7bd364ffd2b5e55d544cf2324ef30cbb2"} Jan 27 09:11:15 crc kubenswrapper[4985]: I0127 09:11:15.069961 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-81c0-account-create-update-p7n55" event={"ID":"0c5fea9a-94ec-4d40-a9ce-e8245a49f14e","Type":"ContainerStarted","Data":"5319b93bd008daec3fbffb1765044db6ef4d43645832ec02d8a40f0808e89dd3"} Jan 27 09:11:15 crc kubenswrapper[4985]: I0127 09:11:15.070130 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-81c0-account-create-update-p7n55" event={"ID":"0c5fea9a-94ec-4d40-a9ce-e8245a49f14e","Type":"ContainerStarted","Data":"b05d9b6f77b7319c8967b0d3e04f27c355c3ccbeb1d1a0e6170e310d39a72177"} Jan 27 09:11:15 crc kubenswrapper[4985]: I0127 
09:11:15.086992 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-7h2cs" podStartSLOduration=4.086975386 podStartE2EDuration="4.086975386s" podCreationTimestamp="2026-01-27 09:11:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:11:15.082881493 +0000 UTC m=+1059.373976334" watchObservedRunningTime="2026-01-27 09:11:15.086975386 +0000 UTC m=+1059.378070227" Jan 27 09:11:15 crc kubenswrapper[4985]: I0127 09:11:15.096612 4985 scope.go:117] "RemoveContainer" containerID="0dc21ddec21af989064d8a3eded6bc55fcbd01ca13d901a4914c88f99f5b03eb" Jan 27 09:11:15 crc kubenswrapper[4985]: I0127 09:11:15.097852 4985 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/617061c6-cd75-4259-979e-60b51d0de147-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 09:11:15 crc kubenswrapper[4985]: I0127 09:11:15.114750 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-b30d-account-create-update-4h6gn" podStartSLOduration=4.1147137560000004 podStartE2EDuration="4.114713756s" podCreationTimestamp="2026-01-27 09:11:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:11:15.105248136 +0000 UTC m=+1059.396342977" watchObservedRunningTime="2026-01-27 09:11:15.114713756 +0000 UTC m=+1059.405808597" Jan 27 09:11:15 crc kubenswrapper[4985]: I0127 09:11:15.130671 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-586b989cdc-v2s76"] Jan 27 09:11:15 crc kubenswrapper[4985]: I0127 09:11:15.184749 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-586b989cdc-v2s76"] Jan 27 09:11:15 crc kubenswrapper[4985]: I0127 09:11:15.205533 4985 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/keystone-db-create-zb58b" podStartSLOduration=5.205492844 podStartE2EDuration="5.205492844s" podCreationTimestamp="2026-01-27 09:11:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:11:15.148173803 +0000 UTC m=+1059.439268644" watchObservedRunningTime="2026-01-27 09:11:15.205492844 +0000 UTC m=+1059.496587685" Jan 27 09:11:15 crc kubenswrapper[4985]: I0127 09:11:15.235593 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-81c0-account-create-update-p7n55" podStartSLOduration=5.235561298 podStartE2EDuration="5.235561298s" podCreationTimestamp="2026-01-27 09:11:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:11:15.225477901 +0000 UTC m=+1059.516572742" watchObservedRunningTime="2026-01-27 09:11:15.235561298 +0000 UTC m=+1059.526656159" Jan 27 09:11:15 crc kubenswrapper[4985]: I0127 09:11:15.269262 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-mhl9w" podStartSLOduration=4.269244231 podStartE2EDuration="4.269244231s" podCreationTimestamp="2026-01-27 09:11:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:11:15.247230037 +0000 UTC m=+1059.538324878" watchObservedRunningTime="2026-01-27 09:11:15.269244231 +0000 UTC m=+1059.560339072" Jan 27 09:11:16 crc kubenswrapper[4985]: I0127 09:11:16.084179 4985 generic.go:334] "Generic (PLEG): container finished" podID="744384a9-ac5a-46ef-a549-37046198fecf" containerID="6c600e08e3a754feac595e009de9a80fa06eb08943cd147f17ba979e374cee78" exitCode=0 Jan 27 09:11:16 crc kubenswrapper[4985]: I0127 09:11:16.084279 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-b30d-account-create-update-4h6gn" event={"ID":"744384a9-ac5a-46ef-a549-37046198fecf","Type":"ContainerDied","Data":"6c600e08e3a754feac595e009de9a80fa06eb08943cd147f17ba979e374cee78"} Jan 27 09:11:16 crc kubenswrapper[4985]: I0127 09:11:16.088481 4985 generic.go:334] "Generic (PLEG): container finished" podID="0c5fea9a-94ec-4d40-a9ce-e8245a49f14e" containerID="5319b93bd008daec3fbffb1765044db6ef4d43645832ec02d8a40f0808e89dd3" exitCode=0 Jan 27 09:11:16 crc kubenswrapper[4985]: I0127 09:11:16.088691 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-81c0-account-create-update-p7n55" event={"ID":"0c5fea9a-94ec-4d40-a9ce-e8245a49f14e","Type":"ContainerDied","Data":"5319b93bd008daec3fbffb1765044db6ef4d43645832ec02d8a40f0808e89dd3"} Jan 27 09:11:16 crc kubenswrapper[4985]: I0127 09:11:16.090654 4985 generic.go:334] "Generic (PLEG): container finished" podID="ca4dfbd8-ed4c-403a-a8dd-8cfc8ee54f0e" containerID="e5f2fa483b21fcf5e4d3910af59a3e8b975ce375b10a09febf7d1bf1aa3f9cf3" exitCode=0 Jan 27 09:11:16 crc kubenswrapper[4985]: I0127 09:11:16.090729 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-7h2cs" event={"ID":"ca4dfbd8-ed4c-403a-a8dd-8cfc8ee54f0e","Type":"ContainerDied","Data":"e5f2fa483b21fcf5e4d3910af59a3e8b975ce375b10a09febf7d1bf1aa3f9cf3"} Jan 27 09:11:16 crc kubenswrapper[4985]: I0127 09:11:16.093820 4985 generic.go:334] "Generic (PLEG): container finished" podID="7ff42621-0c5e-44bb-ba09-a536658065b8" containerID="54d6cb23c01b8d44de0f3b97d42f8b10f6d796a3c4be553ec1bb39ca8b3d6422" exitCode=0 Jan 27 09:11:16 crc kubenswrapper[4985]: I0127 09:11:16.093855 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-mhl9w" event={"ID":"7ff42621-0c5e-44bb-ba09-a536658065b8","Type":"ContainerDied","Data":"54d6cb23c01b8d44de0f3b97d42f8b10f6d796a3c4be553ec1bb39ca8b3d6422"} Jan 27 09:11:16 crc kubenswrapper[4985]: I0127 09:11:16.096567 4985 generic.go:334] 
"Generic (PLEG): container finished" podID="04009233-f269-4fda-b5ba-4a806e56b4ea" containerID="bb6eb1b31ffe8ac30fa75e4e6e13ad5b27db9a8b70fbaad36cdbfa5f355fa90a" exitCode=0 Jan 27 09:11:16 crc kubenswrapper[4985]: I0127 09:11:16.096642 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-zb58b" event={"ID":"04009233-f269-4fda-b5ba-4a806e56b4ea","Type":"ContainerDied","Data":"bb6eb1b31ffe8ac30fa75e4e6e13ad5b27db9a8b70fbaad36cdbfa5f355fa90a"} Jan 27 09:11:16 crc kubenswrapper[4985]: I0127 09:11:16.478405 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="617061c6-cd75-4259-979e-60b51d0de147" path="/var/lib/kubelet/pods/617061c6-cd75-4259-979e-60b51d0de147/volumes" Jan 27 09:11:16 crc kubenswrapper[4985]: I0127 09:11:16.478822 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-j5r52" Jan 27 09:11:16 crc kubenswrapper[4985]: I0127 09:11:16.544410 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3fe7c41c-72d2-4f20-9098-c0c7722a8ef8-operator-scripts\") pod \"3fe7c41c-72d2-4f20-9098-c0c7722a8ef8\" (UID: \"3fe7c41c-72d2-4f20-9098-c0c7722a8ef8\") " Jan 27 09:11:16 crc kubenswrapper[4985]: I0127 09:11:16.544726 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5d2kt\" (UniqueName: \"kubernetes.io/projected/3fe7c41c-72d2-4f20-9098-c0c7722a8ef8-kube-api-access-5d2kt\") pod \"3fe7c41c-72d2-4f20-9098-c0c7722a8ef8\" (UID: \"3fe7c41c-72d2-4f20-9098-c0c7722a8ef8\") " Jan 27 09:11:16 crc kubenswrapper[4985]: I0127 09:11:16.545431 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fe7c41c-72d2-4f20-9098-c0c7722a8ef8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3fe7c41c-72d2-4f20-9098-c0c7722a8ef8" (UID: 
"3fe7c41c-72d2-4f20-9098-c0c7722a8ef8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:11:16 crc kubenswrapper[4985]: I0127 09:11:16.552548 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fe7c41c-72d2-4f20-9098-c0c7722a8ef8-kube-api-access-5d2kt" (OuterVolumeSpecName: "kube-api-access-5d2kt") pod "3fe7c41c-72d2-4f20-9098-c0c7722a8ef8" (UID: "3fe7c41c-72d2-4f20-9098-c0c7722a8ef8"). InnerVolumeSpecName "kube-api-access-5d2kt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:11:16 crc kubenswrapper[4985]: I0127 09:11:16.600861 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-107f-account-create-update-jv8z8" Jan 27 09:11:16 crc kubenswrapper[4985]: I0127 09:11:16.647258 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5d2kt\" (UniqueName: \"kubernetes.io/projected/3fe7c41c-72d2-4f20-9098-c0c7722a8ef8-kube-api-access-5d2kt\") on node \"crc\" DevicePath \"\"" Jan 27 09:11:16 crc kubenswrapper[4985]: I0127 09:11:16.647309 4985 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3fe7c41c-72d2-4f20-9098-c0c7722a8ef8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 09:11:16 crc kubenswrapper[4985]: I0127 09:11:16.748108 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nkvgb\" (UniqueName: \"kubernetes.io/projected/1107e86e-6b40-4c4a-94bb-c478cf5954c8-kube-api-access-nkvgb\") pod \"1107e86e-6b40-4c4a-94bb-c478cf5954c8\" (UID: \"1107e86e-6b40-4c4a-94bb-c478cf5954c8\") " Jan 27 09:11:16 crc kubenswrapper[4985]: I0127 09:11:16.748233 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1107e86e-6b40-4c4a-94bb-c478cf5954c8-operator-scripts\") pod 
\"1107e86e-6b40-4c4a-94bb-c478cf5954c8\" (UID: \"1107e86e-6b40-4c4a-94bb-c478cf5954c8\") " Jan 27 09:11:16 crc kubenswrapper[4985]: I0127 09:11:16.748736 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1107e86e-6b40-4c4a-94bb-c478cf5954c8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1107e86e-6b40-4c4a-94bb-c478cf5954c8" (UID: "1107e86e-6b40-4c4a-94bb-c478cf5954c8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:11:16 crc kubenswrapper[4985]: I0127 09:11:16.750869 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1107e86e-6b40-4c4a-94bb-c478cf5954c8-kube-api-access-nkvgb" (OuterVolumeSpecName: "kube-api-access-nkvgb") pod "1107e86e-6b40-4c4a-94bb-c478cf5954c8" (UID: "1107e86e-6b40-4c4a-94bb-c478cf5954c8"). InnerVolumeSpecName "kube-api-access-nkvgb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:11:16 crc kubenswrapper[4985]: I0127 09:11:16.850372 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nkvgb\" (UniqueName: \"kubernetes.io/projected/1107e86e-6b40-4c4a-94bb-c478cf5954c8-kube-api-access-nkvgb\") on node \"crc\" DevicePath \"\"" Jan 27 09:11:16 crc kubenswrapper[4985]: I0127 09:11:16.850408 4985 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1107e86e-6b40-4c4a-94bb-c478cf5954c8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 09:11:17 crc kubenswrapper[4985]: I0127 09:11:17.106277 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-107f-account-create-update-jv8z8" Jan 27 09:11:17 crc kubenswrapper[4985]: I0127 09:11:17.107160 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-107f-account-create-update-jv8z8" event={"ID":"1107e86e-6b40-4c4a-94bb-c478cf5954c8","Type":"ContainerDied","Data":"c16e3148dc3e98183abf9fc44b58975d05dad75d09058e36cc741372301e651b"} Jan 27 09:11:17 crc kubenswrapper[4985]: I0127 09:11:17.107227 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c16e3148dc3e98183abf9fc44b58975d05dad75d09058e36cc741372301e651b" Jan 27 09:11:17 crc kubenswrapper[4985]: I0127 09:11:17.108953 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-j5r52" Jan 27 09:11:17 crc kubenswrapper[4985]: I0127 09:11:17.109691 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-j5r52" event={"ID":"3fe7c41c-72d2-4f20-9098-c0c7722a8ef8","Type":"ContainerDied","Data":"6b37a2ee31e7378849909bc167da237fc169c5af99e6f1cf2b8773350f4b7f44"} Jan 27 09:11:17 crc kubenswrapper[4985]: I0127 09:11:17.109762 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b37a2ee31e7378849909bc167da237fc169c5af99e6f1cf2b8773350f4b7f44" Jan 27 09:11:17 crc kubenswrapper[4985]: I0127 09:11:17.701831 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-zb58b" Jan 27 09:11:17 crc kubenswrapper[4985]: I0127 09:11:17.764299 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 27 09:11:17 crc kubenswrapper[4985]: I0127 09:11:17.785011 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/04009233-f269-4fda-b5ba-4a806e56b4ea-operator-scripts\") pod \"04009233-f269-4fda-b5ba-4a806e56b4ea\" (UID: \"04009233-f269-4fda-b5ba-4a806e56b4ea\") " Jan 27 09:11:17 crc kubenswrapper[4985]: I0127 09:11:17.785146 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pkkgv\" (UniqueName: \"kubernetes.io/projected/04009233-f269-4fda-b5ba-4a806e56b4ea-kube-api-access-pkkgv\") pod \"04009233-f269-4fda-b5ba-4a806e56b4ea\" (UID: \"04009233-f269-4fda-b5ba-4a806e56b4ea\") " Jan 27 09:11:17 crc kubenswrapper[4985]: I0127 09:11:17.786688 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04009233-f269-4fda-b5ba-4a806e56b4ea-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "04009233-f269-4fda-b5ba-4a806e56b4ea" (UID: "04009233-f269-4fda-b5ba-4a806e56b4ea"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:11:17 crc kubenswrapper[4985]: I0127 09:11:17.793893 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04009233-f269-4fda-b5ba-4a806e56b4ea-kube-api-access-pkkgv" (OuterVolumeSpecName: "kube-api-access-pkkgv") pod "04009233-f269-4fda-b5ba-4a806e56b4ea" (UID: "04009233-f269-4fda-b5ba-4a806e56b4ea"). InnerVolumeSpecName "kube-api-access-pkkgv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:11:17 crc kubenswrapper[4985]: I0127 09:11:17.864208 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-7h2cs" Jan 27 09:11:17 crc kubenswrapper[4985]: I0127 09:11:17.871626 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-mhl9w" Jan 27 09:11:17 crc kubenswrapper[4985]: I0127 09:11:17.884815 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-81c0-account-create-update-p7n55" Jan 27 09:11:17 crc kubenswrapper[4985]: I0127 09:11:17.889975 4985 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/04009233-f269-4fda-b5ba-4a806e56b4ea-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 09:11:17 crc kubenswrapper[4985]: I0127 09:11:17.890019 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pkkgv\" (UniqueName: \"kubernetes.io/projected/04009233-f269-4fda-b5ba-4a806e56b4ea-kube-api-access-pkkgv\") on node \"crc\" DevicePath \"\"" Jan 27 09:11:17 crc kubenswrapper[4985]: I0127 09:11:17.893751 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-b30d-account-create-update-4h6gn" Jan 27 09:11:17 crc kubenswrapper[4985]: I0127 09:11:17.990728 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ff42621-0c5e-44bb-ba09-a536658065b8-operator-scripts\") pod \"7ff42621-0c5e-44bb-ba09-a536658065b8\" (UID: \"7ff42621-0c5e-44bb-ba09-a536658065b8\") " Jan 27 09:11:17 crc kubenswrapper[4985]: I0127 09:11:17.990842 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c5fea9a-94ec-4d40-a9ce-e8245a49f14e-operator-scripts\") pod \"0c5fea9a-94ec-4d40-a9ce-e8245a49f14e\" (UID: \"0c5fea9a-94ec-4d40-a9ce-e8245a49f14e\") " Jan 27 09:11:17 crc kubenswrapper[4985]: I0127 09:11:17.990945 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbsh2\" (UniqueName: \"kubernetes.io/projected/744384a9-ac5a-46ef-a549-37046198fecf-kube-api-access-xbsh2\") pod \"744384a9-ac5a-46ef-a549-37046198fecf\" (UID: \"744384a9-ac5a-46ef-a549-37046198fecf\") " Jan 27 09:11:17 crc kubenswrapper[4985]: I0127 09:11:17.990978 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwlbv\" (UniqueName: \"kubernetes.io/projected/7ff42621-0c5e-44bb-ba09-a536658065b8-kube-api-access-nwlbv\") pod \"7ff42621-0c5e-44bb-ba09-a536658065b8\" (UID: \"7ff42621-0c5e-44bb-ba09-a536658065b8\") " Jan 27 09:11:17 crc kubenswrapper[4985]: I0127 09:11:17.991092 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-npzms\" (UniqueName: \"kubernetes.io/projected/ca4dfbd8-ed4c-403a-a8dd-8cfc8ee54f0e-kube-api-access-npzms\") pod \"ca4dfbd8-ed4c-403a-a8dd-8cfc8ee54f0e\" (UID: \"ca4dfbd8-ed4c-403a-a8dd-8cfc8ee54f0e\") " Jan 27 09:11:17 crc kubenswrapper[4985]: I0127 09:11:17.991142 4985 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/744384a9-ac5a-46ef-a549-37046198fecf-operator-scripts\") pod \"744384a9-ac5a-46ef-a549-37046198fecf\" (UID: \"744384a9-ac5a-46ef-a549-37046198fecf\") " Jan 27 09:11:17 crc kubenswrapper[4985]: I0127 09:11:17.991188 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca4dfbd8-ed4c-403a-a8dd-8cfc8ee54f0e-operator-scripts\") pod \"ca4dfbd8-ed4c-403a-a8dd-8cfc8ee54f0e\" (UID: \"ca4dfbd8-ed4c-403a-a8dd-8cfc8ee54f0e\") " Jan 27 09:11:17 crc kubenswrapper[4985]: I0127 09:11:17.991232 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddtqf\" (UniqueName: \"kubernetes.io/projected/0c5fea9a-94ec-4d40-a9ce-e8245a49f14e-kube-api-access-ddtqf\") pod \"0c5fea9a-94ec-4d40-a9ce-e8245a49f14e\" (UID: \"0c5fea9a-94ec-4d40-a9ce-e8245a49f14e\") " Jan 27 09:11:17 crc kubenswrapper[4985]: I0127 09:11:17.991321 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ff42621-0c5e-44bb-ba09-a536658065b8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7ff42621-0c5e-44bb-ba09-a536658065b8" (UID: "7ff42621-0c5e-44bb-ba09-a536658065b8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:11:17 crc kubenswrapper[4985]: I0127 09:11:17.991456 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c5fea9a-94ec-4d40-a9ce-e8245a49f14e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0c5fea9a-94ec-4d40-a9ce-e8245a49f14e" (UID: "0c5fea9a-94ec-4d40-a9ce-e8245a49f14e"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:11:17 crc kubenswrapper[4985]: I0127 09:11:17.992055 4985 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ff42621-0c5e-44bb-ba09-a536658065b8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 09:11:17 crc kubenswrapper[4985]: I0127 09:11:17.992085 4985 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c5fea9a-94ec-4d40-a9ce-e8245a49f14e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 09:11:17 crc kubenswrapper[4985]: I0127 09:11:17.992836 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/744384a9-ac5a-46ef-a549-37046198fecf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "744384a9-ac5a-46ef-a549-37046198fecf" (UID: "744384a9-ac5a-46ef-a549-37046198fecf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:11:17 crc kubenswrapper[4985]: I0127 09:11:17.992910 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca4dfbd8-ed4c-403a-a8dd-8cfc8ee54f0e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ca4dfbd8-ed4c-403a-a8dd-8cfc8ee54f0e" (UID: "ca4dfbd8-ed4c-403a-a8dd-8cfc8ee54f0e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:11:18 crc kubenswrapper[4985]: I0127 09:11:18.000754 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c5fea9a-94ec-4d40-a9ce-e8245a49f14e-kube-api-access-ddtqf" (OuterVolumeSpecName: "kube-api-access-ddtqf") pod "0c5fea9a-94ec-4d40-a9ce-e8245a49f14e" (UID: "0c5fea9a-94ec-4d40-a9ce-e8245a49f14e"). InnerVolumeSpecName "kube-api-access-ddtqf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:11:18 crc kubenswrapper[4985]: I0127 09:11:18.001302 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/744384a9-ac5a-46ef-a549-37046198fecf-kube-api-access-xbsh2" (OuterVolumeSpecName: "kube-api-access-xbsh2") pod "744384a9-ac5a-46ef-a549-37046198fecf" (UID: "744384a9-ac5a-46ef-a549-37046198fecf"). InnerVolumeSpecName "kube-api-access-xbsh2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:11:18 crc kubenswrapper[4985]: I0127 09:11:18.002227 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca4dfbd8-ed4c-403a-a8dd-8cfc8ee54f0e-kube-api-access-npzms" (OuterVolumeSpecName: "kube-api-access-npzms") pod "ca4dfbd8-ed4c-403a-a8dd-8cfc8ee54f0e" (UID: "ca4dfbd8-ed4c-403a-a8dd-8cfc8ee54f0e"). InnerVolumeSpecName "kube-api-access-npzms". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:11:18 crc kubenswrapper[4985]: I0127 09:11:18.003817 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-j5r52"] Jan 27 09:11:18 crc kubenswrapper[4985]: I0127 09:11:18.015805 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ff42621-0c5e-44bb-ba09-a536658065b8-kube-api-access-nwlbv" (OuterVolumeSpecName: "kube-api-access-nwlbv") pod "7ff42621-0c5e-44bb-ba09-a536658065b8" (UID: "7ff42621-0c5e-44bb-ba09-a536658065b8"). InnerVolumeSpecName "kube-api-access-nwlbv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:11:18 crc kubenswrapper[4985]: I0127 09:11:18.025363 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-j5r52"] Jan 27 09:11:18 crc kubenswrapper[4985]: I0127 09:11:18.090466 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-rvjmw"] Jan 27 09:11:18 crc kubenswrapper[4985]: E0127 09:11:18.090867 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="744384a9-ac5a-46ef-a549-37046198fecf" containerName="mariadb-account-create-update" Jan 27 09:11:18 crc kubenswrapper[4985]: I0127 09:11:18.090889 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="744384a9-ac5a-46ef-a549-37046198fecf" containerName="mariadb-account-create-update" Jan 27 09:11:18 crc kubenswrapper[4985]: E0127 09:11:18.090907 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c5fea9a-94ec-4d40-a9ce-e8245a49f14e" containerName="mariadb-account-create-update" Jan 27 09:11:18 crc kubenswrapper[4985]: I0127 09:11:18.090913 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c5fea9a-94ec-4d40-a9ce-e8245a49f14e" containerName="mariadb-account-create-update" Jan 27 09:11:18 crc kubenswrapper[4985]: E0127 09:11:18.090923 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ff42621-0c5e-44bb-ba09-a536658065b8" containerName="mariadb-database-create" Jan 27 09:11:18 crc kubenswrapper[4985]: I0127 09:11:18.090930 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ff42621-0c5e-44bb-ba09-a536658065b8" containerName="mariadb-database-create" Jan 27 09:11:18 crc kubenswrapper[4985]: E0127 09:11:18.090942 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fe7c41c-72d2-4f20-9098-c0c7722a8ef8" containerName="mariadb-account-create-update" Jan 27 09:11:18 crc kubenswrapper[4985]: I0127 09:11:18.090949 4985 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="3fe7c41c-72d2-4f20-9098-c0c7722a8ef8" containerName="mariadb-account-create-update" Jan 27 09:11:18 crc kubenswrapper[4985]: E0127 09:11:18.090965 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1107e86e-6b40-4c4a-94bb-c478cf5954c8" containerName="mariadb-account-create-update" Jan 27 09:11:18 crc kubenswrapper[4985]: I0127 09:11:18.090975 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="1107e86e-6b40-4c4a-94bb-c478cf5954c8" containerName="mariadb-account-create-update" Jan 27 09:11:18 crc kubenswrapper[4985]: E0127 09:11:18.090990 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="617061c6-cd75-4259-979e-60b51d0de147" containerName="dnsmasq-dns" Jan 27 09:11:18 crc kubenswrapper[4985]: I0127 09:11:18.090996 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="617061c6-cd75-4259-979e-60b51d0de147" containerName="dnsmasq-dns" Jan 27 09:11:18 crc kubenswrapper[4985]: E0127 09:11:18.091006 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04009233-f269-4fda-b5ba-4a806e56b4ea" containerName="mariadb-database-create" Jan 27 09:11:18 crc kubenswrapper[4985]: I0127 09:11:18.091013 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="04009233-f269-4fda-b5ba-4a806e56b4ea" containerName="mariadb-database-create" Jan 27 09:11:18 crc kubenswrapper[4985]: E0127 09:11:18.091031 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca4dfbd8-ed4c-403a-a8dd-8cfc8ee54f0e" containerName="mariadb-database-create" Jan 27 09:11:18 crc kubenswrapper[4985]: I0127 09:11:18.091039 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca4dfbd8-ed4c-403a-a8dd-8cfc8ee54f0e" containerName="mariadb-database-create" Jan 27 09:11:18 crc kubenswrapper[4985]: E0127 09:11:18.091053 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="617061c6-cd75-4259-979e-60b51d0de147" containerName="init" Jan 27 09:11:18 crc kubenswrapper[4985]: I0127 09:11:18.091060 4985 
state_mem.go:107] "Deleted CPUSet assignment" podUID="617061c6-cd75-4259-979e-60b51d0de147" containerName="init" Jan 27 09:11:18 crc kubenswrapper[4985]: I0127 09:11:18.091234 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="617061c6-cd75-4259-979e-60b51d0de147" containerName="dnsmasq-dns" Jan 27 09:11:18 crc kubenswrapper[4985]: I0127 09:11:18.091247 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="744384a9-ac5a-46ef-a549-37046198fecf" containerName="mariadb-account-create-update" Jan 27 09:11:18 crc kubenswrapper[4985]: I0127 09:11:18.091258 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="1107e86e-6b40-4c4a-94bb-c478cf5954c8" containerName="mariadb-account-create-update" Jan 27 09:11:18 crc kubenswrapper[4985]: I0127 09:11:18.091270 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ff42621-0c5e-44bb-ba09-a536658065b8" containerName="mariadb-database-create" Jan 27 09:11:18 crc kubenswrapper[4985]: I0127 09:11:18.091281 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca4dfbd8-ed4c-403a-a8dd-8cfc8ee54f0e" containerName="mariadb-database-create" Jan 27 09:11:18 crc kubenswrapper[4985]: I0127 09:11:18.091295 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fe7c41c-72d2-4f20-9098-c0c7722a8ef8" containerName="mariadb-account-create-update" Jan 27 09:11:18 crc kubenswrapper[4985]: I0127 09:11:18.091309 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c5fea9a-94ec-4d40-a9ce-e8245a49f14e" containerName="mariadb-account-create-update" Jan 27 09:11:18 crc kubenswrapper[4985]: I0127 09:11:18.091320 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="04009233-f269-4fda-b5ba-4a806e56b4ea" containerName="mariadb-database-create" Jan 27 09:11:18 crc kubenswrapper[4985]: I0127 09:11:18.092092 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-rvjmw" Jan 27 09:11:18 crc kubenswrapper[4985]: I0127 09:11:18.093395 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbsh2\" (UniqueName: \"kubernetes.io/projected/744384a9-ac5a-46ef-a549-37046198fecf-kube-api-access-xbsh2\") on node \"crc\" DevicePath \"\"" Jan 27 09:11:18 crc kubenswrapper[4985]: I0127 09:11:18.093438 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwlbv\" (UniqueName: \"kubernetes.io/projected/7ff42621-0c5e-44bb-ba09-a536658065b8-kube-api-access-nwlbv\") on node \"crc\" DevicePath \"\"" Jan 27 09:11:18 crc kubenswrapper[4985]: I0127 09:11:18.093451 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-npzms\" (UniqueName: \"kubernetes.io/projected/ca4dfbd8-ed4c-403a-a8dd-8cfc8ee54f0e-kube-api-access-npzms\") on node \"crc\" DevicePath \"\"" Jan 27 09:11:18 crc kubenswrapper[4985]: I0127 09:11:18.093466 4985 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/744384a9-ac5a-46ef-a549-37046198fecf-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 09:11:18 crc kubenswrapper[4985]: I0127 09:11:18.093479 4985 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca4dfbd8-ed4c-403a-a8dd-8cfc8ee54f0e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 09:11:18 crc kubenswrapper[4985]: I0127 09:11:18.093491 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ddtqf\" (UniqueName: \"kubernetes.io/projected/0c5fea9a-94ec-4d40-a9ce-e8245a49f14e-kube-api-access-ddtqf\") on node \"crc\" DevicePath \"\"" Jan 27 09:11:18 crc kubenswrapper[4985]: I0127 09:11:18.099165 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 27 09:11:18 crc kubenswrapper[4985]: I0127 09:11:18.101536 4985 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-rvjmw"] Jan 27 09:11:18 crc kubenswrapper[4985]: I0127 09:11:18.131237 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b30d-account-create-update-4h6gn" event={"ID":"744384a9-ac5a-46ef-a549-37046198fecf","Type":"ContainerDied","Data":"1dcdf49ec8b9bdb9ae381526aa1b44e7bd364ffd2b5e55d544cf2324ef30cbb2"} Jan 27 09:11:18 crc kubenswrapper[4985]: I0127 09:11:18.131363 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1dcdf49ec8b9bdb9ae381526aa1b44e7bd364ffd2b5e55d544cf2324ef30cbb2" Jan 27 09:11:18 crc kubenswrapper[4985]: I0127 09:11:18.131299 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-b30d-account-create-update-4h6gn" Jan 27 09:11:18 crc kubenswrapper[4985]: I0127 09:11:18.133343 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-81c0-account-create-update-p7n55" event={"ID":"0c5fea9a-94ec-4d40-a9ce-e8245a49f14e","Type":"ContainerDied","Data":"b05d9b6f77b7319c8967b0d3e04f27c355c3ccbeb1d1a0e6170e310d39a72177"} Jan 27 09:11:18 crc kubenswrapper[4985]: I0127 09:11:18.133385 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b05d9b6f77b7319c8967b0d3e04f27c355c3ccbeb1d1a0e6170e310d39a72177" Jan 27 09:11:18 crc kubenswrapper[4985]: I0127 09:11:18.133355 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-81c0-account-create-update-p7n55" Jan 27 09:11:18 crc kubenswrapper[4985]: I0127 09:11:18.134812 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-zb58b" event={"ID":"04009233-f269-4fda-b5ba-4a806e56b4ea","Type":"ContainerDied","Data":"cdbd0120670ce7e7232fb830e25e948d46d9903732e7db5aa4facb24e49dd83a"} Jan 27 09:11:18 crc kubenswrapper[4985]: I0127 09:11:18.134839 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cdbd0120670ce7e7232fb830e25e948d46d9903732e7db5aa4facb24e49dd83a" Jan 27 09:11:18 crc kubenswrapper[4985]: I0127 09:11:18.134897 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-zb58b" Jan 27 09:11:18 crc kubenswrapper[4985]: I0127 09:11:18.138913 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-7h2cs" event={"ID":"ca4dfbd8-ed4c-403a-a8dd-8cfc8ee54f0e","Type":"ContainerDied","Data":"4c48c40919817b71e016bf00c3915abf2e41986b54117375f518c7d6fbb5ce68"} Jan 27 09:11:18 crc kubenswrapper[4985]: I0127 09:11:18.138948 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c48c40919817b71e016bf00c3915abf2e41986b54117375f518c7d6fbb5ce68" Jan 27 09:11:18 crc kubenswrapper[4985]: I0127 09:11:18.138968 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-7h2cs" Jan 27 09:11:18 crc kubenswrapper[4985]: I0127 09:11:18.145268 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-mhl9w" event={"ID":"7ff42621-0c5e-44bb-ba09-a536658065b8","Type":"ContainerDied","Data":"39987c4510a371992f06643125a640adcdb6030d3f73bbbaa8a6e40eb571bf36"} Jan 27 09:11:18 crc kubenswrapper[4985]: I0127 09:11:18.145331 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39987c4510a371992f06643125a640adcdb6030d3f73bbbaa8a6e40eb571bf36" Jan 27 09:11:18 crc kubenswrapper[4985]: I0127 09:11:18.145427 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-mhl9w" Jan 27 09:11:18 crc kubenswrapper[4985]: I0127 09:11:18.195020 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6148d0b5-adcf-4096-9499-eae7a0c703f0-operator-scripts\") pod \"root-account-create-update-rvjmw\" (UID: \"6148d0b5-adcf-4096-9499-eae7a0c703f0\") " pod="openstack/root-account-create-update-rvjmw" Jan 27 09:11:18 crc kubenswrapper[4985]: I0127 09:11:18.195218 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hg76j\" (UniqueName: \"kubernetes.io/projected/6148d0b5-adcf-4096-9499-eae7a0c703f0-kube-api-access-hg76j\") pod \"root-account-create-update-rvjmw\" (UID: \"6148d0b5-adcf-4096-9499-eae7a0c703f0\") " pod="openstack/root-account-create-update-rvjmw" Jan 27 09:11:18 crc kubenswrapper[4985]: I0127 09:11:18.300189 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hg76j\" (UniqueName: \"kubernetes.io/projected/6148d0b5-adcf-4096-9499-eae7a0c703f0-kube-api-access-hg76j\") pod \"root-account-create-update-rvjmw\" (UID: \"6148d0b5-adcf-4096-9499-eae7a0c703f0\") " 
pod="openstack/root-account-create-update-rvjmw" Jan 27 09:11:18 crc kubenswrapper[4985]: I0127 09:11:18.300592 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6148d0b5-adcf-4096-9499-eae7a0c703f0-operator-scripts\") pod \"root-account-create-update-rvjmw\" (UID: \"6148d0b5-adcf-4096-9499-eae7a0c703f0\") " pod="openstack/root-account-create-update-rvjmw" Jan 27 09:11:18 crc kubenswrapper[4985]: I0127 09:11:18.301261 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6148d0b5-adcf-4096-9499-eae7a0c703f0-operator-scripts\") pod \"root-account-create-update-rvjmw\" (UID: \"6148d0b5-adcf-4096-9499-eae7a0c703f0\") " pod="openstack/root-account-create-update-rvjmw" Jan 27 09:11:18 crc kubenswrapper[4985]: I0127 09:11:18.324621 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hg76j\" (UniqueName: \"kubernetes.io/projected/6148d0b5-adcf-4096-9499-eae7a0c703f0-kube-api-access-hg76j\") pod \"root-account-create-update-rvjmw\" (UID: \"6148d0b5-adcf-4096-9499-eae7a0c703f0\") " pod="openstack/root-account-create-update-rvjmw" Jan 27 09:11:18 crc kubenswrapper[4985]: I0127 09:11:18.411978 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-rvjmw" Jan 27 09:11:18 crc kubenswrapper[4985]: I0127 09:11:18.464396 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3fe7c41c-72d2-4f20-9098-c0c7722a8ef8" path="/var/lib/kubelet/pods/3fe7c41c-72d2-4f20-9098-c0c7722a8ef8/volumes" Jan 27 09:11:18 crc kubenswrapper[4985]: I0127 09:11:18.930672 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-rvjmw"] Jan 27 09:11:18 crc kubenswrapper[4985]: W0127 09:11:18.938432 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6148d0b5_adcf_4096_9499_eae7a0c703f0.slice/crio-6e52997c5d0747717bab51e0a29f9c8b1a1f1b0189066a0e0e96f901a2e2529d WatchSource:0}: Error finding container 6e52997c5d0747717bab51e0a29f9c8b1a1f1b0189066a0e0e96f901a2e2529d: Status 404 returned error can't find the container with id 6e52997c5d0747717bab51e0a29f9c8b1a1f1b0189066a0e0e96f901a2e2529d Jan 27 09:11:19 crc kubenswrapper[4985]: I0127 09:11:19.159268 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-rvjmw" event={"ID":"6148d0b5-adcf-4096-9499-eae7a0c703f0","Type":"ContainerStarted","Data":"9706e1d4658a44c74fff4026a1483811e63023e4aa43d6a7616de6b6caa0e1ba"} Jan 27 09:11:19 crc kubenswrapper[4985]: I0127 09:11:19.159320 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-rvjmw" event={"ID":"6148d0b5-adcf-4096-9499-eae7a0c703f0","Type":"ContainerStarted","Data":"6e52997c5d0747717bab51e0a29f9c8b1a1f1b0189066a0e0e96f901a2e2529d"} Jan 27 09:11:19 crc kubenswrapper[4985]: I0127 09:11:19.178976 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-rvjmw" podStartSLOduration=1.178957699 podStartE2EDuration="1.178957699s" podCreationTimestamp="2026-01-27 09:11:18 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:11:19.174463205 +0000 UTC m=+1063.465558046" watchObservedRunningTime="2026-01-27 09:11:19.178957699 +0000 UTC m=+1063.470052540" Jan 27 09:11:20 crc kubenswrapper[4985]: I0127 09:11:20.140228 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/50364737-e2dc-4bd7-ba5a-97f39e232236-etc-swift\") pod \"swift-storage-0\" (UID: \"50364737-e2dc-4bd7-ba5a-97f39e232236\") " pod="openstack/swift-storage-0" Jan 27 09:11:20 crc kubenswrapper[4985]: E0127 09:11:20.140400 4985 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 27 09:11:20 crc kubenswrapper[4985]: E0127 09:11:20.140682 4985 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 27 09:11:20 crc kubenswrapper[4985]: E0127 09:11:20.140757 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/50364737-e2dc-4bd7-ba5a-97f39e232236-etc-swift podName:50364737-e2dc-4bd7-ba5a-97f39e232236 nodeName:}" failed. No retries permitted until 2026-01-27 09:11:36.140729146 +0000 UTC m=+1080.431823987 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/50364737-e2dc-4bd7-ba5a-97f39e232236-etc-swift") pod "swift-storage-0" (UID: "50364737-e2dc-4bd7-ba5a-97f39e232236") : configmap "swift-ring-files" not found Jan 27 09:11:20 crc kubenswrapper[4985]: I0127 09:11:20.191440 4985 generic.go:334] "Generic (PLEG): container finished" podID="6148d0b5-adcf-4096-9499-eae7a0c703f0" containerID="9706e1d4658a44c74fff4026a1483811e63023e4aa43d6a7616de6b6caa0e1ba" exitCode=0 Jan 27 09:11:20 crc kubenswrapper[4985]: I0127 09:11:20.191587 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-rvjmw" event={"ID":"6148d0b5-adcf-4096-9499-eae7a0c703f0","Type":"ContainerDied","Data":"9706e1d4658a44c74fff4026a1483811e63023e4aa43d6a7616de6b6caa0e1ba"} Jan 27 09:11:21 crc kubenswrapper[4985]: I0127 09:11:21.200352 4985 generic.go:334] "Generic (PLEG): container finished" podID="1c3a6629-6ee9-4274-aa58-1880fd4ae268" containerID="b8cbb52e43286d41a8fc0f6dd52e4a0a4af64d7ac504aaa9ff6dd5929b0db17e" exitCode=0 Jan 27 09:11:21 crc kubenswrapper[4985]: I0127 09:11:21.200462 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1c3a6629-6ee9-4274-aa58-1880fd4ae268","Type":"ContainerDied","Data":"b8cbb52e43286d41a8fc0f6dd52e4a0a4af64d7ac504aaa9ff6dd5929b0db17e"} Jan 27 09:11:21 crc kubenswrapper[4985]: I0127 09:11:21.204125 4985 generic.go:334] "Generic (PLEG): container finished" podID="6c6ceb6e-86fb-4658-93ed-8e66302f6396" containerID="cea1414e3344dd8ffd89d82148d82d04e5425f1dd069adc9bd7855c688b77608" exitCode=0 Jan 27 09:11:21 crc kubenswrapper[4985]: I0127 09:11:21.204189 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6c6ceb6e-86fb-4658-93ed-8e66302f6396","Type":"ContainerDied","Data":"cea1414e3344dd8ffd89d82148d82d04e5425f1dd069adc9bd7855c688b77608"} Jan 27 09:11:21 crc kubenswrapper[4985]: I0127 
09:11:21.511476 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-rvjmw" Jan 27 09:11:21 crc kubenswrapper[4985]: I0127 09:11:21.569331 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hg76j\" (UniqueName: \"kubernetes.io/projected/6148d0b5-adcf-4096-9499-eae7a0c703f0-kube-api-access-hg76j\") pod \"6148d0b5-adcf-4096-9499-eae7a0c703f0\" (UID: \"6148d0b5-adcf-4096-9499-eae7a0c703f0\") " Jan 27 09:11:21 crc kubenswrapper[4985]: I0127 09:11:21.569455 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6148d0b5-adcf-4096-9499-eae7a0c703f0-operator-scripts\") pod \"6148d0b5-adcf-4096-9499-eae7a0c703f0\" (UID: \"6148d0b5-adcf-4096-9499-eae7a0c703f0\") " Jan 27 09:11:21 crc kubenswrapper[4985]: I0127 09:11:21.570194 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6148d0b5-adcf-4096-9499-eae7a0c703f0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6148d0b5-adcf-4096-9499-eae7a0c703f0" (UID: "6148d0b5-adcf-4096-9499-eae7a0c703f0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:11:21 crc kubenswrapper[4985]: I0127 09:11:21.573838 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6148d0b5-adcf-4096-9499-eae7a0c703f0-kube-api-access-hg76j" (OuterVolumeSpecName: "kube-api-access-hg76j") pod "6148d0b5-adcf-4096-9499-eae7a0c703f0" (UID: "6148d0b5-adcf-4096-9499-eae7a0c703f0"). InnerVolumeSpecName "kube-api-access-hg76j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:11:21 crc kubenswrapper[4985]: I0127 09:11:21.671435 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hg76j\" (UniqueName: \"kubernetes.io/projected/6148d0b5-adcf-4096-9499-eae7a0c703f0-kube-api-access-hg76j\") on node \"crc\" DevicePath \"\"" Jan 27 09:11:21 crc kubenswrapper[4985]: I0127 09:11:21.671479 4985 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6148d0b5-adcf-4096-9499-eae7a0c703f0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 09:11:21 crc kubenswrapper[4985]: I0127 09:11:21.816106 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-tsdq2"] Jan 27 09:11:21 crc kubenswrapper[4985]: E0127 09:11:21.816549 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6148d0b5-adcf-4096-9499-eae7a0c703f0" containerName="mariadb-account-create-update" Jan 27 09:11:21 crc kubenswrapper[4985]: I0127 09:11:21.816569 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="6148d0b5-adcf-4096-9499-eae7a0c703f0" containerName="mariadb-account-create-update" Jan 27 09:11:21 crc kubenswrapper[4985]: I0127 09:11:21.816761 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="6148d0b5-adcf-4096-9499-eae7a0c703f0" containerName="mariadb-account-create-update" Jan 27 09:11:21 crc kubenswrapper[4985]: I0127 09:11:21.817245 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-tsdq2" Jan 27 09:11:21 crc kubenswrapper[4985]: I0127 09:11:21.819448 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-tq9nh" Jan 27 09:11:21 crc kubenswrapper[4985]: I0127 09:11:21.819460 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 27 09:11:21 crc kubenswrapper[4985]: I0127 09:11:21.831082 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-tsdq2"] Jan 27 09:11:21 crc kubenswrapper[4985]: I0127 09:11:21.874186 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb511c2d-40ad-47b6-a515-1101d1ff3b5f-combined-ca-bundle\") pod \"glance-db-sync-tsdq2\" (UID: \"eb511c2d-40ad-47b6-a515-1101d1ff3b5f\") " pod="openstack/glance-db-sync-tsdq2" Jan 27 09:11:21 crc kubenswrapper[4985]: I0127 09:11:21.874283 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/eb511c2d-40ad-47b6-a515-1101d1ff3b5f-db-sync-config-data\") pod \"glance-db-sync-tsdq2\" (UID: \"eb511c2d-40ad-47b6-a515-1101d1ff3b5f\") " pod="openstack/glance-db-sync-tsdq2" Jan 27 09:11:21 crc kubenswrapper[4985]: I0127 09:11:21.874475 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwmc4\" (UniqueName: \"kubernetes.io/projected/eb511c2d-40ad-47b6-a515-1101d1ff3b5f-kube-api-access-fwmc4\") pod \"glance-db-sync-tsdq2\" (UID: \"eb511c2d-40ad-47b6-a515-1101d1ff3b5f\") " pod="openstack/glance-db-sync-tsdq2" Jan 27 09:11:21 crc kubenswrapper[4985]: I0127 09:11:21.874616 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/eb511c2d-40ad-47b6-a515-1101d1ff3b5f-config-data\") pod \"glance-db-sync-tsdq2\" (UID: \"eb511c2d-40ad-47b6-a515-1101d1ff3b5f\") " pod="openstack/glance-db-sync-tsdq2" Jan 27 09:11:21 crc kubenswrapper[4985]: I0127 09:11:21.976470 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwmc4\" (UniqueName: \"kubernetes.io/projected/eb511c2d-40ad-47b6-a515-1101d1ff3b5f-kube-api-access-fwmc4\") pod \"glance-db-sync-tsdq2\" (UID: \"eb511c2d-40ad-47b6-a515-1101d1ff3b5f\") " pod="openstack/glance-db-sync-tsdq2" Jan 27 09:11:21 crc kubenswrapper[4985]: I0127 09:11:21.976570 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb511c2d-40ad-47b6-a515-1101d1ff3b5f-config-data\") pod \"glance-db-sync-tsdq2\" (UID: \"eb511c2d-40ad-47b6-a515-1101d1ff3b5f\") " pod="openstack/glance-db-sync-tsdq2" Jan 27 09:11:21 crc kubenswrapper[4985]: I0127 09:11:21.976637 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb511c2d-40ad-47b6-a515-1101d1ff3b5f-combined-ca-bundle\") pod \"glance-db-sync-tsdq2\" (UID: \"eb511c2d-40ad-47b6-a515-1101d1ff3b5f\") " pod="openstack/glance-db-sync-tsdq2" Jan 27 09:11:21 crc kubenswrapper[4985]: I0127 09:11:21.976678 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/eb511c2d-40ad-47b6-a515-1101d1ff3b5f-db-sync-config-data\") pod \"glance-db-sync-tsdq2\" (UID: \"eb511c2d-40ad-47b6-a515-1101d1ff3b5f\") " pod="openstack/glance-db-sync-tsdq2" Jan 27 09:11:21 crc kubenswrapper[4985]: I0127 09:11:21.981850 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/eb511c2d-40ad-47b6-a515-1101d1ff3b5f-db-sync-config-data\") pod \"glance-db-sync-tsdq2\" (UID: 
\"eb511c2d-40ad-47b6-a515-1101d1ff3b5f\") " pod="openstack/glance-db-sync-tsdq2" Jan 27 09:11:21 crc kubenswrapper[4985]: I0127 09:11:21.983292 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb511c2d-40ad-47b6-a515-1101d1ff3b5f-config-data\") pod \"glance-db-sync-tsdq2\" (UID: \"eb511c2d-40ad-47b6-a515-1101d1ff3b5f\") " pod="openstack/glance-db-sync-tsdq2" Jan 27 09:11:21 crc kubenswrapper[4985]: I0127 09:11:21.983440 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb511c2d-40ad-47b6-a515-1101d1ff3b5f-combined-ca-bundle\") pod \"glance-db-sync-tsdq2\" (UID: \"eb511c2d-40ad-47b6-a515-1101d1ff3b5f\") " pod="openstack/glance-db-sync-tsdq2" Jan 27 09:11:21 crc kubenswrapper[4985]: I0127 09:11:21.993781 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwmc4\" (UniqueName: \"kubernetes.io/projected/eb511c2d-40ad-47b6-a515-1101d1ff3b5f-kube-api-access-fwmc4\") pod \"glance-db-sync-tsdq2\" (UID: \"eb511c2d-40ad-47b6-a515-1101d1ff3b5f\") " pod="openstack/glance-db-sync-tsdq2" Jan 27 09:11:22 crc kubenswrapper[4985]: I0127 09:11:22.136233 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-tsdq2" Jan 27 09:11:22 crc kubenswrapper[4985]: I0127 09:11:22.217832 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-rvjmw" event={"ID":"6148d0b5-adcf-4096-9499-eae7a0c703f0","Type":"ContainerDied","Data":"6e52997c5d0747717bab51e0a29f9c8b1a1f1b0189066a0e0e96f901a2e2529d"} Jan 27 09:11:22 crc kubenswrapper[4985]: I0127 09:11:22.218174 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e52997c5d0747717bab51e0a29f9c8b1a1f1b0189066a0e0e96f901a2e2529d" Jan 27 09:11:22 crc kubenswrapper[4985]: I0127 09:11:22.217850 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-rvjmw" Jan 27 09:11:22 crc kubenswrapper[4985]: I0127 09:11:22.222538 4985 generic.go:334] "Generic (PLEG): container finished" podID="0c0c0d06-870e-469e-bacf-dc5aa8af9d3b" containerID="9b826e821127e87b8d91b69dbdf3452f8028331d5abb5d94fb4d8a90057f1715" exitCode=0 Jan 27 09:11:22 crc kubenswrapper[4985]: I0127 09:11:22.222614 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-nzqqd" event={"ID":"0c0c0d06-870e-469e-bacf-dc5aa8af9d3b","Type":"ContainerDied","Data":"9b826e821127e87b8d91b69dbdf3452f8028331d5abb5d94fb4d8a90057f1715"} Jan 27 09:11:22 crc kubenswrapper[4985]: I0127 09:11:22.224725 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1c3a6629-6ee9-4274-aa58-1880fd4ae268","Type":"ContainerStarted","Data":"c5a680f38b59cab040f2d532c022fb0a2d6ca690f4fb49a1994ff6bd6fe6fb54"} Jan 27 09:11:22 crc kubenswrapper[4985]: I0127 09:11:22.225627 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 27 09:11:22 crc kubenswrapper[4985]: I0127 09:11:22.233913 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"6c6ceb6e-86fb-4658-93ed-8e66302f6396","Type":"ContainerStarted","Data":"4b1f967c83ed7b393f9fee284831f80d2118cee0c36a94006b08e047e2c83d7b"} Jan 27 09:11:22 crc kubenswrapper[4985]: I0127 09:11:22.234169 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 27 09:11:22 crc kubenswrapper[4985]: I0127 09:11:22.280652 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=43.709582406 podStartE2EDuration="56.280632402s" podCreationTimestamp="2026-01-27 09:10:26 +0000 UTC" firstStartedPulling="2026-01-27 09:10:34.576197118 +0000 UTC m=+1018.867291959" lastFinishedPulling="2026-01-27 09:10:47.147247114 +0000 UTC m=+1031.438341955" observedRunningTime="2026-01-27 09:11:22.270244507 +0000 UTC m=+1066.561339348" watchObservedRunningTime="2026-01-27 09:11:22.280632402 +0000 UTC m=+1066.571727243" Jan 27 09:11:22 crc kubenswrapper[4985]: I0127 09:11:22.322990 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=43.549009786 podStartE2EDuration="56.322967022s" podCreationTimestamp="2026-01-27 09:10:26 +0000 UTC" firstStartedPulling="2026-01-27 09:10:34.567965743 +0000 UTC m=+1018.859060584" lastFinishedPulling="2026-01-27 09:10:47.341922969 +0000 UTC m=+1031.633017820" observedRunningTime="2026-01-27 09:11:22.312756642 +0000 UTC m=+1066.603851493" watchObservedRunningTime="2026-01-27 09:11:22.322967022 +0000 UTC m=+1066.614061863" Jan 27 09:11:22 crc kubenswrapper[4985]: I0127 09:11:22.751273 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-tsdq2"] Jan 27 09:11:23 crc kubenswrapper[4985]: I0127 09:11:23.240971 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-tsdq2" 
event={"ID":"eb511c2d-40ad-47b6-a515-1101d1ff3b5f","Type":"ContainerStarted","Data":"e6f93cefeb8e24d8d3b24c72f6e9851dd0484e4a211e340d9386fc850c764334"} Jan 27 09:11:23 crc kubenswrapper[4985]: I0127 09:11:23.600550 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-nzqqd" Jan 27 09:11:23 crc kubenswrapper[4985]: I0127 09:11:23.722638 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/0c0c0d06-870e-469e-bacf-dc5aa8af9d3b-etc-swift\") pod \"0c0c0d06-870e-469e-bacf-dc5aa8af9d3b\" (UID: \"0c0c0d06-870e-469e-bacf-dc5aa8af9d3b\") " Jan 27 09:11:23 crc kubenswrapper[4985]: I0127 09:11:23.722718 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/0c0c0d06-870e-469e-bacf-dc5aa8af9d3b-swiftconf\") pod \"0c0c0d06-870e-469e-bacf-dc5aa8af9d3b\" (UID: \"0c0c0d06-870e-469e-bacf-dc5aa8af9d3b\") " Jan 27 09:11:23 crc kubenswrapper[4985]: I0127 09:11:23.722788 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/0c0c0d06-870e-469e-bacf-dc5aa8af9d3b-ring-data-devices\") pod \"0c0c0d06-870e-469e-bacf-dc5aa8af9d3b\" (UID: \"0c0c0d06-870e-469e-bacf-dc5aa8af9d3b\") " Jan 27 09:11:23 crc kubenswrapper[4985]: I0127 09:11:23.722815 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c0c0d06-870e-469e-bacf-dc5aa8af9d3b-combined-ca-bundle\") pod \"0c0c0d06-870e-469e-bacf-dc5aa8af9d3b\" (UID: \"0c0c0d06-870e-469e-bacf-dc5aa8af9d3b\") " Jan 27 09:11:23 crc kubenswrapper[4985]: I0127 09:11:23.722832 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qlqlx\" (UniqueName: 
\"kubernetes.io/projected/0c0c0d06-870e-469e-bacf-dc5aa8af9d3b-kube-api-access-qlqlx\") pod \"0c0c0d06-870e-469e-bacf-dc5aa8af9d3b\" (UID: \"0c0c0d06-870e-469e-bacf-dc5aa8af9d3b\") " Jan 27 09:11:23 crc kubenswrapper[4985]: I0127 09:11:23.722898 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0c0c0d06-870e-469e-bacf-dc5aa8af9d3b-scripts\") pod \"0c0c0d06-870e-469e-bacf-dc5aa8af9d3b\" (UID: \"0c0c0d06-870e-469e-bacf-dc5aa8af9d3b\") " Jan 27 09:11:23 crc kubenswrapper[4985]: I0127 09:11:23.722953 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/0c0c0d06-870e-469e-bacf-dc5aa8af9d3b-dispersionconf\") pod \"0c0c0d06-870e-469e-bacf-dc5aa8af9d3b\" (UID: \"0c0c0d06-870e-469e-bacf-dc5aa8af9d3b\") " Jan 27 09:11:23 crc kubenswrapper[4985]: I0127 09:11:23.723557 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c0c0d06-870e-469e-bacf-dc5aa8af9d3b-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "0c0c0d06-870e-469e-bacf-dc5aa8af9d3b" (UID: "0c0c0d06-870e-469e-bacf-dc5aa8af9d3b"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:11:23 crc kubenswrapper[4985]: I0127 09:11:23.723928 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c0c0d06-870e-469e-bacf-dc5aa8af9d3b-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "0c0c0d06-870e-469e-bacf-dc5aa8af9d3b" (UID: "0c0c0d06-870e-469e-bacf-dc5aa8af9d3b"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:11:23 crc kubenswrapper[4985]: I0127 09:11:23.730042 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c0c0d06-870e-469e-bacf-dc5aa8af9d3b-kube-api-access-qlqlx" (OuterVolumeSpecName: "kube-api-access-qlqlx") pod "0c0c0d06-870e-469e-bacf-dc5aa8af9d3b" (UID: "0c0c0d06-870e-469e-bacf-dc5aa8af9d3b"). InnerVolumeSpecName "kube-api-access-qlqlx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:11:23 crc kubenswrapper[4985]: I0127 09:11:23.736646 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c0c0d06-870e-469e-bacf-dc5aa8af9d3b-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "0c0c0d06-870e-469e-bacf-dc5aa8af9d3b" (UID: "0c0c0d06-870e-469e-bacf-dc5aa8af9d3b"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:11:23 crc kubenswrapper[4985]: I0127 09:11:23.751761 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c0c0d06-870e-469e-bacf-dc5aa8af9d3b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0c0c0d06-870e-469e-bacf-dc5aa8af9d3b" (UID: "0c0c0d06-870e-469e-bacf-dc5aa8af9d3b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:11:23 crc kubenswrapper[4985]: I0127 09:11:23.752687 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c0c0d06-870e-469e-bacf-dc5aa8af9d3b-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "0c0c0d06-870e-469e-bacf-dc5aa8af9d3b" (UID: "0c0c0d06-870e-469e-bacf-dc5aa8af9d3b"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:11:23 crc kubenswrapper[4985]: I0127 09:11:23.767115 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c0c0d06-870e-469e-bacf-dc5aa8af9d3b-scripts" (OuterVolumeSpecName: "scripts") pod "0c0c0d06-870e-469e-bacf-dc5aa8af9d3b" (UID: "0c0c0d06-870e-469e-bacf-dc5aa8af9d3b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:11:23 crc kubenswrapper[4985]: I0127 09:11:23.824760 4985 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/0c0c0d06-870e-469e-bacf-dc5aa8af9d3b-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 27 09:11:23 crc kubenswrapper[4985]: I0127 09:11:23.825101 4985 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/0c0c0d06-870e-469e-bacf-dc5aa8af9d3b-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 27 09:11:23 crc kubenswrapper[4985]: I0127 09:11:23.825115 4985 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/0c0c0d06-870e-469e-bacf-dc5aa8af9d3b-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 27 09:11:23 crc kubenswrapper[4985]: I0127 09:11:23.825126 4985 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/0c0c0d06-870e-469e-bacf-dc5aa8af9d3b-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 27 09:11:23 crc kubenswrapper[4985]: I0127 09:11:23.825139 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c0c0d06-870e-469e-bacf-dc5aa8af9d3b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:11:23 crc kubenswrapper[4985]: I0127 09:11:23.825152 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qlqlx\" (UniqueName: 
\"kubernetes.io/projected/0c0c0d06-870e-469e-bacf-dc5aa8af9d3b-kube-api-access-qlqlx\") on node \"crc\" DevicePath \"\"" Jan 27 09:11:23 crc kubenswrapper[4985]: I0127 09:11:23.825167 4985 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0c0c0d06-870e-469e-bacf-dc5aa8af9d3b-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 09:11:24 crc kubenswrapper[4985]: I0127 09:11:24.249900 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-nzqqd" event={"ID":"0c0c0d06-870e-469e-bacf-dc5aa8af9d3b","Type":"ContainerDied","Data":"c388601c58c9df1a2760fff827ae1ea876abab135df300117e5c6fcd7d1f275a"} Jan 27 09:11:24 crc kubenswrapper[4985]: I0127 09:11:24.249945 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c388601c58c9df1a2760fff827ae1ea876abab135df300117e5c6fcd7d1f275a" Jan 27 09:11:24 crc kubenswrapper[4985]: I0127 09:11:24.250007 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-nzqqd" Jan 27 09:11:24 crc kubenswrapper[4985]: I0127 09:11:24.721358 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-rvjmw"] Jan 27 09:11:24 crc kubenswrapper[4985]: I0127 09:11:24.728859 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-rvjmw"] Jan 27 09:11:26 crc kubenswrapper[4985]: I0127 09:11:26.465664 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6148d0b5-adcf-4096-9499-eae7a0c703f0" path="/var/lib/kubelet/pods/6148d0b5-adcf-4096-9499-eae7a0c703f0/volumes" Jan 27 09:11:27 crc kubenswrapper[4985]: I0127 09:11:27.691610 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-2zjxh" podUID="d2f52eee-3926-4ed2-9058-4e159f11a6cf" containerName="ovn-controller" probeResult="failure" output=< Jan 27 09:11:27 crc kubenswrapper[4985]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 27 09:11:27 crc kubenswrapper[4985]: > Jan 27 09:11:27 crc kubenswrapper[4985]: I0127 09:11:27.793118 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-m82tc" Jan 27 09:11:27 crc kubenswrapper[4985]: I0127 09:11:27.802720 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-m82tc" Jan 27 09:11:28 crc kubenswrapper[4985]: I0127 09:11:28.024409 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-2zjxh-config-22tjl"] Jan 27 09:11:28 crc kubenswrapper[4985]: E0127 09:11:28.024790 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c0c0d06-870e-469e-bacf-dc5aa8af9d3b" containerName="swift-ring-rebalance" Jan 27 09:11:28 crc kubenswrapper[4985]: I0127 09:11:28.024810 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c0c0d06-870e-469e-bacf-dc5aa8af9d3b" 
containerName="swift-ring-rebalance" Jan 27 09:11:28 crc kubenswrapper[4985]: I0127 09:11:28.025026 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c0c0d06-870e-469e-bacf-dc5aa8af9d3b" containerName="swift-ring-rebalance" Jan 27 09:11:28 crc kubenswrapper[4985]: I0127 09:11:28.025578 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-2zjxh-config-22tjl" Jan 27 09:11:28 crc kubenswrapper[4985]: I0127 09:11:28.027961 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 27 09:11:28 crc kubenswrapper[4985]: I0127 09:11:28.042225 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-2zjxh-config-22tjl"] Jan 27 09:11:28 crc kubenswrapper[4985]: I0127 09:11:28.100965 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f3bacc3a-c56d-4e81-900f-3ef54d353593-scripts\") pod \"ovn-controller-2zjxh-config-22tjl\" (UID: \"f3bacc3a-c56d-4e81-900f-3ef54d353593\") " pod="openstack/ovn-controller-2zjxh-config-22tjl" Jan 27 09:11:28 crc kubenswrapper[4985]: I0127 09:11:28.101043 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f3bacc3a-c56d-4e81-900f-3ef54d353593-var-run-ovn\") pod \"ovn-controller-2zjxh-config-22tjl\" (UID: \"f3bacc3a-c56d-4e81-900f-3ef54d353593\") " pod="openstack/ovn-controller-2zjxh-config-22tjl" Jan 27 09:11:28 crc kubenswrapper[4985]: I0127 09:11:28.101097 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f3bacc3a-c56d-4e81-900f-3ef54d353593-var-run\") pod \"ovn-controller-2zjxh-config-22tjl\" (UID: \"f3bacc3a-c56d-4e81-900f-3ef54d353593\") " pod="openstack/ovn-controller-2zjxh-config-22tjl" 
Jan 27 09:11:28 crc kubenswrapper[4985]: I0127 09:11:28.101124 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f3bacc3a-c56d-4e81-900f-3ef54d353593-var-log-ovn\") pod \"ovn-controller-2zjxh-config-22tjl\" (UID: \"f3bacc3a-c56d-4e81-900f-3ef54d353593\") " pod="openstack/ovn-controller-2zjxh-config-22tjl" Jan 27 09:11:28 crc kubenswrapper[4985]: I0127 09:11:28.101159 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccnqg\" (UniqueName: \"kubernetes.io/projected/f3bacc3a-c56d-4e81-900f-3ef54d353593-kube-api-access-ccnqg\") pod \"ovn-controller-2zjxh-config-22tjl\" (UID: \"f3bacc3a-c56d-4e81-900f-3ef54d353593\") " pod="openstack/ovn-controller-2zjxh-config-22tjl" Jan 27 09:11:28 crc kubenswrapper[4985]: I0127 09:11:28.101209 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f3bacc3a-c56d-4e81-900f-3ef54d353593-additional-scripts\") pod \"ovn-controller-2zjxh-config-22tjl\" (UID: \"f3bacc3a-c56d-4e81-900f-3ef54d353593\") " pod="openstack/ovn-controller-2zjxh-config-22tjl" Jan 27 09:11:28 crc kubenswrapper[4985]: I0127 09:11:28.202406 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f3bacc3a-c56d-4e81-900f-3ef54d353593-scripts\") pod \"ovn-controller-2zjxh-config-22tjl\" (UID: \"f3bacc3a-c56d-4e81-900f-3ef54d353593\") " pod="openstack/ovn-controller-2zjxh-config-22tjl" Jan 27 09:11:28 crc kubenswrapper[4985]: I0127 09:11:28.202466 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f3bacc3a-c56d-4e81-900f-3ef54d353593-var-run-ovn\") pod \"ovn-controller-2zjxh-config-22tjl\" (UID: \"f3bacc3a-c56d-4e81-900f-3ef54d353593\") " 
pod="openstack/ovn-controller-2zjxh-config-22tjl" Jan 27 09:11:28 crc kubenswrapper[4985]: I0127 09:11:28.202825 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f3bacc3a-c56d-4e81-900f-3ef54d353593-var-run\") pod \"ovn-controller-2zjxh-config-22tjl\" (UID: \"f3bacc3a-c56d-4e81-900f-3ef54d353593\") " pod="openstack/ovn-controller-2zjxh-config-22tjl" Jan 27 09:11:28 crc kubenswrapper[4985]: I0127 09:11:28.202851 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f3bacc3a-c56d-4e81-900f-3ef54d353593-var-log-ovn\") pod \"ovn-controller-2zjxh-config-22tjl\" (UID: \"f3bacc3a-c56d-4e81-900f-3ef54d353593\") " pod="openstack/ovn-controller-2zjxh-config-22tjl" Jan 27 09:11:28 crc kubenswrapper[4985]: I0127 09:11:28.202880 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ccnqg\" (UniqueName: \"kubernetes.io/projected/f3bacc3a-c56d-4e81-900f-3ef54d353593-kube-api-access-ccnqg\") pod \"ovn-controller-2zjxh-config-22tjl\" (UID: \"f3bacc3a-c56d-4e81-900f-3ef54d353593\") " pod="openstack/ovn-controller-2zjxh-config-22tjl" Jan 27 09:11:28 crc kubenswrapper[4985]: I0127 09:11:28.202918 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f3bacc3a-c56d-4e81-900f-3ef54d353593-additional-scripts\") pod \"ovn-controller-2zjxh-config-22tjl\" (UID: \"f3bacc3a-c56d-4e81-900f-3ef54d353593\") " pod="openstack/ovn-controller-2zjxh-config-22tjl" Jan 27 09:11:28 crc kubenswrapper[4985]: I0127 09:11:28.203476 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f3bacc3a-c56d-4e81-900f-3ef54d353593-additional-scripts\") pod \"ovn-controller-2zjxh-config-22tjl\" (UID: \"f3bacc3a-c56d-4e81-900f-3ef54d353593\") " 
pod="openstack/ovn-controller-2zjxh-config-22tjl" Jan 27 09:11:28 crc kubenswrapper[4985]: I0127 09:11:28.202773 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f3bacc3a-c56d-4e81-900f-3ef54d353593-var-run-ovn\") pod \"ovn-controller-2zjxh-config-22tjl\" (UID: \"f3bacc3a-c56d-4e81-900f-3ef54d353593\") " pod="openstack/ovn-controller-2zjxh-config-22tjl" Jan 27 09:11:28 crc kubenswrapper[4985]: I0127 09:11:28.203567 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f3bacc3a-c56d-4e81-900f-3ef54d353593-var-run\") pod \"ovn-controller-2zjxh-config-22tjl\" (UID: \"f3bacc3a-c56d-4e81-900f-3ef54d353593\") " pod="openstack/ovn-controller-2zjxh-config-22tjl" Jan 27 09:11:28 crc kubenswrapper[4985]: I0127 09:11:28.203609 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f3bacc3a-c56d-4e81-900f-3ef54d353593-var-log-ovn\") pod \"ovn-controller-2zjxh-config-22tjl\" (UID: \"f3bacc3a-c56d-4e81-900f-3ef54d353593\") " pod="openstack/ovn-controller-2zjxh-config-22tjl" Jan 27 09:11:28 crc kubenswrapper[4985]: I0127 09:11:28.204429 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f3bacc3a-c56d-4e81-900f-3ef54d353593-scripts\") pod \"ovn-controller-2zjxh-config-22tjl\" (UID: \"f3bacc3a-c56d-4e81-900f-3ef54d353593\") " pod="openstack/ovn-controller-2zjxh-config-22tjl" Jan 27 09:11:28 crc kubenswrapper[4985]: I0127 09:11:28.236533 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccnqg\" (UniqueName: \"kubernetes.io/projected/f3bacc3a-c56d-4e81-900f-3ef54d353593-kube-api-access-ccnqg\") pod \"ovn-controller-2zjxh-config-22tjl\" (UID: \"f3bacc3a-c56d-4e81-900f-3ef54d353593\") " pod="openstack/ovn-controller-2zjxh-config-22tjl" Jan 27 09:11:28 crc 
kubenswrapper[4985]: I0127 09:11:28.349961 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-2zjxh-config-22tjl" Jan 27 09:11:28 crc kubenswrapper[4985]: I0127 09:11:28.842756 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-2zjxh-config-22tjl"] Jan 27 09:11:29 crc kubenswrapper[4985]: I0127 09:11:29.290777 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-2zjxh-config-22tjl" event={"ID":"f3bacc3a-c56d-4e81-900f-3ef54d353593","Type":"ContainerStarted","Data":"dd1caa9259f8df02ba58bd4697ec24f0820883b5b44394d952d0efda370e4f9c"} Jan 27 09:11:29 crc kubenswrapper[4985]: I0127 09:11:29.734205 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-ml6s7"] Jan 27 09:11:29 crc kubenswrapper[4985]: I0127 09:11:29.735938 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-ml6s7" Jan 27 09:11:29 crc kubenswrapper[4985]: I0127 09:11:29.738790 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 27 09:11:29 crc kubenswrapper[4985]: I0127 09:11:29.747129 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-ml6s7"] Jan 27 09:11:29 crc kubenswrapper[4985]: I0127 09:11:29.834142 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vs892\" (UniqueName: \"kubernetes.io/projected/07f893c0-3ad5-46a5-b005-bcbdd13e7b09-kube-api-access-vs892\") pod \"root-account-create-update-ml6s7\" (UID: \"07f893c0-3ad5-46a5-b005-bcbdd13e7b09\") " pod="openstack/root-account-create-update-ml6s7" Jan 27 09:11:29 crc kubenswrapper[4985]: I0127 09:11:29.834257 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/07f893c0-3ad5-46a5-b005-bcbdd13e7b09-operator-scripts\") pod \"root-account-create-update-ml6s7\" (UID: \"07f893c0-3ad5-46a5-b005-bcbdd13e7b09\") " pod="openstack/root-account-create-update-ml6s7" Jan 27 09:11:29 crc kubenswrapper[4985]: I0127 09:11:29.935673 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vs892\" (UniqueName: \"kubernetes.io/projected/07f893c0-3ad5-46a5-b005-bcbdd13e7b09-kube-api-access-vs892\") pod \"root-account-create-update-ml6s7\" (UID: \"07f893c0-3ad5-46a5-b005-bcbdd13e7b09\") " pod="openstack/root-account-create-update-ml6s7" Jan 27 09:11:29 crc kubenswrapper[4985]: I0127 09:11:29.935746 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07f893c0-3ad5-46a5-b005-bcbdd13e7b09-operator-scripts\") pod \"root-account-create-update-ml6s7\" (UID: \"07f893c0-3ad5-46a5-b005-bcbdd13e7b09\") " pod="openstack/root-account-create-update-ml6s7" Jan 27 09:11:29 crc kubenswrapper[4985]: I0127 09:11:29.936987 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07f893c0-3ad5-46a5-b005-bcbdd13e7b09-operator-scripts\") pod \"root-account-create-update-ml6s7\" (UID: \"07f893c0-3ad5-46a5-b005-bcbdd13e7b09\") " pod="openstack/root-account-create-update-ml6s7" Jan 27 09:11:29 crc kubenswrapper[4985]: I0127 09:11:29.957970 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vs892\" (UniqueName: \"kubernetes.io/projected/07f893c0-3ad5-46a5-b005-bcbdd13e7b09-kube-api-access-vs892\") pod \"root-account-create-update-ml6s7\" (UID: \"07f893c0-3ad5-46a5-b005-bcbdd13e7b09\") " pod="openstack/root-account-create-update-ml6s7" Jan 27 09:11:30 crc kubenswrapper[4985]: I0127 09:11:30.052464 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-ml6s7" Jan 27 09:11:30 crc kubenswrapper[4985]: I0127 09:11:30.302281 4985 generic.go:334] "Generic (PLEG): container finished" podID="f3bacc3a-c56d-4e81-900f-3ef54d353593" containerID="21fb31c114570373f2aefe08ae052a814b7f260605d4f1b18fb18d7b651763b1" exitCode=0 Jan 27 09:11:30 crc kubenswrapper[4985]: I0127 09:11:30.302340 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-2zjxh-config-22tjl" event={"ID":"f3bacc3a-c56d-4e81-900f-3ef54d353593","Type":"ContainerDied","Data":"21fb31c114570373f2aefe08ae052a814b7f260605d4f1b18fb18d7b651763b1"} Jan 27 09:11:32 crc kubenswrapper[4985]: I0127 09:11:32.693522 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-2zjxh" Jan 27 09:11:36 crc kubenswrapper[4985]: I0127 09:11:36.177370 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/50364737-e2dc-4bd7-ba5a-97f39e232236-etc-swift\") pod \"swift-storage-0\" (UID: \"50364737-e2dc-4bd7-ba5a-97f39e232236\") " pod="openstack/swift-storage-0" Jan 27 09:11:36 crc kubenswrapper[4985]: I0127 09:11:36.186030 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/50364737-e2dc-4bd7-ba5a-97f39e232236-etc-swift\") pod \"swift-storage-0\" (UID: \"50364737-e2dc-4bd7-ba5a-97f39e232236\") " pod="openstack/swift-storage-0" Jan 27 09:11:36 crc kubenswrapper[4985]: I0127 09:11:36.310072 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-2zjxh-config-22tjl" Jan 27 09:11:36 crc kubenswrapper[4985]: I0127 09:11:36.361181 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-2zjxh-config-22tjl" event={"ID":"f3bacc3a-c56d-4e81-900f-3ef54d353593","Type":"ContainerDied","Data":"dd1caa9259f8df02ba58bd4697ec24f0820883b5b44394d952d0efda370e4f9c"} Jan 27 09:11:36 crc kubenswrapper[4985]: I0127 09:11:36.361227 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd1caa9259f8df02ba58bd4697ec24f0820883b5b44394d952d0efda370e4f9c" Jan 27 09:11:36 crc kubenswrapper[4985]: I0127 09:11:36.361243 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-2zjxh-config-22tjl" Jan 27 09:11:36 crc kubenswrapper[4985]: I0127 09:11:36.380300 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f3bacc3a-c56d-4e81-900f-3ef54d353593-var-log-ovn\") pod \"f3bacc3a-c56d-4e81-900f-3ef54d353593\" (UID: \"f3bacc3a-c56d-4e81-900f-3ef54d353593\") " Jan 27 09:11:36 crc kubenswrapper[4985]: I0127 09:11:36.380361 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ccnqg\" (UniqueName: \"kubernetes.io/projected/f3bacc3a-c56d-4e81-900f-3ef54d353593-kube-api-access-ccnqg\") pod \"f3bacc3a-c56d-4e81-900f-3ef54d353593\" (UID: \"f3bacc3a-c56d-4e81-900f-3ef54d353593\") " Jan 27 09:11:36 crc kubenswrapper[4985]: I0127 09:11:36.380393 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f3bacc3a-c56d-4e81-900f-3ef54d353593-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "f3bacc3a-c56d-4e81-900f-3ef54d353593" (UID: "f3bacc3a-c56d-4e81-900f-3ef54d353593"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 09:11:36 crc kubenswrapper[4985]: I0127 09:11:36.380404 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f3bacc3a-c56d-4e81-900f-3ef54d353593-additional-scripts\") pod \"f3bacc3a-c56d-4e81-900f-3ef54d353593\" (UID: \"f3bacc3a-c56d-4e81-900f-3ef54d353593\") " Jan 27 09:11:36 crc kubenswrapper[4985]: I0127 09:11:36.380492 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f3bacc3a-c56d-4e81-900f-3ef54d353593-scripts\") pod \"f3bacc3a-c56d-4e81-900f-3ef54d353593\" (UID: \"f3bacc3a-c56d-4e81-900f-3ef54d353593\") " Jan 27 09:11:36 crc kubenswrapper[4985]: I0127 09:11:36.380627 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f3bacc3a-c56d-4e81-900f-3ef54d353593-var-run-ovn\") pod \"f3bacc3a-c56d-4e81-900f-3ef54d353593\" (UID: \"f3bacc3a-c56d-4e81-900f-3ef54d353593\") " Jan 27 09:11:36 crc kubenswrapper[4985]: I0127 09:11:36.380661 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f3bacc3a-c56d-4e81-900f-3ef54d353593-var-run\") pod \"f3bacc3a-c56d-4e81-900f-3ef54d353593\" (UID: \"f3bacc3a-c56d-4e81-900f-3ef54d353593\") " Jan 27 09:11:36 crc kubenswrapper[4985]: I0127 09:11:36.381243 4985 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f3bacc3a-c56d-4e81-900f-3ef54d353593-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 27 09:11:36 crc kubenswrapper[4985]: I0127 09:11:36.381283 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f3bacc3a-c56d-4e81-900f-3ef54d353593-var-run" (OuterVolumeSpecName: "var-run") pod "f3bacc3a-c56d-4e81-900f-3ef54d353593" 
(UID: "f3bacc3a-c56d-4e81-900f-3ef54d353593"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 09:11:36 crc kubenswrapper[4985]: I0127 09:11:36.381310 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f3bacc3a-c56d-4e81-900f-3ef54d353593-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "f3bacc3a-c56d-4e81-900f-3ef54d353593" (UID: "f3bacc3a-c56d-4e81-900f-3ef54d353593"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 09:11:36 crc kubenswrapper[4985]: I0127 09:11:36.381827 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3bacc3a-c56d-4e81-900f-3ef54d353593-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "f3bacc3a-c56d-4e81-900f-3ef54d353593" (UID: "f3bacc3a-c56d-4e81-900f-3ef54d353593"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:11:36 crc kubenswrapper[4985]: I0127 09:11:36.382220 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3bacc3a-c56d-4e81-900f-3ef54d353593-scripts" (OuterVolumeSpecName: "scripts") pod "f3bacc3a-c56d-4e81-900f-3ef54d353593" (UID: "f3bacc3a-c56d-4e81-900f-3ef54d353593"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:11:36 crc kubenswrapper[4985]: I0127 09:11:36.385528 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3bacc3a-c56d-4e81-900f-3ef54d353593-kube-api-access-ccnqg" (OuterVolumeSpecName: "kube-api-access-ccnqg") pod "f3bacc3a-c56d-4e81-900f-3ef54d353593" (UID: "f3bacc3a-c56d-4e81-900f-3ef54d353593"). InnerVolumeSpecName "kube-api-access-ccnqg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:11:36 crc kubenswrapper[4985]: I0127 09:11:36.400506 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 27 09:11:36 crc kubenswrapper[4985]: I0127 09:11:36.483331 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ccnqg\" (UniqueName: \"kubernetes.io/projected/f3bacc3a-c56d-4e81-900f-3ef54d353593-kube-api-access-ccnqg\") on node \"crc\" DevicePath \"\"" Jan 27 09:11:36 crc kubenswrapper[4985]: I0127 09:11:36.483490 4985 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f3bacc3a-c56d-4e81-900f-3ef54d353593-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 09:11:36 crc kubenswrapper[4985]: I0127 09:11:36.483582 4985 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f3bacc3a-c56d-4e81-900f-3ef54d353593-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 09:11:36 crc kubenswrapper[4985]: I0127 09:11:36.483666 4985 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f3bacc3a-c56d-4e81-900f-3ef54d353593-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 27 09:11:36 crc kubenswrapper[4985]: I0127 09:11:36.483744 4985 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f3bacc3a-c56d-4e81-900f-3ef54d353593-var-run\") on node \"crc\" DevicePath \"\"" Jan 27 09:11:36 crc kubenswrapper[4985]: I0127 09:11:36.560218 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-ml6s7"] Jan 27 09:11:36 crc kubenswrapper[4985]: W0127 09:11:36.574059 4985 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod07f893c0_3ad5_46a5_b005_bcbdd13e7b09.slice/crio-e968e3e9c60a224d7653f62b82267f75e94082a98638993f2a781d9a017524c4 WatchSource:0}: Error finding container e968e3e9c60a224d7653f62b82267f75e94082a98638993f2a781d9a017524c4: Status 404 returned error can't find the container with id e968e3e9c60a224d7653f62b82267f75e94082a98638993f2a781d9a017524c4 Jan 27 09:11:36 crc kubenswrapper[4985]: I0127 09:11:36.581766 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 27 09:11:36 crc kubenswrapper[4985]: I0127 09:11:36.937917 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 27 09:11:36 crc kubenswrapper[4985]: W0127 09:11:36.945765 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod50364737_e2dc_4bd7_ba5a_97f39e232236.slice/crio-36c48092d2f2434f2b9e183c24005450e0f24a0890df04bf78060b4d3fe1cdca WatchSource:0}: Error finding container 36c48092d2f2434f2b9e183c24005450e0f24a0890df04bf78060b4d3fe1cdca: Status 404 returned error can't find the container with id 36c48092d2f2434f2b9e183c24005450e0f24a0890df04bf78060b4d3fe1cdca Jan 27 09:11:37 crc kubenswrapper[4985]: I0127 09:11:37.401952 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-tsdq2" event={"ID":"eb511c2d-40ad-47b6-a515-1101d1ff3b5f","Type":"ContainerStarted","Data":"0885d30dadcc106099923fc0987e04782af9e54baa246194b717ec0e98ac9ba7"} Jan 27 09:11:37 crc kubenswrapper[4985]: I0127 09:11:37.406688 4985 generic.go:334] "Generic (PLEG): container finished" podID="07f893c0-3ad5-46a5-b005-bcbdd13e7b09" containerID="d10a79d1d9448588e3b9cb2612006bb8f2575ab3b0feff0ae8b09006dd5e1695" exitCode=0 Jan 27 09:11:37 crc kubenswrapper[4985]: I0127 09:11:37.406847 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/root-account-create-update-ml6s7" event={"ID":"07f893c0-3ad5-46a5-b005-bcbdd13e7b09","Type":"ContainerDied","Data":"d10a79d1d9448588e3b9cb2612006bb8f2575ab3b0feff0ae8b09006dd5e1695"} Jan 27 09:11:37 crc kubenswrapper[4985]: I0127 09:11:37.406887 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-ml6s7" event={"ID":"07f893c0-3ad5-46a5-b005-bcbdd13e7b09","Type":"ContainerStarted","Data":"e968e3e9c60a224d7653f62b82267f75e94082a98638993f2a781d9a017524c4"} Jan 27 09:11:37 crc kubenswrapper[4985]: I0127 09:11:37.425754 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"50364737-e2dc-4bd7-ba5a-97f39e232236","Type":"ContainerStarted","Data":"36c48092d2f2434f2b9e183c24005450e0f24a0890df04bf78060b4d3fe1cdca"} Jan 27 09:11:37 crc kubenswrapper[4985]: I0127 09:11:37.426985 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-2zjxh-config-22tjl"] Jan 27 09:11:37 crc kubenswrapper[4985]: I0127 09:11:37.447881 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-tsdq2" podStartSLOduration=3.0006376 podStartE2EDuration="16.447858297s" podCreationTimestamp="2026-01-27 09:11:21 +0000 UTC" firstStartedPulling="2026-01-27 09:11:22.753524322 +0000 UTC m=+1067.044619163" lastFinishedPulling="2026-01-27 09:11:36.200745019 +0000 UTC m=+1080.491839860" observedRunningTime="2026-01-27 09:11:37.426158932 +0000 UTC m=+1081.717253773" watchObservedRunningTime="2026-01-27 09:11:37.447858297 +0000 UTC m=+1081.738953138" Jan 27 09:11:37 crc kubenswrapper[4985]: I0127 09:11:37.449786 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-2zjxh-config-22tjl"] Jan 27 09:11:37 crc kubenswrapper[4985]: I0127 09:11:37.970309 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 27 09:11:38 crc kubenswrapper[4985]: I0127 
09:11:38.233671 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 27 09:11:38 crc kubenswrapper[4985]: I0127 09:11:38.333343 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-w6tww"] Jan 27 09:11:38 crc kubenswrapper[4985]: E0127 09:11:38.333794 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3bacc3a-c56d-4e81-900f-3ef54d353593" containerName="ovn-config" Jan 27 09:11:38 crc kubenswrapper[4985]: I0127 09:11:38.333810 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3bacc3a-c56d-4e81-900f-3ef54d353593" containerName="ovn-config" Jan 27 09:11:38 crc kubenswrapper[4985]: I0127 09:11:38.334080 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3bacc3a-c56d-4e81-900f-3ef54d353593" containerName="ovn-config" Jan 27 09:11:38 crc kubenswrapper[4985]: I0127 09:11:38.334914 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-w6tww" Jan 27 09:11:38 crc kubenswrapper[4985]: I0127 09:11:38.354448 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-w6tww"] Jan 27 09:11:38 crc kubenswrapper[4985]: I0127 09:11:38.430765 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqt5f\" (UniqueName: \"kubernetes.io/projected/5246fdaf-1bb4-454a-bf60-b0372f3ae653-kube-api-access-tqt5f\") pod \"barbican-db-create-w6tww\" (UID: \"5246fdaf-1bb4-454a-bf60-b0372f3ae653\") " pod="openstack/barbican-db-create-w6tww" Jan 27 09:11:38 crc kubenswrapper[4985]: I0127 09:11:38.431134 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5246fdaf-1bb4-454a-bf60-b0372f3ae653-operator-scripts\") pod \"barbican-db-create-w6tww\" (UID: \"5246fdaf-1bb4-454a-bf60-b0372f3ae653\") " 
pod="openstack/barbican-db-create-w6tww" Jan 27 09:11:38 crc kubenswrapper[4985]: I0127 09:11:38.445076 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"50364737-e2dc-4bd7-ba5a-97f39e232236","Type":"ContainerStarted","Data":"b965c71e462983a7308e2dd4803179cd01843843a618e91ef9cb2f646c273977"} Jan 27 09:11:38 crc kubenswrapper[4985]: I0127 09:11:38.463013 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3bacc3a-c56d-4e81-900f-3ef54d353593" path="/var/lib/kubelet/pods/f3bacc3a-c56d-4e81-900f-3ef54d353593/volumes" Jan 27 09:11:38 crc kubenswrapper[4985]: I0127 09:11:38.514611 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-1f54-account-create-update-pdcwv"] Jan 27 09:11:38 crc kubenswrapper[4985]: I0127 09:11:38.522258 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-1f54-account-create-update-pdcwv" Jan 27 09:11:38 crc kubenswrapper[4985]: I0127 09:11:38.528536 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 27 09:11:38 crc kubenswrapper[4985]: I0127 09:11:38.532580 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqt5f\" (UniqueName: \"kubernetes.io/projected/5246fdaf-1bb4-454a-bf60-b0372f3ae653-kube-api-access-tqt5f\") pod \"barbican-db-create-w6tww\" (UID: \"5246fdaf-1bb4-454a-bf60-b0372f3ae653\") " pod="openstack/barbican-db-create-w6tww" Jan 27 09:11:38 crc kubenswrapper[4985]: I0127 09:11:38.532689 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dfa5488d-50d7-4f03-8187-83db05387838-operator-scripts\") pod \"cinder-1f54-account-create-update-pdcwv\" (UID: \"dfa5488d-50d7-4f03-8187-83db05387838\") " pod="openstack/cinder-1f54-account-create-update-pdcwv" Jan 27 09:11:38 crc kubenswrapper[4985]: I0127 
09:11:38.532736 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ls8cj\" (UniqueName: \"kubernetes.io/projected/dfa5488d-50d7-4f03-8187-83db05387838-kube-api-access-ls8cj\") pod \"cinder-1f54-account-create-update-pdcwv\" (UID: \"dfa5488d-50d7-4f03-8187-83db05387838\") " pod="openstack/cinder-1f54-account-create-update-pdcwv" Jan 27 09:11:38 crc kubenswrapper[4985]: I0127 09:11:38.532782 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5246fdaf-1bb4-454a-bf60-b0372f3ae653-operator-scripts\") pod \"barbican-db-create-w6tww\" (UID: \"5246fdaf-1bb4-454a-bf60-b0372f3ae653\") " pod="openstack/barbican-db-create-w6tww" Jan 27 09:11:38 crc kubenswrapper[4985]: I0127 09:11:38.534565 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5246fdaf-1bb4-454a-bf60-b0372f3ae653-operator-scripts\") pod \"barbican-db-create-w6tww\" (UID: \"5246fdaf-1bb4-454a-bf60-b0372f3ae653\") " pod="openstack/barbican-db-create-w6tww" Jan 27 09:11:38 crc kubenswrapper[4985]: I0127 09:11:38.555441 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-1f54-account-create-update-pdcwv"] Jan 27 09:11:38 crc kubenswrapper[4985]: I0127 09:11:38.559937 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqt5f\" (UniqueName: \"kubernetes.io/projected/5246fdaf-1bb4-454a-bf60-b0372f3ae653-kube-api-access-tqt5f\") pod \"barbican-db-create-w6tww\" (UID: \"5246fdaf-1bb4-454a-bf60-b0372f3ae653\") " pod="openstack/barbican-db-create-w6tww" Jan 27 09:11:38 crc kubenswrapper[4985]: I0127 09:11:38.606161 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-m6p82"] Jan 27 09:11:38 crc kubenswrapper[4985]: I0127 09:11:38.608975 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-m6p82" Jan 27 09:11:38 crc kubenswrapper[4985]: I0127 09:11:38.633786 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-d487-account-create-update-hppt6"] Jan 27 09:11:38 crc kubenswrapper[4985]: I0127 09:11:38.634018 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b9c783dc-2abc-4a6a-99d0-d56e5826898f-operator-scripts\") pod \"cinder-db-create-m6p82\" (UID: \"b9c783dc-2abc-4a6a-99d0-d56e5826898f\") " pod="openstack/cinder-db-create-m6p82" Jan 27 09:11:38 crc kubenswrapper[4985]: I0127 09:11:38.634138 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbmwc\" (UniqueName: \"kubernetes.io/projected/b9c783dc-2abc-4a6a-99d0-d56e5826898f-kube-api-access-zbmwc\") pod \"cinder-db-create-m6p82\" (UID: \"b9c783dc-2abc-4a6a-99d0-d56e5826898f\") " pod="openstack/cinder-db-create-m6p82" Jan 27 09:11:38 crc kubenswrapper[4985]: I0127 09:11:38.634260 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dfa5488d-50d7-4f03-8187-83db05387838-operator-scripts\") pod \"cinder-1f54-account-create-update-pdcwv\" (UID: \"dfa5488d-50d7-4f03-8187-83db05387838\") " pod="openstack/cinder-1f54-account-create-update-pdcwv" Jan 27 09:11:38 crc kubenswrapper[4985]: I0127 09:11:38.634338 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ls8cj\" (UniqueName: \"kubernetes.io/projected/dfa5488d-50d7-4f03-8187-83db05387838-kube-api-access-ls8cj\") pod \"cinder-1f54-account-create-update-pdcwv\" (UID: \"dfa5488d-50d7-4f03-8187-83db05387838\") " pod="openstack/cinder-1f54-account-create-update-pdcwv" Jan 27 09:11:38 crc kubenswrapper[4985]: I0127 09:11:38.635226 4985 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/barbican-d487-account-create-update-hppt6" Jan 27 09:11:38 crc kubenswrapper[4985]: I0127 09:11:38.635666 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dfa5488d-50d7-4f03-8187-83db05387838-operator-scripts\") pod \"cinder-1f54-account-create-update-pdcwv\" (UID: \"dfa5488d-50d7-4f03-8187-83db05387838\") " pod="openstack/cinder-1f54-account-create-update-pdcwv" Jan 27 09:11:38 crc kubenswrapper[4985]: I0127 09:11:38.637392 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 27 09:11:38 crc kubenswrapper[4985]: I0127 09:11:38.640721 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-m6p82"] Jan 27 09:11:38 crc kubenswrapper[4985]: I0127 09:11:38.650339 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-d487-account-create-update-hppt6"] Jan 27 09:11:38 crc kubenswrapper[4985]: I0127 09:11:38.673241 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ls8cj\" (UniqueName: \"kubernetes.io/projected/dfa5488d-50d7-4f03-8187-83db05387838-kube-api-access-ls8cj\") pod \"cinder-1f54-account-create-update-pdcwv\" (UID: \"dfa5488d-50d7-4f03-8187-83db05387838\") " pod="openstack/cinder-1f54-account-create-update-pdcwv" Jan 27 09:11:38 crc kubenswrapper[4985]: I0127 09:11:38.709984 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-rxmmb"] Jan 27 09:11:38 crc kubenswrapper[4985]: I0127 09:11:38.711264 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-rxmmb" Jan 27 09:11:38 crc kubenswrapper[4985]: I0127 09:11:38.725472 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-w6tww" Jan 27 09:11:38 crc kubenswrapper[4985]: I0127 09:11:38.735544 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b9c783dc-2abc-4a6a-99d0-d56e5826898f-operator-scripts\") pod \"cinder-db-create-m6p82\" (UID: \"b9c783dc-2abc-4a6a-99d0-d56e5826898f\") " pod="openstack/cinder-db-create-m6p82" Jan 27 09:11:38 crc kubenswrapper[4985]: I0127 09:11:38.735657 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbmwc\" (UniqueName: \"kubernetes.io/projected/b9c783dc-2abc-4a6a-99d0-d56e5826898f-kube-api-access-zbmwc\") pod \"cinder-db-create-m6p82\" (UID: \"b9c783dc-2abc-4a6a-99d0-d56e5826898f\") " pod="openstack/cinder-db-create-m6p82" Jan 27 09:11:38 crc kubenswrapper[4985]: I0127 09:11:38.736903 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b9c783dc-2abc-4a6a-99d0-d56e5826898f-operator-scripts\") pod \"cinder-db-create-m6p82\" (UID: \"b9c783dc-2abc-4a6a-99d0-d56e5826898f\") " pod="openstack/cinder-db-create-m6p82" Jan 27 09:11:38 crc kubenswrapper[4985]: I0127 09:11:38.741911 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-rxmmb"] Jan 27 09:11:38 crc kubenswrapper[4985]: I0127 09:11:38.760261 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbmwc\" (UniqueName: \"kubernetes.io/projected/b9c783dc-2abc-4a6a-99d0-d56e5826898f-kube-api-access-zbmwc\") pod \"cinder-db-create-m6p82\" (UID: \"b9c783dc-2abc-4a6a-99d0-d56e5826898f\") " pod="openstack/cinder-db-create-m6p82" Jan 27 09:11:38 crc kubenswrapper[4985]: I0127 09:11:38.839225 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/c07ec733-232f-43bf-868d-d0a25592faec-operator-scripts\") pod \"neutron-db-create-rxmmb\" (UID: \"c07ec733-232f-43bf-868d-d0a25592faec\") " pod="openstack/neutron-db-create-rxmmb" Jan 27 09:11:38 crc kubenswrapper[4985]: I0127 09:11:38.839614 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3166563-f008-4d79-911a-55c399e8d65d-operator-scripts\") pod \"barbican-d487-account-create-update-hppt6\" (UID: \"e3166563-f008-4d79-911a-55c399e8d65d\") " pod="openstack/barbican-d487-account-create-update-hppt6" Jan 27 09:11:38 crc kubenswrapper[4985]: I0127 09:11:38.839704 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ch2mn\" (UniqueName: \"kubernetes.io/projected/c07ec733-232f-43bf-868d-d0a25592faec-kube-api-access-ch2mn\") pod \"neutron-db-create-rxmmb\" (UID: \"c07ec733-232f-43bf-868d-d0a25592faec\") " pod="openstack/neutron-db-create-rxmmb" Jan 27 09:11:38 crc kubenswrapper[4985]: I0127 09:11:38.839745 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8m4q\" (UniqueName: \"kubernetes.io/projected/e3166563-f008-4d79-911a-55c399e8d65d-kube-api-access-h8m4q\") pod \"barbican-d487-account-create-update-hppt6\" (UID: \"e3166563-f008-4d79-911a-55c399e8d65d\") " pod="openstack/barbican-d487-account-create-update-hppt6" Jan 27 09:11:38 crc kubenswrapper[4985]: I0127 09:11:38.849032 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-1f54-account-create-update-pdcwv" Jan 27 09:11:38 crc kubenswrapper[4985]: I0127 09:11:38.894817 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-jw5fd"] Jan 27 09:11:38 crc kubenswrapper[4985]: I0127 09:11:38.896279 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-jw5fd" Jan 27 09:11:38 crc kubenswrapper[4985]: I0127 09:11:38.902459 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 27 09:11:38 crc kubenswrapper[4985]: I0127 09:11:38.902748 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-xr2ld" Jan 27 09:11:38 crc kubenswrapper[4985]: I0127 09:11:38.902887 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 27 09:11:38 crc kubenswrapper[4985]: I0127 09:11:38.906931 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 27 09:11:38 crc kubenswrapper[4985]: I0127 09:11:38.937503 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-m6p82" Jan 27 09:11:38 crc kubenswrapper[4985]: I0127 09:11:38.941905 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c07ec733-232f-43bf-868d-d0a25592faec-operator-scripts\") pod \"neutron-db-create-rxmmb\" (UID: \"c07ec733-232f-43bf-868d-d0a25592faec\") " pod="openstack/neutron-db-create-rxmmb" Jan 27 09:11:38 crc kubenswrapper[4985]: I0127 09:11:38.941968 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3166563-f008-4d79-911a-55c399e8d65d-operator-scripts\") pod \"barbican-d487-account-create-update-hppt6\" (UID: \"e3166563-f008-4d79-911a-55c399e8d65d\") " pod="openstack/barbican-d487-account-create-update-hppt6" Jan 27 09:11:38 crc kubenswrapper[4985]: I0127 09:11:38.942047 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ch2mn\" (UniqueName: \"kubernetes.io/projected/c07ec733-232f-43bf-868d-d0a25592faec-kube-api-access-ch2mn\") pod 
\"neutron-db-create-rxmmb\" (UID: \"c07ec733-232f-43bf-868d-d0a25592faec\") " pod="openstack/neutron-db-create-rxmmb" Jan 27 09:11:38 crc kubenswrapper[4985]: I0127 09:11:38.942097 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8m4q\" (UniqueName: \"kubernetes.io/projected/e3166563-f008-4d79-911a-55c399e8d65d-kube-api-access-h8m4q\") pod \"barbican-d487-account-create-update-hppt6\" (UID: \"e3166563-f008-4d79-911a-55c399e8d65d\") " pod="openstack/barbican-d487-account-create-update-hppt6" Jan 27 09:11:38 crc kubenswrapper[4985]: I0127 09:11:38.943264 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3166563-f008-4d79-911a-55c399e8d65d-operator-scripts\") pod \"barbican-d487-account-create-update-hppt6\" (UID: \"e3166563-f008-4d79-911a-55c399e8d65d\") " pod="openstack/barbican-d487-account-create-update-hppt6" Jan 27 09:11:38 crc kubenswrapper[4985]: I0127 09:11:38.943332 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c07ec733-232f-43bf-868d-d0a25592faec-operator-scripts\") pod \"neutron-db-create-rxmmb\" (UID: \"c07ec733-232f-43bf-868d-d0a25592faec\") " pod="openstack/neutron-db-create-rxmmb" Jan 27 09:11:39 crc kubenswrapper[4985]: I0127 09:11:39.000460 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ch2mn\" (UniqueName: \"kubernetes.io/projected/c07ec733-232f-43bf-868d-d0a25592faec-kube-api-access-ch2mn\") pod \"neutron-db-create-rxmmb\" (UID: \"c07ec733-232f-43bf-868d-d0a25592faec\") " pod="openstack/neutron-db-create-rxmmb" Jan 27 09:11:39 crc kubenswrapper[4985]: I0127 09:11:39.013414 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-ml6s7" Jan 27 09:11:39 crc kubenswrapper[4985]: I0127 09:11:39.017013 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8m4q\" (UniqueName: \"kubernetes.io/projected/e3166563-f008-4d79-911a-55c399e8d65d-kube-api-access-h8m4q\") pod \"barbican-d487-account-create-update-hppt6\" (UID: \"e3166563-f008-4d79-911a-55c399e8d65d\") " pod="openstack/barbican-d487-account-create-update-hppt6" Jan 27 09:11:39 crc kubenswrapper[4985]: I0127 09:11:39.019963 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-jw5fd"] Jan 27 09:11:39 crc kubenswrapper[4985]: I0127 09:11:39.047685 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/933bd11f-35db-489e-aacf-b2ba95de3154-config-data\") pod \"keystone-db-sync-jw5fd\" (UID: \"933bd11f-35db-489e-aacf-b2ba95de3154\") " pod="openstack/keystone-db-sync-jw5fd" Jan 27 09:11:39 crc kubenswrapper[4985]: I0127 09:11:39.048048 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxmdd\" (UniqueName: \"kubernetes.io/projected/933bd11f-35db-489e-aacf-b2ba95de3154-kube-api-access-bxmdd\") pod \"keystone-db-sync-jw5fd\" (UID: \"933bd11f-35db-489e-aacf-b2ba95de3154\") " pod="openstack/keystone-db-sync-jw5fd" Jan 27 09:11:39 crc kubenswrapper[4985]: I0127 09:11:39.048151 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/933bd11f-35db-489e-aacf-b2ba95de3154-combined-ca-bundle\") pod \"keystone-db-sync-jw5fd\" (UID: \"933bd11f-35db-489e-aacf-b2ba95de3154\") " pod="openstack/keystone-db-sync-jw5fd" Jan 27 09:11:39 crc kubenswrapper[4985]: I0127 09:11:39.048392 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-rxmmb" Jan 27 09:11:39 crc kubenswrapper[4985]: I0127 09:11:39.063668 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-8473-account-create-update-p98cs"] Jan 27 09:11:39 crc kubenswrapper[4985]: E0127 09:11:39.065211 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07f893c0-3ad5-46a5-b005-bcbdd13e7b09" containerName="mariadb-account-create-update" Jan 27 09:11:39 crc kubenswrapper[4985]: I0127 09:11:39.065238 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="07f893c0-3ad5-46a5-b005-bcbdd13e7b09" containerName="mariadb-account-create-update" Jan 27 09:11:39 crc kubenswrapper[4985]: I0127 09:11:39.075416 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="07f893c0-3ad5-46a5-b005-bcbdd13e7b09" containerName="mariadb-account-create-update" Jan 27 09:11:39 crc kubenswrapper[4985]: I0127 09:11:39.076133 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8473-account-create-update-p98cs" Jan 27 09:11:39 crc kubenswrapper[4985]: I0127 09:11:39.084032 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 27 09:11:39 crc kubenswrapper[4985]: I0127 09:11:39.091070 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8473-account-create-update-p98cs"] Jan 27 09:11:39 crc kubenswrapper[4985]: I0127 09:11:39.149242 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07f893c0-3ad5-46a5-b005-bcbdd13e7b09-operator-scripts\") pod \"07f893c0-3ad5-46a5-b005-bcbdd13e7b09\" (UID: \"07f893c0-3ad5-46a5-b005-bcbdd13e7b09\") " Jan 27 09:11:39 crc kubenswrapper[4985]: I0127 09:11:39.149523 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vs892\" (UniqueName: 
\"kubernetes.io/projected/07f893c0-3ad5-46a5-b005-bcbdd13e7b09-kube-api-access-vs892\") pod \"07f893c0-3ad5-46a5-b005-bcbdd13e7b09\" (UID: \"07f893c0-3ad5-46a5-b005-bcbdd13e7b09\") " Jan 27 09:11:39 crc kubenswrapper[4985]: I0127 09:11:39.149787 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/933bd11f-35db-489e-aacf-b2ba95de3154-combined-ca-bundle\") pod \"keystone-db-sync-jw5fd\" (UID: \"933bd11f-35db-489e-aacf-b2ba95de3154\") " pod="openstack/keystone-db-sync-jw5fd" Jan 27 09:11:39 crc kubenswrapper[4985]: I0127 09:11:39.149819 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0162801-70eb-4094-b33d-3063eb978eef-operator-scripts\") pod \"neutron-8473-account-create-update-p98cs\" (UID: \"d0162801-70eb-4094-b33d-3063eb978eef\") " pod="openstack/neutron-8473-account-create-update-p98cs" Jan 27 09:11:39 crc kubenswrapper[4985]: I0127 09:11:39.149851 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whfn5\" (UniqueName: \"kubernetes.io/projected/d0162801-70eb-4094-b33d-3063eb978eef-kube-api-access-whfn5\") pod \"neutron-8473-account-create-update-p98cs\" (UID: \"d0162801-70eb-4094-b33d-3063eb978eef\") " pod="openstack/neutron-8473-account-create-update-p98cs" Jan 27 09:11:39 crc kubenswrapper[4985]: I0127 09:11:39.149889 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/933bd11f-35db-489e-aacf-b2ba95de3154-config-data\") pod \"keystone-db-sync-jw5fd\" (UID: \"933bd11f-35db-489e-aacf-b2ba95de3154\") " pod="openstack/keystone-db-sync-jw5fd" Jan 27 09:11:39 crc kubenswrapper[4985]: I0127 09:11:39.149949 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxmdd\" (UniqueName: 
\"kubernetes.io/projected/933bd11f-35db-489e-aacf-b2ba95de3154-kube-api-access-bxmdd\") pod \"keystone-db-sync-jw5fd\" (UID: \"933bd11f-35db-489e-aacf-b2ba95de3154\") " pod="openstack/keystone-db-sync-jw5fd" Jan 27 09:11:39 crc kubenswrapper[4985]: I0127 09:11:39.150622 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07f893c0-3ad5-46a5-b005-bcbdd13e7b09-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "07f893c0-3ad5-46a5-b005-bcbdd13e7b09" (UID: "07f893c0-3ad5-46a5-b005-bcbdd13e7b09"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:11:39 crc kubenswrapper[4985]: I0127 09:11:39.155958 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07f893c0-3ad5-46a5-b005-bcbdd13e7b09-kube-api-access-vs892" (OuterVolumeSpecName: "kube-api-access-vs892") pod "07f893c0-3ad5-46a5-b005-bcbdd13e7b09" (UID: "07f893c0-3ad5-46a5-b005-bcbdd13e7b09"). InnerVolumeSpecName "kube-api-access-vs892". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:11:39 crc kubenswrapper[4985]: I0127 09:11:39.159449 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/933bd11f-35db-489e-aacf-b2ba95de3154-config-data\") pod \"keystone-db-sync-jw5fd\" (UID: \"933bd11f-35db-489e-aacf-b2ba95de3154\") " pod="openstack/keystone-db-sync-jw5fd" Jan 27 09:11:39 crc kubenswrapper[4985]: I0127 09:11:39.161619 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/933bd11f-35db-489e-aacf-b2ba95de3154-combined-ca-bundle\") pod \"keystone-db-sync-jw5fd\" (UID: \"933bd11f-35db-489e-aacf-b2ba95de3154\") " pod="openstack/keystone-db-sync-jw5fd" Jan 27 09:11:39 crc kubenswrapper[4985]: I0127 09:11:39.172267 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxmdd\" (UniqueName: \"kubernetes.io/projected/933bd11f-35db-489e-aacf-b2ba95de3154-kube-api-access-bxmdd\") pod \"keystone-db-sync-jw5fd\" (UID: \"933bd11f-35db-489e-aacf-b2ba95de3154\") " pod="openstack/keystone-db-sync-jw5fd" Jan 27 09:11:39 crc kubenswrapper[4985]: I0127 09:11:39.261069 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0162801-70eb-4094-b33d-3063eb978eef-operator-scripts\") pod \"neutron-8473-account-create-update-p98cs\" (UID: \"d0162801-70eb-4094-b33d-3063eb978eef\") " pod="openstack/neutron-8473-account-create-update-p98cs" Jan 27 09:11:39 crc kubenswrapper[4985]: I0127 09:11:39.262067 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whfn5\" (UniqueName: \"kubernetes.io/projected/d0162801-70eb-4094-b33d-3063eb978eef-kube-api-access-whfn5\") pod \"neutron-8473-account-create-update-p98cs\" (UID: \"d0162801-70eb-4094-b33d-3063eb978eef\") " 
pod="openstack/neutron-8473-account-create-update-p98cs" Jan 27 09:11:39 crc kubenswrapper[4985]: I0127 09:11:39.262341 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vs892\" (UniqueName: \"kubernetes.io/projected/07f893c0-3ad5-46a5-b005-bcbdd13e7b09-kube-api-access-vs892\") on node \"crc\" DevicePath \"\"" Jan 27 09:11:39 crc kubenswrapper[4985]: I0127 09:11:39.262360 4985 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07f893c0-3ad5-46a5-b005-bcbdd13e7b09-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 09:11:39 crc kubenswrapper[4985]: I0127 09:11:39.263476 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0162801-70eb-4094-b33d-3063eb978eef-operator-scripts\") pod \"neutron-8473-account-create-update-p98cs\" (UID: \"d0162801-70eb-4094-b33d-3063eb978eef\") " pod="openstack/neutron-8473-account-create-update-p98cs" Jan 27 09:11:39 crc kubenswrapper[4985]: I0127 09:11:39.276062 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-d487-account-create-update-hppt6" Jan 27 09:11:39 crc kubenswrapper[4985]: I0127 09:11:39.285292 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whfn5\" (UniqueName: \"kubernetes.io/projected/d0162801-70eb-4094-b33d-3063eb978eef-kube-api-access-whfn5\") pod \"neutron-8473-account-create-update-p98cs\" (UID: \"d0162801-70eb-4094-b33d-3063eb978eef\") " pod="openstack/neutron-8473-account-create-update-p98cs" Jan 27 09:11:39 crc kubenswrapper[4985]: I0127 09:11:39.290251 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-jw5fd" Jan 27 09:11:39 crc kubenswrapper[4985]: I0127 09:11:39.465174 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-ml6s7" Jan 27 09:11:39 crc kubenswrapper[4985]: I0127 09:11:39.465267 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-ml6s7" event={"ID":"07f893c0-3ad5-46a5-b005-bcbdd13e7b09","Type":"ContainerDied","Data":"e968e3e9c60a224d7653f62b82267f75e94082a98638993f2a781d9a017524c4"} Jan 27 09:11:39 crc kubenswrapper[4985]: I0127 09:11:39.465316 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e968e3e9c60a224d7653f62b82267f75e94082a98638993f2a781d9a017524c4" Jan 27 09:11:39 crc kubenswrapper[4985]: I0127 09:11:39.470970 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"50364737-e2dc-4bd7-ba5a-97f39e232236","Type":"ContainerStarted","Data":"b49588e918057608bd41297649da4be5bbf5a9a48bfb304fa84c2bf4df4a5d05"} Jan 27 09:11:39 crc kubenswrapper[4985]: I0127 09:11:39.471020 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"50364737-e2dc-4bd7-ba5a-97f39e232236","Type":"ContainerStarted","Data":"1d6cc3aea5fcb7d083cf4b530fc5e12fe7c77f6abd20e362cbcdb7c2089576db"} Jan 27 09:11:39 crc kubenswrapper[4985]: I0127 09:11:39.483936 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-1f54-account-create-update-pdcwv"] Jan 27 09:11:39 crc kubenswrapper[4985]: W0127 09:11:39.517010 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddfa5488d_50d7_4f03_8187_83db05387838.slice/crio-50c3c7f38daba2728d91dec80ba963eef3fa30cafa48e4177ffffb607bda51d6 WatchSource:0}: Error finding container 50c3c7f38daba2728d91dec80ba963eef3fa30cafa48e4177ffffb607bda51d6: Status 404 returned error can't find the container with id 50c3c7f38daba2728d91dec80ba963eef3fa30cafa48e4177ffffb607bda51d6 Jan 27 09:11:39 crc kubenswrapper[4985]: I0127 09:11:39.517625 4985 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-w6tww"] Jan 27 09:11:39 crc kubenswrapper[4985]: I0127 09:11:39.574063 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8473-account-create-update-p98cs" Jan 27 09:11:39 crc kubenswrapper[4985]: I0127 09:11:39.577784 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-m6p82"] Jan 27 09:11:39 crc kubenswrapper[4985]: W0127 09:11:39.589183 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb9c783dc_2abc_4a6a_99d0_d56e5826898f.slice/crio-6a417cd82c949f903050f1a9c113e040c50e7742c91471bb06764629e7d0cfcc WatchSource:0}: Error finding container 6a417cd82c949f903050f1a9c113e040c50e7742c91471bb06764629e7d0cfcc: Status 404 returned error can't find the container with id 6a417cd82c949f903050f1a9c113e040c50e7742c91471bb06764629e7d0cfcc Jan 27 09:11:39 crc kubenswrapper[4985]: I0127 09:11:39.705949 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-rxmmb"] Jan 27 09:11:39 crc kubenswrapper[4985]: I0127 09:11:39.718536 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-d487-account-create-update-hppt6"] Jan 27 09:11:39 crc kubenswrapper[4985]: I0127 09:11:39.838648 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-jw5fd"] Jan 27 09:11:39 crc kubenswrapper[4985]: W0127 09:11:39.852709 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod933bd11f_35db_489e_aacf_b2ba95de3154.slice/crio-ece2bdb41770ba06609361983359804cad79299c029f409f09142982e873fb5c WatchSource:0}: Error finding container ece2bdb41770ba06609361983359804cad79299c029f409f09142982e873fb5c: Status 404 returned error can't find the container with id 
ece2bdb41770ba06609361983359804cad79299c029f409f09142982e873fb5c Jan 27 09:11:39 crc kubenswrapper[4985]: I0127 09:11:39.970274 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8473-account-create-update-p98cs"] Jan 27 09:11:40 crc kubenswrapper[4985]: I0127 09:11:40.522561 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"50364737-e2dc-4bd7-ba5a-97f39e232236","Type":"ContainerStarted","Data":"48f3dee14f2a29f81af7b45cc1f312ce2f4c3a3cb3ee8f71d0503b47f8493fde"} Jan 27 09:11:40 crc kubenswrapper[4985]: I0127 09:11:40.524184 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-jw5fd" event={"ID":"933bd11f-35db-489e-aacf-b2ba95de3154","Type":"ContainerStarted","Data":"ece2bdb41770ba06609361983359804cad79299c029f409f09142982e873fb5c"} Jan 27 09:11:40 crc kubenswrapper[4985]: I0127 09:11:40.541577 4985 generic.go:334] "Generic (PLEG): container finished" podID="c07ec733-232f-43bf-868d-d0a25592faec" containerID="33adb145e1f1cd11f11fd29864020c69d36bc1c528c8d6d757292a858a0831c3" exitCode=0 Jan 27 09:11:40 crc kubenswrapper[4985]: I0127 09:11:40.541654 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-rxmmb" event={"ID":"c07ec733-232f-43bf-868d-d0a25592faec","Type":"ContainerDied","Data":"33adb145e1f1cd11f11fd29864020c69d36bc1c528c8d6d757292a858a0831c3"} Jan 27 09:11:40 crc kubenswrapper[4985]: I0127 09:11:40.541680 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-rxmmb" event={"ID":"c07ec733-232f-43bf-868d-d0a25592faec","Type":"ContainerStarted","Data":"81e8adac442bc69517f432bdc504181c94eb0f2e7c7af8ae2d69464ef1f217e1"} Jan 27 09:11:40 crc kubenswrapper[4985]: I0127 09:11:40.545962 4985 generic.go:334] "Generic (PLEG): container finished" podID="b9c783dc-2abc-4a6a-99d0-d56e5826898f" containerID="ca82b3a94a90753b3fd98871bab16a13a0b22f4bb5accb8ff68c96678e356ec4" exitCode=0 Jan 27 09:11:40 
crc kubenswrapper[4985]: I0127 09:11:40.546065 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-m6p82" event={"ID":"b9c783dc-2abc-4a6a-99d0-d56e5826898f","Type":"ContainerDied","Data":"ca82b3a94a90753b3fd98871bab16a13a0b22f4bb5accb8ff68c96678e356ec4"} Jan 27 09:11:40 crc kubenswrapper[4985]: I0127 09:11:40.547090 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-m6p82" event={"ID":"b9c783dc-2abc-4a6a-99d0-d56e5826898f","Type":"ContainerStarted","Data":"6a417cd82c949f903050f1a9c113e040c50e7742c91471bb06764629e7d0cfcc"} Jan 27 09:11:40 crc kubenswrapper[4985]: I0127 09:11:40.551796 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8473-account-create-update-p98cs" event={"ID":"d0162801-70eb-4094-b33d-3063eb978eef","Type":"ContainerStarted","Data":"7e492508355c2f0f68e2714d3c5cb6ffb387b30da8e92d1aeeae397c3c432322"} Jan 27 09:11:40 crc kubenswrapper[4985]: I0127 09:11:40.551840 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8473-account-create-update-p98cs" event={"ID":"d0162801-70eb-4094-b33d-3063eb978eef","Type":"ContainerStarted","Data":"51e7385b73e4499a5684f70c265506e45844d99330507f64c1ea642b6b6bacbe"} Jan 27 09:11:40 crc kubenswrapper[4985]: I0127 09:11:40.561962 4985 generic.go:334] "Generic (PLEG): container finished" podID="5246fdaf-1bb4-454a-bf60-b0372f3ae653" containerID="c00d7d5aac014e820537aa9bc9904b206c40b5be7548a77332fef5cdaf859bf2" exitCode=0 Jan 27 09:11:40 crc kubenswrapper[4985]: I0127 09:11:40.562116 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-w6tww" event={"ID":"5246fdaf-1bb4-454a-bf60-b0372f3ae653","Type":"ContainerDied","Data":"c00d7d5aac014e820537aa9bc9904b206c40b5be7548a77332fef5cdaf859bf2"} Jan 27 09:11:40 crc kubenswrapper[4985]: I0127 09:11:40.562144 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-w6tww" 
event={"ID":"5246fdaf-1bb4-454a-bf60-b0372f3ae653","Type":"ContainerStarted","Data":"74dc80e6b5324fc867adc95aef5f70aefdd1a8b85371b6266386d9f341f96322"} Jan 27 09:11:40 crc kubenswrapper[4985]: I0127 09:11:40.564642 4985 generic.go:334] "Generic (PLEG): container finished" podID="dfa5488d-50d7-4f03-8187-83db05387838" containerID="5280aa4a3f1b2967de9440e3a75640468d2c10f53dbb874651e283cf10342c31" exitCode=0 Jan 27 09:11:40 crc kubenswrapper[4985]: I0127 09:11:40.564732 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-1f54-account-create-update-pdcwv" event={"ID":"dfa5488d-50d7-4f03-8187-83db05387838","Type":"ContainerDied","Data":"5280aa4a3f1b2967de9440e3a75640468d2c10f53dbb874651e283cf10342c31"} Jan 27 09:11:40 crc kubenswrapper[4985]: I0127 09:11:40.564753 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-1f54-account-create-update-pdcwv" event={"ID":"dfa5488d-50d7-4f03-8187-83db05387838","Type":"ContainerStarted","Data":"50c3c7f38daba2728d91dec80ba963eef3fa30cafa48e4177ffffb607bda51d6"} Jan 27 09:11:40 crc kubenswrapper[4985]: I0127 09:11:40.566607 4985 generic.go:334] "Generic (PLEG): container finished" podID="e3166563-f008-4d79-911a-55c399e8d65d" containerID="92250c43949659c35f14c82e2ce432bd37907b95b57df4adb4e9f523eb4434dc" exitCode=0 Jan 27 09:11:40 crc kubenswrapper[4985]: I0127 09:11:40.566645 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-d487-account-create-update-hppt6" event={"ID":"e3166563-f008-4d79-911a-55c399e8d65d","Type":"ContainerDied","Data":"92250c43949659c35f14c82e2ce432bd37907b95b57df4adb4e9f523eb4434dc"} Jan 27 09:11:40 crc kubenswrapper[4985]: I0127 09:11:40.566667 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-d487-account-create-update-hppt6" event={"ID":"e3166563-f008-4d79-911a-55c399e8d65d","Type":"ContainerStarted","Data":"3bf814c01ae905763488244508bda9af97ab4b1f769c90e73f2859998f9702f2"} Jan 27 09:11:40 crc 
kubenswrapper[4985]: I0127 09:11:40.597209 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-8473-account-create-update-p98cs" podStartSLOduration=2.597184266 podStartE2EDuration="2.597184266s" podCreationTimestamp="2026-01-27 09:11:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:11:40.595324315 +0000 UTC m=+1084.886419166" watchObservedRunningTime="2026-01-27 09:11:40.597184266 +0000 UTC m=+1084.888279107" Jan 27 09:11:41 crc kubenswrapper[4985]: I0127 09:11:41.580360 4985 generic.go:334] "Generic (PLEG): container finished" podID="d0162801-70eb-4094-b33d-3063eb978eef" containerID="7e492508355c2f0f68e2714d3c5cb6ffb387b30da8e92d1aeeae397c3c432322" exitCode=0 Jan 27 09:11:41 crc kubenswrapper[4985]: I0127 09:11:41.580504 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8473-account-create-update-p98cs" event={"ID":"d0162801-70eb-4094-b33d-3063eb978eef","Type":"ContainerDied","Data":"7e492508355c2f0f68e2714d3c5cb6ffb387b30da8e92d1aeeae397c3c432322"} Jan 27 09:11:41 crc kubenswrapper[4985]: I0127 09:11:41.599649 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"50364737-e2dc-4bd7-ba5a-97f39e232236","Type":"ContainerStarted","Data":"42d7a1334b36bdb3c1864f5e01b2e9b30b6c603a489fa698fed35e818e0a2840"} Jan 27 09:11:41 crc kubenswrapper[4985]: I0127 09:11:41.599705 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"50364737-e2dc-4bd7-ba5a-97f39e232236","Type":"ContainerStarted","Data":"d59f43d08059d80752f3c1149d6ca0ba8403c223cc5d55383b72aaa24db3d9ff"} Jan 27 09:11:41 crc kubenswrapper[4985]: I0127 09:11:41.941947 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-d487-account-create-update-hppt6" Jan 27 09:11:42 crc kubenswrapper[4985]: I0127 09:11:42.022352 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8m4q\" (UniqueName: \"kubernetes.io/projected/e3166563-f008-4d79-911a-55c399e8d65d-kube-api-access-h8m4q\") pod \"e3166563-f008-4d79-911a-55c399e8d65d\" (UID: \"e3166563-f008-4d79-911a-55c399e8d65d\") " Jan 27 09:11:42 crc kubenswrapper[4985]: I0127 09:11:42.022535 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3166563-f008-4d79-911a-55c399e8d65d-operator-scripts\") pod \"e3166563-f008-4d79-911a-55c399e8d65d\" (UID: \"e3166563-f008-4d79-911a-55c399e8d65d\") " Jan 27 09:11:42 crc kubenswrapper[4985]: I0127 09:11:42.023792 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3166563-f008-4d79-911a-55c399e8d65d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e3166563-f008-4d79-911a-55c399e8d65d" (UID: "e3166563-f008-4d79-911a-55c399e8d65d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:11:42 crc kubenswrapper[4985]: I0127 09:11:42.030991 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3166563-f008-4d79-911a-55c399e8d65d-kube-api-access-h8m4q" (OuterVolumeSpecName: "kube-api-access-h8m4q") pod "e3166563-f008-4d79-911a-55c399e8d65d" (UID: "e3166563-f008-4d79-911a-55c399e8d65d"). InnerVolumeSpecName "kube-api-access-h8m4q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:11:42 crc kubenswrapper[4985]: I0127 09:11:42.065191 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-m6p82" Jan 27 09:11:42 crc kubenswrapper[4985]: I0127 09:11:42.124831 4985 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3166563-f008-4d79-911a-55c399e8d65d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 09:11:42 crc kubenswrapper[4985]: I0127 09:11:42.124866 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8m4q\" (UniqueName: \"kubernetes.io/projected/e3166563-f008-4d79-911a-55c399e8d65d-kube-api-access-h8m4q\") on node \"crc\" DevicePath \"\"" Jan 27 09:11:42 crc kubenswrapper[4985]: I0127 09:11:42.226763 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zbmwc\" (UniqueName: \"kubernetes.io/projected/b9c783dc-2abc-4a6a-99d0-d56e5826898f-kube-api-access-zbmwc\") pod \"b9c783dc-2abc-4a6a-99d0-d56e5826898f\" (UID: \"b9c783dc-2abc-4a6a-99d0-d56e5826898f\") " Jan 27 09:11:42 crc kubenswrapper[4985]: I0127 09:11:42.226872 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b9c783dc-2abc-4a6a-99d0-d56e5826898f-operator-scripts\") pod \"b9c783dc-2abc-4a6a-99d0-d56e5826898f\" (UID: \"b9c783dc-2abc-4a6a-99d0-d56e5826898f\") " Jan 27 09:11:42 crc kubenswrapper[4985]: I0127 09:11:42.228254 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9c783dc-2abc-4a6a-99d0-d56e5826898f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b9c783dc-2abc-4a6a-99d0-d56e5826898f" (UID: "b9c783dc-2abc-4a6a-99d0-d56e5826898f"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:11:42 crc kubenswrapper[4985]: I0127 09:11:42.233524 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9c783dc-2abc-4a6a-99d0-d56e5826898f-kube-api-access-zbmwc" (OuterVolumeSpecName: "kube-api-access-zbmwc") pod "b9c783dc-2abc-4a6a-99d0-d56e5826898f" (UID: "b9c783dc-2abc-4a6a-99d0-d56e5826898f"). InnerVolumeSpecName "kube-api-access-zbmwc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:11:42 crc kubenswrapper[4985]: I0127 09:11:42.328482 4985 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b9c783dc-2abc-4a6a-99d0-d56e5826898f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 09:11:42 crc kubenswrapper[4985]: I0127 09:11:42.328537 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zbmwc\" (UniqueName: \"kubernetes.io/projected/b9c783dc-2abc-4a6a-99d0-d56e5826898f-kube-api-access-zbmwc\") on node \"crc\" DevicePath \"\"" Jan 27 09:11:42 crc kubenswrapper[4985]: I0127 09:11:42.330332 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-w6tww" Jan 27 09:11:42 crc kubenswrapper[4985]: I0127 09:11:42.336897 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-rxmmb" Jan 27 09:11:42 crc kubenswrapper[4985]: I0127 09:11:42.341444 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-1f54-account-create-update-pdcwv" Jan 27 09:11:42 crc kubenswrapper[4985]: I0127 09:11:42.430194 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c07ec733-232f-43bf-868d-d0a25592faec-operator-scripts\") pod \"c07ec733-232f-43bf-868d-d0a25592faec\" (UID: \"c07ec733-232f-43bf-868d-d0a25592faec\") " Jan 27 09:11:42 crc kubenswrapper[4985]: I0127 09:11:42.430287 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ls8cj\" (UniqueName: \"kubernetes.io/projected/dfa5488d-50d7-4f03-8187-83db05387838-kube-api-access-ls8cj\") pod \"dfa5488d-50d7-4f03-8187-83db05387838\" (UID: \"dfa5488d-50d7-4f03-8187-83db05387838\") " Jan 27 09:11:42 crc kubenswrapper[4985]: I0127 09:11:42.430312 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5246fdaf-1bb4-454a-bf60-b0372f3ae653-operator-scripts\") pod \"5246fdaf-1bb4-454a-bf60-b0372f3ae653\" (UID: \"5246fdaf-1bb4-454a-bf60-b0372f3ae653\") " Jan 27 09:11:42 crc kubenswrapper[4985]: I0127 09:11:42.430361 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tqt5f\" (UniqueName: \"kubernetes.io/projected/5246fdaf-1bb4-454a-bf60-b0372f3ae653-kube-api-access-tqt5f\") pod \"5246fdaf-1bb4-454a-bf60-b0372f3ae653\" (UID: \"5246fdaf-1bb4-454a-bf60-b0372f3ae653\") " Jan 27 09:11:42 crc kubenswrapper[4985]: I0127 09:11:42.430465 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dfa5488d-50d7-4f03-8187-83db05387838-operator-scripts\") pod \"dfa5488d-50d7-4f03-8187-83db05387838\" (UID: \"dfa5488d-50d7-4f03-8187-83db05387838\") " Jan 27 09:11:42 crc kubenswrapper[4985]: I0127 09:11:42.430488 4985 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-ch2mn\" (UniqueName: \"kubernetes.io/projected/c07ec733-232f-43bf-868d-d0a25592faec-kube-api-access-ch2mn\") pod \"c07ec733-232f-43bf-868d-d0a25592faec\" (UID: \"c07ec733-232f-43bf-868d-d0a25592faec\") " Jan 27 09:11:42 crc kubenswrapper[4985]: I0127 09:11:42.431034 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c07ec733-232f-43bf-868d-d0a25592faec-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c07ec733-232f-43bf-868d-d0a25592faec" (UID: "c07ec733-232f-43bf-868d-d0a25592faec"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:11:42 crc kubenswrapper[4985]: I0127 09:11:42.431404 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dfa5488d-50d7-4f03-8187-83db05387838-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dfa5488d-50d7-4f03-8187-83db05387838" (UID: "dfa5488d-50d7-4f03-8187-83db05387838"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:11:42 crc kubenswrapper[4985]: I0127 09:11:42.431564 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5246fdaf-1bb4-454a-bf60-b0372f3ae653-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5246fdaf-1bb4-454a-bf60-b0372f3ae653" (UID: "5246fdaf-1bb4-454a-bf60-b0372f3ae653"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:11:42 crc kubenswrapper[4985]: I0127 09:11:42.446932 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5246fdaf-1bb4-454a-bf60-b0372f3ae653-kube-api-access-tqt5f" (OuterVolumeSpecName: "kube-api-access-tqt5f") pod "5246fdaf-1bb4-454a-bf60-b0372f3ae653" (UID: "5246fdaf-1bb4-454a-bf60-b0372f3ae653"). 
InnerVolumeSpecName "kube-api-access-tqt5f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:11:42 crc kubenswrapper[4985]: I0127 09:11:42.447014 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfa5488d-50d7-4f03-8187-83db05387838-kube-api-access-ls8cj" (OuterVolumeSpecName: "kube-api-access-ls8cj") pod "dfa5488d-50d7-4f03-8187-83db05387838" (UID: "dfa5488d-50d7-4f03-8187-83db05387838"). InnerVolumeSpecName "kube-api-access-ls8cj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:11:42 crc kubenswrapper[4985]: I0127 09:11:42.447035 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c07ec733-232f-43bf-868d-d0a25592faec-kube-api-access-ch2mn" (OuterVolumeSpecName: "kube-api-access-ch2mn") pod "c07ec733-232f-43bf-868d-d0a25592faec" (UID: "c07ec733-232f-43bf-868d-d0a25592faec"). InnerVolumeSpecName "kube-api-access-ch2mn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:11:42 crc kubenswrapper[4985]: I0127 09:11:42.532459 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ls8cj\" (UniqueName: \"kubernetes.io/projected/dfa5488d-50d7-4f03-8187-83db05387838-kube-api-access-ls8cj\") on node \"crc\" DevicePath \"\"" Jan 27 09:11:42 crc kubenswrapper[4985]: I0127 09:11:42.532494 4985 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5246fdaf-1bb4-454a-bf60-b0372f3ae653-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 09:11:42 crc kubenswrapper[4985]: I0127 09:11:42.532532 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tqt5f\" (UniqueName: \"kubernetes.io/projected/5246fdaf-1bb4-454a-bf60-b0372f3ae653-kube-api-access-tqt5f\") on node \"crc\" DevicePath \"\"" Jan 27 09:11:42 crc kubenswrapper[4985]: I0127 09:11:42.532544 4985 reconciler_common.go:293] "Volume detached for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dfa5488d-50d7-4f03-8187-83db05387838-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 09:11:42 crc kubenswrapper[4985]: I0127 09:11:42.532554 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ch2mn\" (UniqueName: \"kubernetes.io/projected/c07ec733-232f-43bf-868d-d0a25592faec-kube-api-access-ch2mn\") on node \"crc\" DevicePath \"\"" Jan 27 09:11:42 crc kubenswrapper[4985]: I0127 09:11:42.532565 4985 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c07ec733-232f-43bf-868d-d0a25592faec-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 09:11:42 crc kubenswrapper[4985]: I0127 09:11:42.615729 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-1f54-account-create-update-pdcwv" event={"ID":"dfa5488d-50d7-4f03-8187-83db05387838","Type":"ContainerDied","Data":"50c3c7f38daba2728d91dec80ba963eef3fa30cafa48e4177ffffb607bda51d6"} Jan 27 09:11:42 crc kubenswrapper[4985]: I0127 09:11:42.615848 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50c3c7f38daba2728d91dec80ba963eef3fa30cafa48e4177ffffb607bda51d6" Jan 27 09:11:42 crc kubenswrapper[4985]: I0127 09:11:42.616149 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-1f54-account-create-update-pdcwv" Jan 27 09:11:42 crc kubenswrapper[4985]: I0127 09:11:42.650261 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"50364737-e2dc-4bd7-ba5a-97f39e232236","Type":"ContainerStarted","Data":"1f54ac401f0867b8bf1df8009f1f2df6a4f6320026c5db4291f090245373dc4a"} Jan 27 09:11:42 crc kubenswrapper[4985]: I0127 09:11:42.650323 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"50364737-e2dc-4bd7-ba5a-97f39e232236","Type":"ContainerStarted","Data":"31ca56eb684db7cc86ea310d70fe27d5d2020bfd800e4fd78a5a99e32faf20bd"} Jan 27 09:11:42 crc kubenswrapper[4985]: I0127 09:11:42.654961 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-d487-account-create-update-hppt6" event={"ID":"e3166563-f008-4d79-911a-55c399e8d65d","Type":"ContainerDied","Data":"3bf814c01ae905763488244508bda9af97ab4b1f769c90e73f2859998f9702f2"} Jan 27 09:11:42 crc kubenswrapper[4985]: I0127 09:11:42.655013 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3bf814c01ae905763488244508bda9af97ab4b1f769c90e73f2859998f9702f2" Jan 27 09:11:42 crc kubenswrapper[4985]: I0127 09:11:42.655098 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-d487-account-create-update-hppt6" Jan 27 09:11:42 crc kubenswrapper[4985]: I0127 09:11:42.660299 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-m6p82" event={"ID":"b9c783dc-2abc-4a6a-99d0-d56e5826898f","Type":"ContainerDied","Data":"6a417cd82c949f903050f1a9c113e040c50e7742c91471bb06764629e7d0cfcc"} Jan 27 09:11:42 crc kubenswrapper[4985]: I0127 09:11:42.660364 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a417cd82c949f903050f1a9c113e040c50e7742c91471bb06764629e7d0cfcc" Jan 27 09:11:42 crc kubenswrapper[4985]: I0127 09:11:42.660409 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-m6p82" Jan 27 09:11:42 crc kubenswrapper[4985]: I0127 09:11:42.664502 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-w6tww" event={"ID":"5246fdaf-1bb4-454a-bf60-b0372f3ae653","Type":"ContainerDied","Data":"74dc80e6b5324fc867adc95aef5f70aefdd1a8b85371b6266386d9f341f96322"} Jan 27 09:11:42 crc kubenswrapper[4985]: I0127 09:11:42.664561 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74dc80e6b5324fc867adc95aef5f70aefdd1a8b85371b6266386d9f341f96322" Jan 27 09:11:42 crc kubenswrapper[4985]: I0127 09:11:42.664621 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-w6tww" Jan 27 09:11:42 crc kubenswrapper[4985]: I0127 09:11:42.682628 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-rxmmb" Jan 27 09:11:42 crc kubenswrapper[4985]: I0127 09:11:42.683971 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-rxmmb" event={"ID":"c07ec733-232f-43bf-868d-d0a25592faec","Type":"ContainerDied","Data":"81e8adac442bc69517f432bdc504181c94eb0f2e7c7af8ae2d69464ef1f217e1"} Jan 27 09:11:42 crc kubenswrapper[4985]: I0127 09:11:42.684019 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81e8adac442bc69517f432bdc504181c94eb0f2e7c7af8ae2d69464ef1f217e1" Jan 27 09:11:46 crc kubenswrapper[4985]: I0127 09:11:46.320311 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8473-account-create-update-p98cs" Jan 27 09:11:46 crc kubenswrapper[4985]: I0127 09:11:46.428338 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-whfn5\" (UniqueName: \"kubernetes.io/projected/d0162801-70eb-4094-b33d-3063eb978eef-kube-api-access-whfn5\") pod \"d0162801-70eb-4094-b33d-3063eb978eef\" (UID: \"d0162801-70eb-4094-b33d-3063eb978eef\") " Jan 27 09:11:46 crc kubenswrapper[4985]: I0127 09:11:46.428441 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0162801-70eb-4094-b33d-3063eb978eef-operator-scripts\") pod \"d0162801-70eb-4094-b33d-3063eb978eef\" (UID: \"d0162801-70eb-4094-b33d-3063eb978eef\") " Jan 27 09:11:46 crc kubenswrapper[4985]: I0127 09:11:46.429723 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0162801-70eb-4094-b33d-3063eb978eef-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d0162801-70eb-4094-b33d-3063eb978eef" (UID: "d0162801-70eb-4094-b33d-3063eb978eef"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:11:46 crc kubenswrapper[4985]: I0127 09:11:46.433097 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0162801-70eb-4094-b33d-3063eb978eef-kube-api-access-whfn5" (OuterVolumeSpecName: "kube-api-access-whfn5") pod "d0162801-70eb-4094-b33d-3063eb978eef" (UID: "d0162801-70eb-4094-b33d-3063eb978eef"). InnerVolumeSpecName "kube-api-access-whfn5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:11:46 crc kubenswrapper[4985]: I0127 09:11:46.530899 4985 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0162801-70eb-4094-b33d-3063eb978eef-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 09:11:46 crc kubenswrapper[4985]: I0127 09:11:46.530954 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-whfn5\" (UniqueName: \"kubernetes.io/projected/d0162801-70eb-4094-b33d-3063eb978eef-kube-api-access-whfn5\") on node \"crc\" DevicePath \"\"" Jan 27 09:11:46 crc kubenswrapper[4985]: I0127 09:11:46.725348 4985 generic.go:334] "Generic (PLEG): container finished" podID="eb511c2d-40ad-47b6-a515-1101d1ff3b5f" containerID="0885d30dadcc106099923fc0987e04782af9e54baa246194b717ec0e98ac9ba7" exitCode=0 Jan 27 09:11:46 crc kubenswrapper[4985]: I0127 09:11:46.725466 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-tsdq2" event={"ID":"eb511c2d-40ad-47b6-a515-1101d1ff3b5f","Type":"ContainerDied","Data":"0885d30dadcc106099923fc0987e04782af9e54baa246194b717ec0e98ac9ba7"} Jan 27 09:11:46 crc kubenswrapper[4985]: I0127 09:11:46.732398 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"50364737-e2dc-4bd7-ba5a-97f39e232236","Type":"ContainerStarted","Data":"03acb16f09dc177a2207d119a59f88c68732c84e7ee8746dc92b6e596023afd4"} Jan 27 09:11:46 crc kubenswrapper[4985]: I0127 09:11:46.732449 
4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"50364737-e2dc-4bd7-ba5a-97f39e232236","Type":"ContainerStarted","Data":"8c3b99d5c40d05a9e416f3049ecfe5f4b6b719429fb121a2cac6248144bb0c43"} Jan 27 09:11:46 crc kubenswrapper[4985]: I0127 09:11:46.737648 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-jw5fd" event={"ID":"933bd11f-35db-489e-aacf-b2ba95de3154","Type":"ContainerStarted","Data":"c1b73e907b51abb61a6365bb340ea6d60923dbba468110ea419cce35020a2db2"} Jan 27 09:11:46 crc kubenswrapper[4985]: I0127 09:11:46.747211 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8473-account-create-update-p98cs" event={"ID":"d0162801-70eb-4094-b33d-3063eb978eef","Type":"ContainerDied","Data":"51e7385b73e4499a5684f70c265506e45844d99330507f64c1ea642b6b6bacbe"} Jan 27 09:11:46 crc kubenswrapper[4985]: I0127 09:11:46.747281 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51e7385b73e4499a5684f70c265506e45844d99330507f64c1ea642b6b6bacbe" Jan 27 09:11:46 crc kubenswrapper[4985]: I0127 09:11:46.747397 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-8473-account-create-update-p98cs" Jan 27 09:11:46 crc kubenswrapper[4985]: I0127 09:11:46.767444 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-jw5fd" podStartSLOduration=2.639223948 podStartE2EDuration="8.767425865s" podCreationTimestamp="2026-01-27 09:11:38 +0000 UTC" firstStartedPulling="2026-01-27 09:11:39.855505479 +0000 UTC m=+1084.146600320" lastFinishedPulling="2026-01-27 09:11:45.983707396 +0000 UTC m=+1090.274802237" observedRunningTime="2026-01-27 09:11:46.764335 +0000 UTC m=+1091.055429841" watchObservedRunningTime="2026-01-27 09:11:46.767425865 +0000 UTC m=+1091.058520696" Jan 27 09:11:47 crc kubenswrapper[4985]: I0127 09:11:47.768963 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"50364737-e2dc-4bd7-ba5a-97f39e232236","Type":"ContainerStarted","Data":"c99301733121f72625f32e175e5b622016643d4c3280032df99fc7d2c3056c4f"} Jan 27 09:11:47 crc kubenswrapper[4985]: I0127 09:11:47.769404 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"50364737-e2dc-4bd7-ba5a-97f39e232236","Type":"ContainerStarted","Data":"00578276762c6cb330cf1c70f29115d7049165a702a4da38d14c553041c6f437"} Jan 27 09:11:47 crc kubenswrapper[4985]: I0127 09:11:47.769436 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"50364737-e2dc-4bd7-ba5a-97f39e232236","Type":"ContainerStarted","Data":"425bc03ce5b95223b8c11aa2b536250d452b47139b6911049a312ad3089d1e6b"} Jan 27 09:11:47 crc kubenswrapper[4985]: I0127 09:11:47.769452 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"50364737-e2dc-4bd7-ba5a-97f39e232236","Type":"ContainerStarted","Data":"8d61f2cf6fb47fd4bac6aab5c5bcd258b5c8124d68aac856f2cbfe73f6e0c8a4"} Jan 27 09:11:47 crc kubenswrapper[4985]: I0127 09:11:47.769461 4985 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"50364737-e2dc-4bd7-ba5a-97f39e232236","Type":"ContainerStarted","Data":"f08e076d8372bc415ef61c32e4de1603579d3203283956ba383c50850336e36b"} Jan 27 09:11:48 crc kubenswrapper[4985]: I0127 09:11:48.093293 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=36.064927484 podStartE2EDuration="45.093269229s" podCreationTimestamp="2026-01-27 09:11:03 +0000 UTC" firstStartedPulling="2026-01-27 09:11:36.94805475 +0000 UTC m=+1081.239149591" lastFinishedPulling="2026-01-27 09:11:45.976396495 +0000 UTC m=+1090.267491336" observedRunningTime="2026-01-27 09:11:47.823387883 +0000 UTC m=+1092.114482754" watchObservedRunningTime="2026-01-27 09:11:48.093269229 +0000 UTC m=+1092.384364070" Jan 27 09:11:48 crc kubenswrapper[4985]: I0127 09:11:48.102625 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8db84466c-d9wbs"] Jan 27 09:11:48 crc kubenswrapper[4985]: E0127 09:11:48.103899 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3166563-f008-4d79-911a-55c399e8d65d" containerName="mariadb-account-create-update" Jan 27 09:11:48 crc kubenswrapper[4985]: I0127 09:11:48.103926 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3166563-f008-4d79-911a-55c399e8d65d" containerName="mariadb-account-create-update" Jan 27 09:11:48 crc kubenswrapper[4985]: E0127 09:11:48.103943 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5246fdaf-1bb4-454a-bf60-b0372f3ae653" containerName="mariadb-database-create" Jan 27 09:11:48 crc kubenswrapper[4985]: I0127 09:11:48.103951 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="5246fdaf-1bb4-454a-bf60-b0372f3ae653" containerName="mariadb-database-create" Jan 27 09:11:48 crc kubenswrapper[4985]: E0127 09:11:48.103963 4985 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="c07ec733-232f-43bf-868d-d0a25592faec" containerName="mariadb-database-create" Jan 27 09:11:48 crc kubenswrapper[4985]: I0127 09:11:48.103971 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="c07ec733-232f-43bf-868d-d0a25592faec" containerName="mariadb-database-create" Jan 27 09:11:48 crc kubenswrapper[4985]: E0127 09:11:48.103986 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9c783dc-2abc-4a6a-99d0-d56e5826898f" containerName="mariadb-database-create" Jan 27 09:11:48 crc kubenswrapper[4985]: I0127 09:11:48.103993 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9c783dc-2abc-4a6a-99d0-d56e5826898f" containerName="mariadb-database-create" Jan 27 09:11:48 crc kubenswrapper[4985]: E0127 09:11:48.104016 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0162801-70eb-4094-b33d-3063eb978eef" containerName="mariadb-account-create-update" Jan 27 09:11:48 crc kubenswrapper[4985]: I0127 09:11:48.104024 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0162801-70eb-4094-b33d-3063eb978eef" containerName="mariadb-account-create-update" Jan 27 09:11:48 crc kubenswrapper[4985]: E0127 09:11:48.104041 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfa5488d-50d7-4f03-8187-83db05387838" containerName="mariadb-account-create-update" Jan 27 09:11:48 crc kubenswrapper[4985]: I0127 09:11:48.104048 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfa5488d-50d7-4f03-8187-83db05387838" containerName="mariadb-account-create-update" Jan 27 09:11:48 crc kubenswrapper[4985]: I0127 09:11:48.104262 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="5246fdaf-1bb4-454a-bf60-b0372f3ae653" containerName="mariadb-database-create" Jan 27 09:11:48 crc kubenswrapper[4985]: I0127 09:11:48.104274 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfa5488d-50d7-4f03-8187-83db05387838" containerName="mariadb-account-create-update" Jan 27 09:11:48 crc 
kubenswrapper[4985]: I0127 09:11:48.104289 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3166563-f008-4d79-911a-55c399e8d65d" containerName="mariadb-account-create-update" Jan 27 09:11:48 crc kubenswrapper[4985]: I0127 09:11:48.104308 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="c07ec733-232f-43bf-868d-d0a25592faec" containerName="mariadb-database-create" Jan 27 09:11:48 crc kubenswrapper[4985]: I0127 09:11:48.104323 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0162801-70eb-4094-b33d-3063eb978eef" containerName="mariadb-account-create-update" Jan 27 09:11:48 crc kubenswrapper[4985]: I0127 09:11:48.104336 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9c783dc-2abc-4a6a-99d0-d56e5826898f" containerName="mariadb-database-create" Jan 27 09:11:48 crc kubenswrapper[4985]: I0127 09:11:48.108730 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8db84466c-d9wbs" Jan 27 09:11:48 crc kubenswrapper[4985]: I0127 09:11:48.113803 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 27 09:11:48 crc kubenswrapper[4985]: I0127 09:11:48.119930 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8db84466c-d9wbs"] Jan 27 09:11:48 crc kubenswrapper[4985]: I0127 09:11:48.178802 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/190bdfc7-f0d3-4339-9048-2bf40f7d19e7-ovsdbserver-sb\") pod \"dnsmasq-dns-8db84466c-d9wbs\" (UID: \"190bdfc7-f0d3-4339-9048-2bf40f7d19e7\") " pod="openstack/dnsmasq-dns-8db84466c-d9wbs" Jan 27 09:11:48 crc kubenswrapper[4985]: I0127 09:11:48.178858 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/190bdfc7-f0d3-4339-9048-2bf40f7d19e7-dns-swift-storage-0\") pod \"dnsmasq-dns-8db84466c-d9wbs\" (UID: \"190bdfc7-f0d3-4339-9048-2bf40f7d19e7\") " pod="openstack/dnsmasq-dns-8db84466c-d9wbs" Jan 27 09:11:48 crc kubenswrapper[4985]: I0127 09:11:48.178937 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/190bdfc7-f0d3-4339-9048-2bf40f7d19e7-ovsdbserver-nb\") pod \"dnsmasq-dns-8db84466c-d9wbs\" (UID: \"190bdfc7-f0d3-4339-9048-2bf40f7d19e7\") " pod="openstack/dnsmasq-dns-8db84466c-d9wbs" Jan 27 09:11:48 crc kubenswrapper[4985]: I0127 09:11:48.179016 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/190bdfc7-f0d3-4339-9048-2bf40f7d19e7-dns-svc\") pod \"dnsmasq-dns-8db84466c-d9wbs\" (UID: \"190bdfc7-f0d3-4339-9048-2bf40f7d19e7\") " pod="openstack/dnsmasq-dns-8db84466c-d9wbs" Jan 27 09:11:48 crc kubenswrapper[4985]: I0127 09:11:48.179153 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/190bdfc7-f0d3-4339-9048-2bf40f7d19e7-config\") pod \"dnsmasq-dns-8db84466c-d9wbs\" (UID: \"190bdfc7-f0d3-4339-9048-2bf40f7d19e7\") " pod="openstack/dnsmasq-dns-8db84466c-d9wbs" Jan 27 09:11:48 crc kubenswrapper[4985]: I0127 09:11:48.179301 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvj7b\" (UniqueName: \"kubernetes.io/projected/190bdfc7-f0d3-4339-9048-2bf40f7d19e7-kube-api-access-fvj7b\") pod \"dnsmasq-dns-8db84466c-d9wbs\" (UID: \"190bdfc7-f0d3-4339-9048-2bf40f7d19e7\") " pod="openstack/dnsmasq-dns-8db84466c-d9wbs" Jan 27 09:11:48 crc kubenswrapper[4985]: I0127 09:11:48.192148 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-tsdq2" Jan 27 09:11:48 crc kubenswrapper[4985]: I0127 09:11:48.280080 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/eb511c2d-40ad-47b6-a515-1101d1ff3b5f-db-sync-config-data\") pod \"eb511c2d-40ad-47b6-a515-1101d1ff3b5f\" (UID: \"eb511c2d-40ad-47b6-a515-1101d1ff3b5f\") " Jan 27 09:11:48 crc kubenswrapper[4985]: I0127 09:11:48.280158 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb511c2d-40ad-47b6-a515-1101d1ff3b5f-combined-ca-bundle\") pod \"eb511c2d-40ad-47b6-a515-1101d1ff3b5f\" (UID: \"eb511c2d-40ad-47b6-a515-1101d1ff3b5f\") " Jan 27 09:11:48 crc kubenswrapper[4985]: I0127 09:11:48.280226 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwmc4\" (UniqueName: \"kubernetes.io/projected/eb511c2d-40ad-47b6-a515-1101d1ff3b5f-kube-api-access-fwmc4\") pod \"eb511c2d-40ad-47b6-a515-1101d1ff3b5f\" (UID: \"eb511c2d-40ad-47b6-a515-1101d1ff3b5f\") " Jan 27 09:11:48 crc kubenswrapper[4985]: I0127 09:11:48.280395 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb511c2d-40ad-47b6-a515-1101d1ff3b5f-config-data\") pod \"eb511c2d-40ad-47b6-a515-1101d1ff3b5f\" (UID: \"eb511c2d-40ad-47b6-a515-1101d1ff3b5f\") " Jan 27 09:11:48 crc kubenswrapper[4985]: I0127 09:11:48.280629 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/190bdfc7-f0d3-4339-9048-2bf40f7d19e7-config\") pod \"dnsmasq-dns-8db84466c-d9wbs\" (UID: \"190bdfc7-f0d3-4339-9048-2bf40f7d19e7\") " pod="openstack/dnsmasq-dns-8db84466c-d9wbs" Jan 27 09:11:48 crc kubenswrapper[4985]: I0127 09:11:48.280689 4985 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-fvj7b\" (UniqueName: \"kubernetes.io/projected/190bdfc7-f0d3-4339-9048-2bf40f7d19e7-kube-api-access-fvj7b\") pod \"dnsmasq-dns-8db84466c-d9wbs\" (UID: \"190bdfc7-f0d3-4339-9048-2bf40f7d19e7\") " pod="openstack/dnsmasq-dns-8db84466c-d9wbs" Jan 27 09:11:48 crc kubenswrapper[4985]: I0127 09:11:48.280767 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/190bdfc7-f0d3-4339-9048-2bf40f7d19e7-ovsdbserver-sb\") pod \"dnsmasq-dns-8db84466c-d9wbs\" (UID: \"190bdfc7-f0d3-4339-9048-2bf40f7d19e7\") " pod="openstack/dnsmasq-dns-8db84466c-d9wbs" Jan 27 09:11:48 crc kubenswrapper[4985]: I0127 09:11:48.280802 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/190bdfc7-f0d3-4339-9048-2bf40f7d19e7-dns-swift-storage-0\") pod \"dnsmasq-dns-8db84466c-d9wbs\" (UID: \"190bdfc7-f0d3-4339-9048-2bf40f7d19e7\") " pod="openstack/dnsmasq-dns-8db84466c-d9wbs" Jan 27 09:11:48 crc kubenswrapper[4985]: I0127 09:11:48.280828 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/190bdfc7-f0d3-4339-9048-2bf40f7d19e7-ovsdbserver-nb\") pod \"dnsmasq-dns-8db84466c-d9wbs\" (UID: \"190bdfc7-f0d3-4339-9048-2bf40f7d19e7\") " pod="openstack/dnsmasq-dns-8db84466c-d9wbs" Jan 27 09:11:48 crc kubenswrapper[4985]: I0127 09:11:48.280860 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/190bdfc7-f0d3-4339-9048-2bf40f7d19e7-dns-svc\") pod \"dnsmasq-dns-8db84466c-d9wbs\" (UID: \"190bdfc7-f0d3-4339-9048-2bf40f7d19e7\") " pod="openstack/dnsmasq-dns-8db84466c-d9wbs" Jan 27 09:11:48 crc kubenswrapper[4985]: I0127 09:11:48.281868 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/190bdfc7-f0d3-4339-9048-2bf40f7d19e7-dns-svc\") pod \"dnsmasq-dns-8db84466c-d9wbs\" (UID: \"190bdfc7-f0d3-4339-9048-2bf40f7d19e7\") " pod="openstack/dnsmasq-dns-8db84466c-d9wbs" Jan 27 09:11:48 crc kubenswrapper[4985]: I0127 09:11:48.282480 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/190bdfc7-f0d3-4339-9048-2bf40f7d19e7-ovsdbserver-sb\") pod \"dnsmasq-dns-8db84466c-d9wbs\" (UID: \"190bdfc7-f0d3-4339-9048-2bf40f7d19e7\") " pod="openstack/dnsmasq-dns-8db84466c-d9wbs" Jan 27 09:11:48 crc kubenswrapper[4985]: I0127 09:11:48.283110 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/190bdfc7-f0d3-4339-9048-2bf40f7d19e7-dns-swift-storage-0\") pod \"dnsmasq-dns-8db84466c-d9wbs\" (UID: \"190bdfc7-f0d3-4339-9048-2bf40f7d19e7\") " pod="openstack/dnsmasq-dns-8db84466c-d9wbs" Jan 27 09:11:48 crc kubenswrapper[4985]: I0127 09:11:48.283600 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/190bdfc7-f0d3-4339-9048-2bf40f7d19e7-ovsdbserver-nb\") pod \"dnsmasq-dns-8db84466c-d9wbs\" (UID: \"190bdfc7-f0d3-4339-9048-2bf40f7d19e7\") " pod="openstack/dnsmasq-dns-8db84466c-d9wbs" Jan 27 09:11:48 crc kubenswrapper[4985]: I0127 09:11:48.283797 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/190bdfc7-f0d3-4339-9048-2bf40f7d19e7-config\") pod \"dnsmasq-dns-8db84466c-d9wbs\" (UID: \"190bdfc7-f0d3-4339-9048-2bf40f7d19e7\") " pod="openstack/dnsmasq-dns-8db84466c-d9wbs" Jan 27 09:11:48 crc kubenswrapper[4985]: I0127 09:11:48.290267 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb511c2d-40ad-47b6-a515-1101d1ff3b5f-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod 
"eb511c2d-40ad-47b6-a515-1101d1ff3b5f" (UID: "eb511c2d-40ad-47b6-a515-1101d1ff3b5f"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:11:48 crc kubenswrapper[4985]: I0127 09:11:48.299049 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb511c2d-40ad-47b6-a515-1101d1ff3b5f-kube-api-access-fwmc4" (OuterVolumeSpecName: "kube-api-access-fwmc4") pod "eb511c2d-40ad-47b6-a515-1101d1ff3b5f" (UID: "eb511c2d-40ad-47b6-a515-1101d1ff3b5f"). InnerVolumeSpecName "kube-api-access-fwmc4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:11:48 crc kubenswrapper[4985]: I0127 09:11:48.301874 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvj7b\" (UniqueName: \"kubernetes.io/projected/190bdfc7-f0d3-4339-9048-2bf40f7d19e7-kube-api-access-fvj7b\") pod \"dnsmasq-dns-8db84466c-d9wbs\" (UID: \"190bdfc7-f0d3-4339-9048-2bf40f7d19e7\") " pod="openstack/dnsmasq-dns-8db84466c-d9wbs" Jan 27 09:11:48 crc kubenswrapper[4985]: I0127 09:11:48.327460 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb511c2d-40ad-47b6-a515-1101d1ff3b5f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eb511c2d-40ad-47b6-a515-1101d1ff3b5f" (UID: "eb511c2d-40ad-47b6-a515-1101d1ff3b5f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:11:48 crc kubenswrapper[4985]: I0127 09:11:48.364744 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb511c2d-40ad-47b6-a515-1101d1ff3b5f-config-data" (OuterVolumeSpecName: "config-data") pod "eb511c2d-40ad-47b6-a515-1101d1ff3b5f" (UID: "eb511c2d-40ad-47b6-a515-1101d1ff3b5f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:11:48 crc kubenswrapper[4985]: I0127 09:11:48.382774 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwmc4\" (UniqueName: \"kubernetes.io/projected/eb511c2d-40ad-47b6-a515-1101d1ff3b5f-kube-api-access-fwmc4\") on node \"crc\" DevicePath \"\"" Jan 27 09:11:48 crc kubenswrapper[4985]: I0127 09:11:48.382833 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb511c2d-40ad-47b6-a515-1101d1ff3b5f-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 09:11:48 crc kubenswrapper[4985]: I0127 09:11:48.382845 4985 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/eb511c2d-40ad-47b6-a515-1101d1ff3b5f-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 09:11:48 crc kubenswrapper[4985]: I0127 09:11:48.382854 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb511c2d-40ad-47b6-a515-1101d1ff3b5f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:11:48 crc kubenswrapper[4985]: I0127 09:11:48.486901 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8db84466c-d9wbs" Jan 27 09:11:48 crc kubenswrapper[4985]: I0127 09:11:48.786852 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-tsdq2" Jan 27 09:11:48 crc kubenswrapper[4985]: I0127 09:11:48.786863 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-tsdq2" event={"ID":"eb511c2d-40ad-47b6-a515-1101d1ff3b5f","Type":"ContainerDied","Data":"e6f93cefeb8e24d8d3b24c72f6e9851dd0484e4a211e340d9386fc850c764334"} Jan 27 09:11:48 crc kubenswrapper[4985]: I0127 09:11:48.786938 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6f93cefeb8e24d8d3b24c72f6e9851dd0484e4a211e340d9386fc850c764334" Jan 27 09:11:48 crc kubenswrapper[4985]: I0127 09:11:48.809908 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8db84466c-d9wbs"] Jan 27 09:11:49 crc kubenswrapper[4985]: I0127 09:11:49.166394 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8db84466c-d9wbs"] Jan 27 09:11:49 crc kubenswrapper[4985]: I0127 09:11:49.204813 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74dfc89d77-kt62t"] Jan 27 09:11:49 crc kubenswrapper[4985]: E0127 09:11:49.205244 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb511c2d-40ad-47b6-a515-1101d1ff3b5f" containerName="glance-db-sync" Jan 27 09:11:49 crc kubenswrapper[4985]: I0127 09:11:49.205270 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb511c2d-40ad-47b6-a515-1101d1ff3b5f" containerName="glance-db-sync" Jan 27 09:11:49 crc kubenswrapper[4985]: I0127 09:11:49.205533 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb511c2d-40ad-47b6-a515-1101d1ff3b5f" containerName="glance-db-sync" Jan 27 09:11:49 crc kubenswrapper[4985]: I0127 09:11:49.206649 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74dfc89d77-kt62t" Jan 27 09:11:49 crc kubenswrapper[4985]: I0127 09:11:49.218322 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74dfc89d77-kt62t"] Jan 27 09:11:49 crc kubenswrapper[4985]: I0127 09:11:49.315743 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f5e2b3c-566f-48ed-974d-b517c4a98596-ovsdbserver-sb\") pod \"dnsmasq-dns-74dfc89d77-kt62t\" (UID: \"7f5e2b3c-566f-48ed-974d-b517c4a98596\") " pod="openstack/dnsmasq-dns-74dfc89d77-kt62t" Jan 27 09:11:49 crc kubenswrapper[4985]: I0127 09:11:49.315853 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmvkd\" (UniqueName: \"kubernetes.io/projected/7f5e2b3c-566f-48ed-974d-b517c4a98596-kube-api-access-gmvkd\") pod \"dnsmasq-dns-74dfc89d77-kt62t\" (UID: \"7f5e2b3c-566f-48ed-974d-b517c4a98596\") " pod="openstack/dnsmasq-dns-74dfc89d77-kt62t" Jan 27 09:11:49 crc kubenswrapper[4985]: I0127 09:11:49.315903 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f5e2b3c-566f-48ed-974d-b517c4a98596-config\") pod \"dnsmasq-dns-74dfc89d77-kt62t\" (UID: \"7f5e2b3c-566f-48ed-974d-b517c4a98596\") " pod="openstack/dnsmasq-dns-74dfc89d77-kt62t" Jan 27 09:11:49 crc kubenswrapper[4985]: I0127 09:11:49.315929 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f5e2b3c-566f-48ed-974d-b517c4a98596-dns-svc\") pod \"dnsmasq-dns-74dfc89d77-kt62t\" (UID: \"7f5e2b3c-566f-48ed-974d-b517c4a98596\") " pod="openstack/dnsmasq-dns-74dfc89d77-kt62t" Jan 27 09:11:49 crc kubenswrapper[4985]: I0127 09:11:49.315952 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7f5e2b3c-566f-48ed-974d-b517c4a98596-dns-swift-storage-0\") pod \"dnsmasq-dns-74dfc89d77-kt62t\" (UID: \"7f5e2b3c-566f-48ed-974d-b517c4a98596\") " pod="openstack/dnsmasq-dns-74dfc89d77-kt62t" Jan 27 09:11:49 crc kubenswrapper[4985]: I0127 09:11:49.315987 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f5e2b3c-566f-48ed-974d-b517c4a98596-ovsdbserver-nb\") pod \"dnsmasq-dns-74dfc89d77-kt62t\" (UID: \"7f5e2b3c-566f-48ed-974d-b517c4a98596\") " pod="openstack/dnsmasq-dns-74dfc89d77-kt62t" Jan 27 09:11:49 crc kubenswrapper[4985]: I0127 09:11:49.417583 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f5e2b3c-566f-48ed-974d-b517c4a98596-ovsdbserver-sb\") pod \"dnsmasq-dns-74dfc89d77-kt62t\" (UID: \"7f5e2b3c-566f-48ed-974d-b517c4a98596\") " pod="openstack/dnsmasq-dns-74dfc89d77-kt62t" Jan 27 09:11:49 crc kubenswrapper[4985]: I0127 09:11:49.417665 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmvkd\" (UniqueName: \"kubernetes.io/projected/7f5e2b3c-566f-48ed-974d-b517c4a98596-kube-api-access-gmvkd\") pod \"dnsmasq-dns-74dfc89d77-kt62t\" (UID: \"7f5e2b3c-566f-48ed-974d-b517c4a98596\") " pod="openstack/dnsmasq-dns-74dfc89d77-kt62t" Jan 27 09:11:49 crc kubenswrapper[4985]: I0127 09:11:49.417703 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f5e2b3c-566f-48ed-974d-b517c4a98596-config\") pod \"dnsmasq-dns-74dfc89d77-kt62t\" (UID: \"7f5e2b3c-566f-48ed-974d-b517c4a98596\") " pod="openstack/dnsmasq-dns-74dfc89d77-kt62t" Jan 27 09:11:49 crc kubenswrapper[4985]: I0127 09:11:49.417726 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f5e2b3c-566f-48ed-974d-b517c4a98596-dns-svc\") pod \"dnsmasq-dns-74dfc89d77-kt62t\" (UID: \"7f5e2b3c-566f-48ed-974d-b517c4a98596\") " pod="openstack/dnsmasq-dns-74dfc89d77-kt62t" Jan 27 09:11:49 crc kubenswrapper[4985]: I0127 09:11:49.417746 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7f5e2b3c-566f-48ed-974d-b517c4a98596-dns-swift-storage-0\") pod \"dnsmasq-dns-74dfc89d77-kt62t\" (UID: \"7f5e2b3c-566f-48ed-974d-b517c4a98596\") " pod="openstack/dnsmasq-dns-74dfc89d77-kt62t" Jan 27 09:11:49 crc kubenswrapper[4985]: I0127 09:11:49.417778 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f5e2b3c-566f-48ed-974d-b517c4a98596-ovsdbserver-nb\") pod \"dnsmasq-dns-74dfc89d77-kt62t\" (UID: \"7f5e2b3c-566f-48ed-974d-b517c4a98596\") " pod="openstack/dnsmasq-dns-74dfc89d77-kt62t" Jan 27 09:11:49 crc kubenswrapper[4985]: I0127 09:11:49.418752 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f5e2b3c-566f-48ed-974d-b517c4a98596-ovsdbserver-nb\") pod \"dnsmasq-dns-74dfc89d77-kt62t\" (UID: \"7f5e2b3c-566f-48ed-974d-b517c4a98596\") " pod="openstack/dnsmasq-dns-74dfc89d77-kt62t" Jan 27 09:11:49 crc kubenswrapper[4985]: I0127 09:11:49.419314 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f5e2b3c-566f-48ed-974d-b517c4a98596-ovsdbserver-sb\") pod \"dnsmasq-dns-74dfc89d77-kt62t\" (UID: \"7f5e2b3c-566f-48ed-974d-b517c4a98596\") " pod="openstack/dnsmasq-dns-74dfc89d77-kt62t" Jan 27 09:11:49 crc kubenswrapper[4985]: I0127 09:11:49.420189 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7f5e2b3c-566f-48ed-974d-b517c4a98596-config\") pod \"dnsmasq-dns-74dfc89d77-kt62t\" (UID: \"7f5e2b3c-566f-48ed-974d-b517c4a98596\") " pod="openstack/dnsmasq-dns-74dfc89d77-kt62t" Jan 27 09:11:49 crc kubenswrapper[4985]: I0127 09:11:49.420950 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f5e2b3c-566f-48ed-974d-b517c4a98596-dns-svc\") pod \"dnsmasq-dns-74dfc89d77-kt62t\" (UID: \"7f5e2b3c-566f-48ed-974d-b517c4a98596\") " pod="openstack/dnsmasq-dns-74dfc89d77-kt62t" Jan 27 09:11:49 crc kubenswrapper[4985]: I0127 09:11:49.421714 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7f5e2b3c-566f-48ed-974d-b517c4a98596-dns-swift-storage-0\") pod \"dnsmasq-dns-74dfc89d77-kt62t\" (UID: \"7f5e2b3c-566f-48ed-974d-b517c4a98596\") " pod="openstack/dnsmasq-dns-74dfc89d77-kt62t" Jan 27 09:11:49 crc kubenswrapper[4985]: I0127 09:11:49.441148 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmvkd\" (UniqueName: \"kubernetes.io/projected/7f5e2b3c-566f-48ed-974d-b517c4a98596-kube-api-access-gmvkd\") pod \"dnsmasq-dns-74dfc89d77-kt62t\" (UID: \"7f5e2b3c-566f-48ed-974d-b517c4a98596\") " pod="openstack/dnsmasq-dns-74dfc89d77-kt62t" Jan 27 09:11:49 crc kubenswrapper[4985]: I0127 09:11:49.544370 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74dfc89d77-kt62t" Jan 27 09:11:49 crc kubenswrapper[4985]: I0127 09:11:49.794042 4985 generic.go:334] "Generic (PLEG): container finished" podID="190bdfc7-f0d3-4339-9048-2bf40f7d19e7" containerID="42711aef53726f1f77c5c4deb95994b0b5d28e2faffa75003487676daf29292e" exitCode=0 Jan 27 09:11:49 crc kubenswrapper[4985]: I0127 09:11:49.794091 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8db84466c-d9wbs" event={"ID":"190bdfc7-f0d3-4339-9048-2bf40f7d19e7","Type":"ContainerDied","Data":"42711aef53726f1f77c5c4deb95994b0b5d28e2faffa75003487676daf29292e"} Jan 27 09:11:49 crc kubenswrapper[4985]: I0127 09:11:49.794806 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8db84466c-d9wbs" event={"ID":"190bdfc7-f0d3-4339-9048-2bf40f7d19e7","Type":"ContainerStarted","Data":"effd351ca824cd94fa072c95f0283d8c3e2058046f9f3742af2a97e470cfbbe6"} Jan 27 09:11:49 crc kubenswrapper[4985]: I0127 09:11:49.795993 4985 generic.go:334] "Generic (PLEG): container finished" podID="933bd11f-35db-489e-aacf-b2ba95de3154" containerID="c1b73e907b51abb61a6365bb340ea6d60923dbba468110ea419cce35020a2db2" exitCode=0 Jan 27 09:11:49 crc kubenswrapper[4985]: I0127 09:11:49.796055 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-jw5fd" event={"ID":"933bd11f-35db-489e-aacf-b2ba95de3154","Type":"ContainerDied","Data":"c1b73e907b51abb61a6365bb340ea6d60923dbba468110ea419cce35020a2db2"} Jan 27 09:11:50 crc kubenswrapper[4985]: I0127 09:11:50.036342 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74dfc89d77-kt62t"] Jan 27 09:11:50 crc kubenswrapper[4985]: E0127 09:11:50.054676 4985 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Jan 27 09:11:50 crc kubenswrapper[4985]: rpc error: code = Unknown desc = container create failed: mount 
`/var/lib/kubelet/pods/190bdfc7-f0d3-4339-9048-2bf40f7d19e7/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 27 09:11:50 crc kubenswrapper[4985]: > podSandboxID="effd351ca824cd94fa072c95f0283d8c3e2058046f9f3742af2a97e470cfbbe6" Jan 27 09:11:50 crc kubenswrapper[4985]: E0127 09:11:50.054889 4985 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 27 09:11:50 crc kubenswrapper[4985]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n66chbbh56dh7fhfh68chf9hfdhbdh587h5b9h568h68fh77h5b5h559h577h687h574h5d5h584h8chd9hb4h66h566h545h699h564h568h66fhc9q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-swift-storage-0,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-swift-storage-0,SubPath:dns-swift-storage-0,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver
-nb,SubPath:ovsdbserver-nb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-sb,SubPath:ovsdbserver-sb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fvj7b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-8db84466c-d9wbs_openstack(190bdfc7-f0d3-4339-9048-2bf40f7d19e7): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/190bdfc7-f0d3-4339-9048-2bf40f7d19e7/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 27 09:11:50 crc kubenswrapper[4985]: > logger="UnhandledError" Jan 27 09:11:50 crc 
kubenswrapper[4985]: E0127 09:11:50.056046 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/190bdfc7-f0d3-4339-9048-2bf40f7d19e7/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-8db84466c-d9wbs" podUID="190bdfc7-f0d3-4339-9048-2bf40f7d19e7" Jan 27 09:11:50 crc kubenswrapper[4985]: W0127 09:11:50.064747 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f5e2b3c_566f_48ed_974d_b517c4a98596.slice/crio-252aa4e965c2cf82b1d28c04867d8001ab99360758ff2c81396e5e375049430f WatchSource:0}: Error finding container 252aa4e965c2cf82b1d28c04867d8001ab99360758ff2c81396e5e375049430f: Status 404 returned error can't find the container with id 252aa4e965c2cf82b1d28c04867d8001ab99360758ff2c81396e5e375049430f Jan 27 09:11:50 crc kubenswrapper[4985]: I0127 09:11:50.807230 4985 generic.go:334] "Generic (PLEG): container finished" podID="7f5e2b3c-566f-48ed-974d-b517c4a98596" containerID="6cec3af15c3483536501e0c26d28c5d28592c97669ec287d4b8e7eb3a5f5e156" exitCode=0 Jan 27 09:11:50 crc kubenswrapper[4985]: I0127 09:11:50.807358 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dfc89d77-kt62t" event={"ID":"7f5e2b3c-566f-48ed-974d-b517c4a98596","Type":"ContainerDied","Data":"6cec3af15c3483536501e0c26d28c5d28592c97669ec287d4b8e7eb3a5f5e156"} Jan 27 09:11:50 crc kubenswrapper[4985]: I0127 09:11:50.807672 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dfc89d77-kt62t" event={"ID":"7f5e2b3c-566f-48ed-974d-b517c4a98596","Type":"ContainerStarted","Data":"252aa4e965c2cf82b1d28c04867d8001ab99360758ff2c81396e5e375049430f"} Jan 27 09:11:51 crc kubenswrapper[4985]: I0127 09:11:51.248546 4985 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openstack/keystone-db-sync-jw5fd" Jan 27 09:11:51 crc kubenswrapper[4985]: I0127 09:11:51.254467 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8db84466c-d9wbs" Jan 27 09:11:51 crc kubenswrapper[4985]: I0127 09:11:51.353099 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/933bd11f-35db-489e-aacf-b2ba95de3154-combined-ca-bundle\") pod \"933bd11f-35db-489e-aacf-b2ba95de3154\" (UID: \"933bd11f-35db-489e-aacf-b2ba95de3154\") " Jan 27 09:11:51 crc kubenswrapper[4985]: I0127 09:11:51.353155 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxmdd\" (UniqueName: \"kubernetes.io/projected/933bd11f-35db-489e-aacf-b2ba95de3154-kube-api-access-bxmdd\") pod \"933bd11f-35db-489e-aacf-b2ba95de3154\" (UID: \"933bd11f-35db-489e-aacf-b2ba95de3154\") " Jan 27 09:11:51 crc kubenswrapper[4985]: I0127 09:11:51.353196 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/933bd11f-35db-489e-aacf-b2ba95de3154-config-data\") pod \"933bd11f-35db-489e-aacf-b2ba95de3154\" (UID: \"933bd11f-35db-489e-aacf-b2ba95de3154\") " Jan 27 09:11:51 crc kubenswrapper[4985]: I0127 09:11:51.353227 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/190bdfc7-f0d3-4339-9048-2bf40f7d19e7-config\") pod \"190bdfc7-f0d3-4339-9048-2bf40f7d19e7\" (UID: \"190bdfc7-f0d3-4339-9048-2bf40f7d19e7\") " Jan 27 09:11:51 crc kubenswrapper[4985]: I0127 09:11:51.353302 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/190bdfc7-f0d3-4339-9048-2bf40f7d19e7-dns-svc\") pod \"190bdfc7-f0d3-4339-9048-2bf40f7d19e7\" (UID: 
\"190bdfc7-f0d3-4339-9048-2bf40f7d19e7\") " Jan 27 09:11:51 crc kubenswrapper[4985]: I0127 09:11:51.353327 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/190bdfc7-f0d3-4339-9048-2bf40f7d19e7-ovsdbserver-nb\") pod \"190bdfc7-f0d3-4339-9048-2bf40f7d19e7\" (UID: \"190bdfc7-f0d3-4339-9048-2bf40f7d19e7\") " Jan 27 09:11:51 crc kubenswrapper[4985]: I0127 09:11:51.353362 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvj7b\" (UniqueName: \"kubernetes.io/projected/190bdfc7-f0d3-4339-9048-2bf40f7d19e7-kube-api-access-fvj7b\") pod \"190bdfc7-f0d3-4339-9048-2bf40f7d19e7\" (UID: \"190bdfc7-f0d3-4339-9048-2bf40f7d19e7\") " Jan 27 09:11:51 crc kubenswrapper[4985]: I0127 09:11:51.353413 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/190bdfc7-f0d3-4339-9048-2bf40f7d19e7-dns-swift-storage-0\") pod \"190bdfc7-f0d3-4339-9048-2bf40f7d19e7\" (UID: \"190bdfc7-f0d3-4339-9048-2bf40f7d19e7\") " Jan 27 09:11:51 crc kubenswrapper[4985]: I0127 09:11:51.353437 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/190bdfc7-f0d3-4339-9048-2bf40f7d19e7-ovsdbserver-sb\") pod \"190bdfc7-f0d3-4339-9048-2bf40f7d19e7\" (UID: \"190bdfc7-f0d3-4339-9048-2bf40f7d19e7\") " Jan 27 09:11:51 crc kubenswrapper[4985]: I0127 09:11:51.360090 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/190bdfc7-f0d3-4339-9048-2bf40f7d19e7-kube-api-access-fvj7b" (OuterVolumeSpecName: "kube-api-access-fvj7b") pod "190bdfc7-f0d3-4339-9048-2bf40f7d19e7" (UID: "190bdfc7-f0d3-4339-9048-2bf40f7d19e7"). InnerVolumeSpecName "kube-api-access-fvj7b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:11:51 crc kubenswrapper[4985]: I0127 09:11:51.388258 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/933bd11f-35db-489e-aacf-b2ba95de3154-kube-api-access-bxmdd" (OuterVolumeSpecName: "kube-api-access-bxmdd") pod "933bd11f-35db-489e-aacf-b2ba95de3154" (UID: "933bd11f-35db-489e-aacf-b2ba95de3154"). InnerVolumeSpecName "kube-api-access-bxmdd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:11:51 crc kubenswrapper[4985]: I0127 09:11:51.408338 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/933bd11f-35db-489e-aacf-b2ba95de3154-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "933bd11f-35db-489e-aacf-b2ba95de3154" (UID: "933bd11f-35db-489e-aacf-b2ba95de3154"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:11:51 crc kubenswrapper[4985]: I0127 09:11:51.427079 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/190bdfc7-f0d3-4339-9048-2bf40f7d19e7-config" (OuterVolumeSpecName: "config") pod "190bdfc7-f0d3-4339-9048-2bf40f7d19e7" (UID: "190bdfc7-f0d3-4339-9048-2bf40f7d19e7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:11:51 crc kubenswrapper[4985]: I0127 09:11:51.427550 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/933bd11f-35db-489e-aacf-b2ba95de3154-config-data" (OuterVolumeSpecName: "config-data") pod "933bd11f-35db-489e-aacf-b2ba95de3154" (UID: "933bd11f-35db-489e-aacf-b2ba95de3154"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:11:51 crc kubenswrapper[4985]: I0127 09:11:51.435801 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/190bdfc7-f0d3-4339-9048-2bf40f7d19e7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "190bdfc7-f0d3-4339-9048-2bf40f7d19e7" (UID: "190bdfc7-f0d3-4339-9048-2bf40f7d19e7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:11:51 crc kubenswrapper[4985]: I0127 09:11:51.437400 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/190bdfc7-f0d3-4339-9048-2bf40f7d19e7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "190bdfc7-f0d3-4339-9048-2bf40f7d19e7" (UID: "190bdfc7-f0d3-4339-9048-2bf40f7d19e7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:11:51 crc kubenswrapper[4985]: I0127 09:11:51.443962 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/190bdfc7-f0d3-4339-9048-2bf40f7d19e7-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "190bdfc7-f0d3-4339-9048-2bf40f7d19e7" (UID: "190bdfc7-f0d3-4339-9048-2bf40f7d19e7"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:11:51 crc kubenswrapper[4985]: I0127 09:11:51.451766 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/190bdfc7-f0d3-4339-9048-2bf40f7d19e7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "190bdfc7-f0d3-4339-9048-2bf40f7d19e7" (UID: "190bdfc7-f0d3-4339-9048-2bf40f7d19e7"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:11:51 crc kubenswrapper[4985]: I0127 09:11:51.455848 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/933bd11f-35db-489e-aacf-b2ba95de3154-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:11:51 crc kubenswrapper[4985]: I0127 09:11:51.455874 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bxmdd\" (UniqueName: \"kubernetes.io/projected/933bd11f-35db-489e-aacf-b2ba95de3154-kube-api-access-bxmdd\") on node \"crc\" DevicePath \"\"" Jan 27 09:11:51 crc kubenswrapper[4985]: I0127 09:11:51.455887 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/933bd11f-35db-489e-aacf-b2ba95de3154-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 09:11:51 crc kubenswrapper[4985]: I0127 09:11:51.455896 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/190bdfc7-f0d3-4339-9048-2bf40f7d19e7-config\") on node \"crc\" DevicePath \"\"" Jan 27 09:11:51 crc kubenswrapper[4985]: I0127 09:11:51.455907 4985 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/190bdfc7-f0d3-4339-9048-2bf40f7d19e7-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 09:11:51 crc kubenswrapper[4985]: I0127 09:11:51.455918 4985 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/190bdfc7-f0d3-4339-9048-2bf40f7d19e7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 09:11:51 crc kubenswrapper[4985]: I0127 09:11:51.455930 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvj7b\" (UniqueName: \"kubernetes.io/projected/190bdfc7-f0d3-4339-9048-2bf40f7d19e7-kube-api-access-fvj7b\") on node \"crc\" DevicePath \"\"" Jan 27 09:11:51 crc kubenswrapper[4985]: I0127 09:11:51.455941 4985 
reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/190bdfc7-f0d3-4339-9048-2bf40f7d19e7-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 09:11:51 crc kubenswrapper[4985]: I0127 09:11:51.455952 4985 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/190bdfc7-f0d3-4339-9048-2bf40f7d19e7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 09:11:51 crc kubenswrapper[4985]: I0127 09:11:51.833824 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dfc89d77-kt62t" event={"ID":"7f5e2b3c-566f-48ed-974d-b517c4a98596","Type":"ContainerStarted","Data":"84e9536ae1200a93ccf94064949e4aadd1f0f6e3cbfb370fa9a4b2457d1479cb"} Jan 27 09:11:51 crc kubenswrapper[4985]: I0127 09:11:51.834603 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74dfc89d77-kt62t" Jan 27 09:11:51 crc kubenswrapper[4985]: I0127 09:11:51.836699 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8db84466c-d9wbs" event={"ID":"190bdfc7-f0d3-4339-9048-2bf40f7d19e7","Type":"ContainerDied","Data":"effd351ca824cd94fa072c95f0283d8c3e2058046f9f3742af2a97e470cfbbe6"} Jan 27 09:11:51 crc kubenswrapper[4985]: I0127 09:11:51.836742 4985 scope.go:117] "RemoveContainer" containerID="42711aef53726f1f77c5c4deb95994b0b5d28e2faffa75003487676daf29292e" Jan 27 09:11:51 crc kubenswrapper[4985]: I0127 09:11:51.836868 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8db84466c-d9wbs" Jan 27 09:11:51 crc kubenswrapper[4985]: I0127 09:11:51.842109 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-jw5fd" event={"ID":"933bd11f-35db-489e-aacf-b2ba95de3154","Type":"ContainerDied","Data":"ece2bdb41770ba06609361983359804cad79299c029f409f09142982e873fb5c"} Jan 27 09:11:51 crc kubenswrapper[4985]: I0127 09:11:51.842163 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ece2bdb41770ba06609361983359804cad79299c029f409f09142982e873fb5c" Jan 27 09:11:51 crc kubenswrapper[4985]: I0127 09:11:51.842239 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-jw5fd" Jan 27 09:11:51 crc kubenswrapper[4985]: I0127 09:11:51.921967 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74dfc89d77-kt62t" podStartSLOduration=2.921941016 podStartE2EDuration="2.921941016s" podCreationTimestamp="2026-01-27 09:11:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:11:51.876346647 +0000 UTC m=+1096.167441488" watchObservedRunningTime="2026-01-27 09:11:51.921941016 +0000 UTC m=+1096.213035857" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.085003 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8db84466c-d9wbs"] Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.094853 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8db84466c-d9wbs"] Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.183098 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74dfc89d77-kt62t"] Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.191033 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-7kllv"] Jan 
27 09:11:52 crc kubenswrapper[4985]: E0127 09:11:52.191420 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="933bd11f-35db-489e-aacf-b2ba95de3154" containerName="keystone-db-sync" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.191438 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="933bd11f-35db-489e-aacf-b2ba95de3154" containerName="keystone-db-sync" Jan 27 09:11:52 crc kubenswrapper[4985]: E0127 09:11:52.191447 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="190bdfc7-f0d3-4339-9048-2bf40f7d19e7" containerName="init" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.191454 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="190bdfc7-f0d3-4339-9048-2bf40f7d19e7" containerName="init" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.191645 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="190bdfc7-f0d3-4339-9048-2bf40f7d19e7" containerName="init" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.191669 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="933bd11f-35db-489e-aacf-b2ba95de3154" containerName="keystone-db-sync" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.192244 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-7kllv" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.194705 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.194992 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.195131 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-xr2ld" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.195251 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.196237 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.219276 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-7kllv"] Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.251964 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5fdbfbc95f-kkc4g"] Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.253563 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5fdbfbc95f-kkc4g" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.283146 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5ac7374e-66ef-46cc-a255-4a3ab7921683-dns-svc\") pod \"dnsmasq-dns-5fdbfbc95f-kkc4g\" (UID: \"5ac7374e-66ef-46cc-a255-4a3ab7921683\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-kkc4g" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.283189 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ac7374e-66ef-46cc-a255-4a3ab7921683-config\") pod \"dnsmasq-dns-5fdbfbc95f-kkc4g\" (UID: \"5ac7374e-66ef-46cc-a255-4a3ab7921683\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-kkc4g" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.283210 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5ac7374e-66ef-46cc-a255-4a3ab7921683-ovsdbserver-sb\") pod \"dnsmasq-dns-5fdbfbc95f-kkc4g\" (UID: \"5ac7374e-66ef-46cc-a255-4a3ab7921683\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-kkc4g" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.283233 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ssgj\" (UniqueName: \"kubernetes.io/projected/5ac7374e-66ef-46cc-a255-4a3ab7921683-kube-api-access-6ssgj\") pod \"dnsmasq-dns-5fdbfbc95f-kkc4g\" (UID: \"5ac7374e-66ef-46cc-a255-4a3ab7921683\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-kkc4g" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.283260 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26a7c987-a27e-4859-be07-7929331e3614-combined-ca-bundle\") pod \"keystone-bootstrap-7kllv\" 
(UID: \"26a7c987-a27e-4859-be07-7929331e3614\") " pod="openstack/keystone-bootstrap-7kllv" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.283276 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5ac7374e-66ef-46cc-a255-4a3ab7921683-ovsdbserver-nb\") pod \"dnsmasq-dns-5fdbfbc95f-kkc4g\" (UID: \"5ac7374e-66ef-46cc-a255-4a3ab7921683\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-kkc4g" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.283301 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26a7c987-a27e-4859-be07-7929331e3614-scripts\") pod \"keystone-bootstrap-7kllv\" (UID: \"26a7c987-a27e-4859-be07-7929331e3614\") " pod="openstack/keystone-bootstrap-7kllv" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.283366 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/26a7c987-a27e-4859-be07-7929331e3614-fernet-keys\") pod \"keystone-bootstrap-7kllv\" (UID: \"26a7c987-a27e-4859-be07-7929331e3614\") " pod="openstack/keystone-bootstrap-7kllv" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.283392 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5ac7374e-66ef-46cc-a255-4a3ab7921683-dns-swift-storage-0\") pod \"dnsmasq-dns-5fdbfbc95f-kkc4g\" (UID: \"5ac7374e-66ef-46cc-a255-4a3ab7921683\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-kkc4g" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.283408 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26a7c987-a27e-4859-be07-7929331e3614-config-data\") pod \"keystone-bootstrap-7kllv\" (UID: 
\"26a7c987-a27e-4859-be07-7929331e3614\") " pod="openstack/keystone-bootstrap-7kllv" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.283436 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/26a7c987-a27e-4859-be07-7929331e3614-credential-keys\") pod \"keystone-bootstrap-7kllv\" (UID: \"26a7c987-a27e-4859-be07-7929331e3614\") " pod="openstack/keystone-bootstrap-7kllv" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.283459 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btbnc\" (UniqueName: \"kubernetes.io/projected/26a7c987-a27e-4859-be07-7929331e3614-kube-api-access-btbnc\") pod \"keystone-bootstrap-7kllv\" (UID: \"26a7c987-a27e-4859-be07-7929331e3614\") " pod="openstack/keystone-bootstrap-7kllv" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.291840 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5fdbfbc95f-kkc4g"] Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.386939 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/26a7c987-a27e-4859-be07-7929331e3614-fernet-keys\") pod \"keystone-bootstrap-7kllv\" (UID: \"26a7c987-a27e-4859-be07-7929331e3614\") " pod="openstack/keystone-bootstrap-7kllv" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.387001 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5ac7374e-66ef-46cc-a255-4a3ab7921683-dns-swift-storage-0\") pod \"dnsmasq-dns-5fdbfbc95f-kkc4g\" (UID: \"5ac7374e-66ef-46cc-a255-4a3ab7921683\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-kkc4g" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.387024 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/26a7c987-a27e-4859-be07-7929331e3614-config-data\") pod \"keystone-bootstrap-7kllv\" (UID: \"26a7c987-a27e-4859-be07-7929331e3614\") " pod="openstack/keystone-bootstrap-7kllv" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.387050 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/26a7c987-a27e-4859-be07-7929331e3614-credential-keys\") pod \"keystone-bootstrap-7kllv\" (UID: \"26a7c987-a27e-4859-be07-7929331e3614\") " pod="openstack/keystone-bootstrap-7kllv" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.387085 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btbnc\" (UniqueName: \"kubernetes.io/projected/26a7c987-a27e-4859-be07-7929331e3614-kube-api-access-btbnc\") pod \"keystone-bootstrap-7kllv\" (UID: \"26a7c987-a27e-4859-be07-7929331e3614\") " pod="openstack/keystone-bootstrap-7kllv" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.387109 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5ac7374e-66ef-46cc-a255-4a3ab7921683-dns-svc\") pod \"dnsmasq-dns-5fdbfbc95f-kkc4g\" (UID: \"5ac7374e-66ef-46cc-a255-4a3ab7921683\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-kkc4g" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.387126 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ac7374e-66ef-46cc-a255-4a3ab7921683-config\") pod \"dnsmasq-dns-5fdbfbc95f-kkc4g\" (UID: \"5ac7374e-66ef-46cc-a255-4a3ab7921683\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-kkc4g" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.387142 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5ac7374e-66ef-46cc-a255-4a3ab7921683-ovsdbserver-sb\") pod 
\"dnsmasq-dns-5fdbfbc95f-kkc4g\" (UID: \"5ac7374e-66ef-46cc-a255-4a3ab7921683\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-kkc4g" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.387160 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ssgj\" (UniqueName: \"kubernetes.io/projected/5ac7374e-66ef-46cc-a255-4a3ab7921683-kube-api-access-6ssgj\") pod \"dnsmasq-dns-5fdbfbc95f-kkc4g\" (UID: \"5ac7374e-66ef-46cc-a255-4a3ab7921683\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-kkc4g" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.387185 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26a7c987-a27e-4859-be07-7929331e3614-combined-ca-bundle\") pod \"keystone-bootstrap-7kllv\" (UID: \"26a7c987-a27e-4859-be07-7929331e3614\") " pod="openstack/keystone-bootstrap-7kllv" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.387203 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5ac7374e-66ef-46cc-a255-4a3ab7921683-ovsdbserver-nb\") pod \"dnsmasq-dns-5fdbfbc95f-kkc4g\" (UID: \"5ac7374e-66ef-46cc-a255-4a3ab7921683\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-kkc4g" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.387229 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26a7c987-a27e-4859-be07-7929331e3614-scripts\") pod \"keystone-bootstrap-7kllv\" (UID: \"26a7c987-a27e-4859-be07-7929331e3614\") " pod="openstack/keystone-bootstrap-7kllv" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.389001 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ac7374e-66ef-46cc-a255-4a3ab7921683-config\") pod \"dnsmasq-dns-5fdbfbc95f-kkc4g\" (UID: \"5ac7374e-66ef-46cc-a255-4a3ab7921683\") " 
pod="openstack/dnsmasq-dns-5fdbfbc95f-kkc4g" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.389607 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5ac7374e-66ef-46cc-a255-4a3ab7921683-ovsdbserver-sb\") pod \"dnsmasq-dns-5fdbfbc95f-kkc4g\" (UID: \"5ac7374e-66ef-46cc-a255-4a3ab7921683\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-kkc4g" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.389895 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5ac7374e-66ef-46cc-a255-4a3ab7921683-dns-svc\") pod \"dnsmasq-dns-5fdbfbc95f-kkc4g\" (UID: \"5ac7374e-66ef-46cc-a255-4a3ab7921683\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-kkc4g" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.390590 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5ac7374e-66ef-46cc-a255-4a3ab7921683-ovsdbserver-nb\") pod \"dnsmasq-dns-5fdbfbc95f-kkc4g\" (UID: \"5ac7374e-66ef-46cc-a255-4a3ab7921683\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-kkc4g" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.392717 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5ac7374e-66ef-46cc-a255-4a3ab7921683-dns-swift-storage-0\") pod \"dnsmasq-dns-5fdbfbc95f-kkc4g\" (UID: \"5ac7374e-66ef-46cc-a255-4a3ab7921683\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-kkc4g" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.414976 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26a7c987-a27e-4859-be07-7929331e3614-config-data\") pod \"keystone-bootstrap-7kllv\" (UID: \"26a7c987-a27e-4859-be07-7929331e3614\") " pod="openstack/keystone-bootstrap-7kllv" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.419474 4985 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/26a7c987-a27e-4859-be07-7929331e3614-credential-keys\") pod \"keystone-bootstrap-7kllv\" (UID: \"26a7c987-a27e-4859-be07-7929331e3614\") " pod="openstack/keystone-bootstrap-7kllv" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.423325 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ssgj\" (UniqueName: \"kubernetes.io/projected/5ac7374e-66ef-46cc-a255-4a3ab7921683-kube-api-access-6ssgj\") pod \"dnsmasq-dns-5fdbfbc95f-kkc4g\" (UID: \"5ac7374e-66ef-46cc-a255-4a3ab7921683\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-kkc4g" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.432107 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26a7c987-a27e-4859-be07-7929331e3614-scripts\") pod \"keystone-bootstrap-7kllv\" (UID: \"26a7c987-a27e-4859-be07-7929331e3614\") " pod="openstack/keystone-bootstrap-7kllv" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.432699 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/26a7c987-a27e-4859-be07-7929331e3614-fernet-keys\") pod \"keystone-bootstrap-7kllv\" (UID: \"26a7c987-a27e-4859-be07-7929331e3614\") " pod="openstack/keystone-bootstrap-7kllv" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.433112 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26a7c987-a27e-4859-be07-7929331e3614-combined-ca-bundle\") pod \"keystone-bootstrap-7kllv\" (UID: \"26a7c987-a27e-4859-be07-7929331e3614\") " pod="openstack/keystone-bootstrap-7kllv" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.465380 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btbnc\" (UniqueName: 
\"kubernetes.io/projected/26a7c987-a27e-4859-be07-7929331e3614-kube-api-access-btbnc\") pod \"keystone-bootstrap-7kllv\" (UID: \"26a7c987-a27e-4859-be07-7929331e3614\") " pod="openstack/keystone-bootstrap-7kllv" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.481625 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="190bdfc7-f0d3-4339-9048-2bf40f7d19e7" path="/var/lib/kubelet/pods/190bdfc7-f0d3-4339-9048-2bf40f7d19e7/volumes" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.482320 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-ftb9l"] Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.490280 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-8bc58698f-rrrdv"] Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.494474 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-8bc58698f-rrrdv" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.505589 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-ftb9l" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.509215 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-j2q97" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.509382 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-w2rf5" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.509435 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.509878 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.510074 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.510071 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.510324 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.517158 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-7kllv" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.526426 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-ftb9l"] Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.554600 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-8bc58698f-rrrdv"] Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.575755 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-95tb6"] Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.576916 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-95tb6" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.582136 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.582453 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-hjfvv" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.584884 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fdbfbc95f-kkc4g" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.592560 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82de7da8-bbcb-4fbd-b942-23cbe2e1bd8f-combined-ca-bundle\") pod \"neutron-db-sync-ftb9l\" (UID: \"82de7da8-bbcb-4fbd-b942-23cbe2e1bd8f\") " pod="openstack/neutron-db-sync-ftb9l" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.592603 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tzx7\" (UniqueName: \"kubernetes.io/projected/82de7da8-bbcb-4fbd-b942-23cbe2e1bd8f-kube-api-access-9tzx7\") pod \"neutron-db-sync-ftb9l\" (UID: \"82de7da8-bbcb-4fbd-b942-23cbe2e1bd8f\") " pod="openstack/neutron-db-sync-ftb9l" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.592868 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sctm\" (UniqueName: \"kubernetes.io/projected/c1a55e00-a92c-468e-b440-72254c05314e-kube-api-access-6sctm\") pod \"horizon-8bc58698f-rrrdv\" (UID: \"c1a55e00-a92c-468e-b440-72254c05314e\") " pod="openstack/horizon-8bc58698f-rrrdv" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.592908 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/82de7da8-bbcb-4fbd-b942-23cbe2e1bd8f-config\") pod \"neutron-db-sync-ftb9l\" (UID: \"82de7da8-bbcb-4fbd-b942-23cbe2e1bd8f\") " pod="openstack/neutron-db-sync-ftb9l" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.592943 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1a55e00-a92c-468e-b440-72254c05314e-logs\") pod \"horizon-8bc58698f-rrrdv\" (UID: \"c1a55e00-a92c-468e-b440-72254c05314e\") " pod="openstack/horizon-8bc58698f-rrrdv" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.592972 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c1a55e00-a92c-468e-b440-72254c05314e-scripts\") pod \"horizon-8bc58698f-rrrdv\" (UID: \"c1a55e00-a92c-468e-b440-72254c05314e\") " pod="openstack/horizon-8bc58698f-rrrdv" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.593012 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c1a55e00-a92c-468e-b440-72254c05314e-config-data\") pod \"horizon-8bc58698f-rrrdv\" (UID: \"c1a55e00-a92c-468e-b440-72254c05314e\") " pod="openstack/horizon-8bc58698f-rrrdv" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.593035 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c1a55e00-a92c-468e-b440-72254c05314e-horizon-secret-key\") pod \"horizon-8bc58698f-rrrdv\" (UID: \"c1a55e00-a92c-468e-b440-72254c05314e\") " pod="openstack/horizon-8bc58698f-rrrdv" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.639735 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-779ccf4965-4dzg4"] Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.643615 4985 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/horizon-779ccf4965-4dzg4" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.696282 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c1a55e00-a92c-468e-b440-72254c05314e-scripts\") pod \"horizon-8bc58698f-rrrdv\" (UID: \"c1a55e00-a92c-468e-b440-72254c05314e\") " pod="openstack/horizon-8bc58698f-rrrdv" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.696380 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c1a55e00-a92c-468e-b440-72254c05314e-config-data\") pod \"horizon-8bc58698f-rrrdv\" (UID: \"c1a55e00-a92c-468e-b440-72254c05314e\") " pod="openstack/horizon-8bc58698f-rrrdv" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.696428 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/14f1ccc0-f3da-4c1b-b0cd-b0c3f1b4f363-scripts\") pod \"horizon-779ccf4965-4dzg4\" (UID: \"14f1ccc0-f3da-4c1b-b0cd-b0c3f1b4f363\") " pod="openstack/horizon-779ccf4965-4dzg4" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.696446 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c1a55e00-a92c-468e-b440-72254c05314e-horizon-secret-key\") pod \"horizon-8bc58698f-rrrdv\" (UID: \"c1a55e00-a92c-468e-b440-72254c05314e\") " pod="openstack/horizon-8bc58698f-rrrdv" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.696465 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82de7da8-bbcb-4fbd-b942-23cbe2e1bd8f-combined-ca-bundle\") pod \"neutron-db-sync-ftb9l\" (UID: \"82de7da8-bbcb-4fbd-b942-23cbe2e1bd8f\") " pod="openstack/neutron-db-sync-ftb9l" Jan 27 09:11:52 crc 
kubenswrapper[4985]: I0127 09:11:52.696497 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tzx7\" (UniqueName: \"kubernetes.io/projected/82de7da8-bbcb-4fbd-b942-23cbe2e1bd8f-kube-api-access-9tzx7\") pod \"neutron-db-sync-ftb9l\" (UID: \"82de7da8-bbcb-4fbd-b942-23cbe2e1bd8f\") " pod="openstack/neutron-db-sync-ftb9l" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.696546 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14f1ccc0-f3da-4c1b-b0cd-b0c3f1b4f363-logs\") pod \"horizon-779ccf4965-4dzg4\" (UID: \"14f1ccc0-f3da-4c1b-b0cd-b0c3f1b4f363\") " pod="openstack/horizon-779ccf4965-4dzg4" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.696575 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/14f1ccc0-f3da-4c1b-b0cd-b0c3f1b4f363-horizon-secret-key\") pod \"horizon-779ccf4965-4dzg4\" (UID: \"14f1ccc0-f3da-4c1b-b0cd-b0c3f1b4f363\") " pod="openstack/horizon-779ccf4965-4dzg4" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.696609 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z46sr\" (UniqueName: \"kubernetes.io/projected/31214ba8-5f89-4b54-9293-b6cd43c8cbe5-kube-api-access-z46sr\") pod \"barbican-db-sync-95tb6\" (UID: \"31214ba8-5f89-4b54-9293-b6cd43c8cbe5\") " pod="openstack/barbican-db-sync-95tb6" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.696634 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/31214ba8-5f89-4b54-9293-b6cd43c8cbe5-db-sync-config-data\") pod \"barbican-db-sync-95tb6\" (UID: \"31214ba8-5f89-4b54-9293-b6cd43c8cbe5\") " pod="openstack/barbican-db-sync-95tb6" Jan 27 09:11:52 crc 
kubenswrapper[4985]: I0127 09:11:52.696664 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6sctm\" (UniqueName: \"kubernetes.io/projected/c1a55e00-a92c-468e-b440-72254c05314e-kube-api-access-6sctm\") pod \"horizon-8bc58698f-rrrdv\" (UID: \"c1a55e00-a92c-468e-b440-72254c05314e\") " pod="openstack/horizon-8bc58698f-rrrdv" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.696720 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/82de7da8-bbcb-4fbd-b942-23cbe2e1bd8f-config\") pod \"neutron-db-sync-ftb9l\" (UID: \"82de7da8-bbcb-4fbd-b942-23cbe2e1bd8f\") " pod="openstack/neutron-db-sync-ftb9l" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.696738 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31214ba8-5f89-4b54-9293-b6cd43c8cbe5-combined-ca-bundle\") pod \"barbican-db-sync-95tb6\" (UID: \"31214ba8-5f89-4b54-9293-b6cd43c8cbe5\") " pod="openstack/barbican-db-sync-95tb6" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.696773 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/14f1ccc0-f3da-4c1b-b0cd-b0c3f1b4f363-config-data\") pod \"horizon-779ccf4965-4dzg4\" (UID: \"14f1ccc0-f3da-4c1b-b0cd-b0c3f1b4f363\") " pod="openstack/horizon-779ccf4965-4dzg4" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.696801 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1a55e00-a92c-468e-b440-72254c05314e-logs\") pod \"horizon-8bc58698f-rrrdv\" (UID: \"c1a55e00-a92c-468e-b440-72254c05314e\") " pod="openstack/horizon-8bc58698f-rrrdv" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.696839 4985 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdb78\" (UniqueName: \"kubernetes.io/projected/14f1ccc0-f3da-4c1b-b0cd-b0c3f1b4f363-kube-api-access-sdb78\") pod \"horizon-779ccf4965-4dzg4\" (UID: \"14f1ccc0-f3da-4c1b-b0cd-b0c3f1b4f363\") " pod="openstack/horizon-779ccf4965-4dzg4" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.697897 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c1a55e00-a92c-468e-b440-72254c05314e-scripts\") pod \"horizon-8bc58698f-rrrdv\" (UID: \"c1a55e00-a92c-468e-b440-72254c05314e\") " pod="openstack/horizon-8bc58698f-rrrdv" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.699156 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c1a55e00-a92c-468e-b440-72254c05314e-config-data\") pod \"horizon-8bc58698f-rrrdv\" (UID: \"c1a55e00-a92c-468e-b440-72254c05314e\") " pod="openstack/horizon-8bc58698f-rrrdv" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.708717 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-95tb6"] Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.709904 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1a55e00-a92c-468e-b440-72254c05314e-logs\") pod \"horizon-8bc58698f-rrrdv\" (UID: \"c1a55e00-a92c-468e-b440-72254c05314e\") " pod="openstack/horizon-8bc58698f-rrrdv" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.722147 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c1a55e00-a92c-468e-b440-72254c05314e-horizon-secret-key\") pod \"horizon-8bc58698f-rrrdv\" (UID: \"c1a55e00-a92c-468e-b440-72254c05314e\") " pod="openstack/horizon-8bc58698f-rrrdv" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.722212 4985 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/82de7da8-bbcb-4fbd-b942-23cbe2e1bd8f-config\") pod \"neutron-db-sync-ftb9l\" (UID: \"82de7da8-bbcb-4fbd-b942-23cbe2e1bd8f\") " pod="openstack/neutron-db-sync-ftb9l" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.729455 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82de7da8-bbcb-4fbd-b942-23cbe2e1bd8f-combined-ca-bundle\") pod \"neutron-db-sync-ftb9l\" (UID: \"82de7da8-bbcb-4fbd-b942-23cbe2e1bd8f\") " pod="openstack/neutron-db-sync-ftb9l" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.754586 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-779ccf4965-4dzg4"] Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.756248 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tzx7\" (UniqueName: \"kubernetes.io/projected/82de7da8-bbcb-4fbd-b942-23cbe2e1bd8f-kube-api-access-9tzx7\") pod \"neutron-db-sync-ftb9l\" (UID: \"82de7da8-bbcb-4fbd-b942-23cbe2e1bd8f\") " pod="openstack/neutron-db-sync-ftb9l" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.778264 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6sctm\" (UniqueName: \"kubernetes.io/projected/c1a55e00-a92c-468e-b440-72254c05314e-kube-api-access-6sctm\") pod \"horizon-8bc58698f-rrrdv\" (UID: \"c1a55e00-a92c-468e-b440-72254c05314e\") " pod="openstack/horizon-8bc58698f-rrrdv" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.799707 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14f1ccc0-f3da-4c1b-b0cd-b0c3f1b4f363-logs\") pod \"horizon-779ccf4965-4dzg4\" (UID: \"14f1ccc0-f3da-4c1b-b0cd-b0c3f1b4f363\") " pod="openstack/horizon-779ccf4965-4dzg4" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 
09:11:52.799927 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/14f1ccc0-f3da-4c1b-b0cd-b0c3f1b4f363-horizon-secret-key\") pod \"horizon-779ccf4965-4dzg4\" (UID: \"14f1ccc0-f3da-4c1b-b0cd-b0c3f1b4f363\") " pod="openstack/horizon-779ccf4965-4dzg4" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.800030 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z46sr\" (UniqueName: \"kubernetes.io/projected/31214ba8-5f89-4b54-9293-b6cd43c8cbe5-kube-api-access-z46sr\") pod \"barbican-db-sync-95tb6\" (UID: \"31214ba8-5f89-4b54-9293-b6cd43c8cbe5\") " pod="openstack/barbican-db-sync-95tb6" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.800102 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/31214ba8-5f89-4b54-9293-b6cd43c8cbe5-db-sync-config-data\") pod \"barbican-db-sync-95tb6\" (UID: \"31214ba8-5f89-4b54-9293-b6cd43c8cbe5\") " pod="openstack/barbican-db-sync-95tb6" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.800223 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31214ba8-5f89-4b54-9293-b6cd43c8cbe5-combined-ca-bundle\") pod \"barbican-db-sync-95tb6\" (UID: \"31214ba8-5f89-4b54-9293-b6cd43c8cbe5\") " pod="openstack/barbican-db-sync-95tb6" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.800295 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/14f1ccc0-f3da-4c1b-b0cd-b0c3f1b4f363-config-data\") pod \"horizon-779ccf4965-4dzg4\" (UID: \"14f1ccc0-f3da-4c1b-b0cd-b0c3f1b4f363\") " pod="openstack/horizon-779ccf4965-4dzg4" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.800371 4985 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-sdb78\" (UniqueName: \"kubernetes.io/projected/14f1ccc0-f3da-4c1b-b0cd-b0c3f1b4f363-kube-api-access-sdb78\") pod \"horizon-779ccf4965-4dzg4\" (UID: \"14f1ccc0-f3da-4c1b-b0cd-b0c3f1b4f363\") " pod="openstack/horizon-779ccf4965-4dzg4" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.800472 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/14f1ccc0-f3da-4c1b-b0cd-b0c3f1b4f363-scripts\") pod \"horizon-779ccf4965-4dzg4\" (UID: \"14f1ccc0-f3da-4c1b-b0cd-b0c3f1b4f363\") " pod="openstack/horizon-779ccf4965-4dzg4" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.801210 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/14f1ccc0-f3da-4c1b-b0cd-b0c3f1b4f363-scripts\") pod \"horizon-779ccf4965-4dzg4\" (UID: \"14f1ccc0-f3da-4c1b-b0cd-b0c3f1b4f363\") " pod="openstack/horizon-779ccf4965-4dzg4" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.804428 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/14f1ccc0-f3da-4c1b-b0cd-b0c3f1b4f363-config-data\") pod \"horizon-779ccf4965-4dzg4\" (UID: \"14f1ccc0-f3da-4c1b-b0cd-b0c3f1b4f363\") " pod="openstack/horizon-779ccf4965-4dzg4" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.808195 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14f1ccc0-f3da-4c1b-b0cd-b0c3f1b4f363-logs\") pod \"horizon-779ccf4965-4dzg4\" (UID: \"14f1ccc0-f3da-4c1b-b0cd-b0c3f1b4f363\") " pod="openstack/horizon-779ccf4965-4dzg4" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.815532 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/14f1ccc0-f3da-4c1b-b0cd-b0c3f1b4f363-horizon-secret-key\") pod 
\"horizon-779ccf4965-4dzg4\" (UID: \"14f1ccc0-f3da-4c1b-b0cd-b0c3f1b4f363\") " pod="openstack/horizon-779ccf4965-4dzg4" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.816709 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31214ba8-5f89-4b54-9293-b6cd43c8cbe5-combined-ca-bundle\") pod \"barbican-db-sync-95tb6\" (UID: \"31214ba8-5f89-4b54-9293-b6cd43c8cbe5\") " pod="openstack/barbican-db-sync-95tb6" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.823423 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/31214ba8-5f89-4b54-9293-b6cd43c8cbe5-db-sync-config-data\") pod \"barbican-db-sync-95tb6\" (UID: \"31214ba8-5f89-4b54-9293-b6cd43c8cbe5\") " pod="openstack/barbican-db-sync-95tb6" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.829078 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fdbfbc95f-kkc4g"] Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.841527 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-8bc58698f-rrrdv" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.845624 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z46sr\" (UniqueName: \"kubernetes.io/projected/31214ba8-5f89-4b54-9293-b6cd43c8cbe5-kube-api-access-z46sr\") pod \"barbican-db-sync-95tb6\" (UID: \"31214ba8-5f89-4b54-9293-b6cd43c8cbe5\") " pod="openstack/barbican-db-sync-95tb6" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.846506 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdb78\" (UniqueName: \"kubernetes.io/projected/14f1ccc0-f3da-4c1b-b0cd-b0c3f1b4f363-kube-api-access-sdb78\") pod \"horizon-779ccf4965-4dzg4\" (UID: \"14f1ccc0-f3da-4c1b-b0cd-b0c3f1b4f363\") " pod="openstack/horizon-779ccf4965-4dzg4" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.855871 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-5tmw8"] Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.857442 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-5tmw8" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.877616 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.877645 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-l7nsl" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.878288 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.885801 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-ftb9l" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.887397 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.890040 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.892619 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.894669 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.902783 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c2de0653-57ca-4d6b-a8a7-10b39b9c4678-scripts\") pod \"placement-db-sync-5tmw8\" (UID: \"c2de0653-57ca-4d6b-a8a7-10b39b9c4678\") " pod="openstack/placement-db-sync-5tmw8" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.902861 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-px878\" (UniqueName: \"kubernetes.io/projected/c2de0653-57ca-4d6b-a8a7-10b39b9c4678-kube-api-access-px878\") pod \"placement-db-sync-5tmw8\" (UID: \"c2de0653-57ca-4d6b-a8a7-10b39b9c4678\") " pod="openstack/placement-db-sync-5tmw8" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.902902 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2de0653-57ca-4d6b-a8a7-10b39b9c4678-config-data\") pod \"placement-db-sync-5tmw8\" (UID: \"c2de0653-57ca-4d6b-a8a7-10b39b9c4678\") " pod="openstack/placement-db-sync-5tmw8" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.902937 4985 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2de0653-57ca-4d6b-a8a7-10b39b9c4678-combined-ca-bundle\") pod \"placement-db-sync-5tmw8\" (UID: \"c2de0653-57ca-4d6b-a8a7-10b39b9c4678\") " pod="openstack/placement-db-sync-5tmw8" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.902956 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2de0653-57ca-4d6b-a8a7-10b39b9c4678-logs\") pod \"placement-db-sync-5tmw8\" (UID: \"c2de0653-57ca-4d6b-a8a7-10b39b9c4678\") " pod="openstack/placement-db-sync-5tmw8" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.916083 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-95tb6" Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.932081 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-5tmw8"] Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.947862 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 09:11:52 crc kubenswrapper[4985]: I0127 09:11:52.969099 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.001938 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.004983 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.008678 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-tq9nh" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.009264 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/61437724-d73d-4fe5-afbc-b4994d1eda63-run-httpd\") pod \"ceilometer-0\" (UID: \"61437724-d73d-4fe5-afbc-b4994d1eda63\") " pod="openstack/ceilometer-0" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.009414 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/61437724-d73d-4fe5-afbc-b4994d1eda63-log-httpd\") pod \"ceilometer-0\" (UID: \"61437724-d73d-4fe5-afbc-b4994d1eda63\") " pod="openstack/ceilometer-0" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.010422 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-779ccf4965-4dzg4" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.018523 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.021676 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.022929 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61437724-d73d-4fe5-afbc-b4994d1eda63-config-data\") pod \"ceilometer-0\" (UID: \"61437724-d73d-4fe5-afbc-b4994d1eda63\") " pod="openstack/ceilometer-0" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.023077 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c2de0653-57ca-4d6b-a8a7-10b39b9c4678-scripts\") pod \"placement-db-sync-5tmw8\" (UID: \"c2de0653-57ca-4d6b-a8a7-10b39b9c4678\") " pod="openstack/placement-db-sync-5tmw8" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.031502 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zv7fh\" (UniqueName: \"kubernetes.io/projected/61437724-d73d-4fe5-afbc-b4994d1eda63-kube-api-access-zv7fh\") pod \"ceilometer-0\" (UID: \"61437724-d73d-4fe5-afbc-b4994d1eda63\") " pod="openstack/ceilometer-0" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.031687 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-px878\" (UniqueName: \"kubernetes.io/projected/c2de0653-57ca-4d6b-a8a7-10b39b9c4678-kube-api-access-px878\") pod \"placement-db-sync-5tmw8\" (UID: \"c2de0653-57ca-4d6b-a8a7-10b39b9c4678\") " pod="openstack/placement-db-sync-5tmw8" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.031880 4985 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61437724-d73d-4fe5-afbc-b4994d1eda63-scripts\") pod \"ceilometer-0\" (UID: \"61437724-d73d-4fe5-afbc-b4994d1eda63\") " pod="openstack/ceilometer-0" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.031936 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2de0653-57ca-4d6b-a8a7-10b39b9c4678-config-data\") pod \"placement-db-sync-5tmw8\" (UID: \"c2de0653-57ca-4d6b-a8a7-10b39b9c4678\") " pod="openstack/placement-db-sync-5tmw8" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.032067 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2de0653-57ca-4d6b-a8a7-10b39b9c4678-combined-ca-bundle\") pod \"placement-db-sync-5tmw8\" (UID: \"c2de0653-57ca-4d6b-a8a7-10b39b9c4678\") " pod="openstack/placement-db-sync-5tmw8" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.032104 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/61437724-d73d-4fe5-afbc-b4994d1eda63-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"61437724-d73d-4fe5-afbc-b4994d1eda63\") " pod="openstack/ceilometer-0" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.032156 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2de0653-57ca-4d6b-a8a7-10b39b9c4678-logs\") pod \"placement-db-sync-5tmw8\" (UID: \"c2de0653-57ca-4d6b-a8a7-10b39b9c4678\") " pod="openstack/placement-db-sync-5tmw8" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.032188 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/61437724-d73d-4fe5-afbc-b4994d1eda63-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"61437724-d73d-4fe5-afbc-b4994d1eda63\") " pod="openstack/ceilometer-0" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.033674 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2de0653-57ca-4d6b-a8a7-10b39b9c4678-logs\") pod \"placement-db-sync-5tmw8\" (UID: \"c2de0653-57ca-4d6b-a8a7-10b39b9c4678\") " pod="openstack/placement-db-sync-5tmw8" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.051912 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c2de0653-57ca-4d6b-a8a7-10b39b9c4678-scripts\") pod \"placement-db-sync-5tmw8\" (UID: \"c2de0653-57ca-4d6b-a8a7-10b39b9c4678\") " pod="openstack/placement-db-sync-5tmw8" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.060944 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2de0653-57ca-4d6b-a8a7-10b39b9c4678-config-data\") pod \"placement-db-sync-5tmw8\" (UID: \"c2de0653-57ca-4d6b-a8a7-10b39b9c4678\") " pod="openstack/placement-db-sync-5tmw8" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.061219 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2de0653-57ca-4d6b-a8a7-10b39b9c4678-combined-ca-bundle\") pod \"placement-db-sync-5tmw8\" (UID: \"c2de0653-57ca-4d6b-a8a7-10b39b9c4678\") " pod="openstack/placement-db-sync-5tmw8" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.063942 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6f6f8cb849-5zpnv"] Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.066078 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f6f8cb849-5zpnv" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.082060 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-px878\" (UniqueName: \"kubernetes.io/projected/c2de0653-57ca-4d6b-a8a7-10b39b9c4678-kube-api-access-px878\") pod \"placement-db-sync-5tmw8\" (UID: \"c2de0653-57ca-4d6b-a8a7-10b39b9c4678\") " pod="openstack/placement-db-sync-5tmw8" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.092138 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f6f8cb849-5zpnv"] Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.114995 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.116408 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.128188 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.149311 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bdc4ad06-4155-49c6-b6aa-e82d8774f903-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6f8cb849-5zpnv\" (UID: \"bdc4ad06-4155-49c6-b6aa-e82d8774f903\") " pod="openstack/dnsmasq-dns-6f6f8cb849-5zpnv" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.149433 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61437724-d73d-4fe5-afbc-b4994d1eda63-config-data\") pod \"ceilometer-0\" (UID: \"61437724-d73d-4fe5-afbc-b4994d1eda63\") " pod="openstack/ceilometer-0" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.149496 4985 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9d624139-0daf-4992-9aa9-82305991e2b0-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"9d624139-0daf-4992-9aa9-82305991e2b0\") " pod="openstack/glance-default-external-api-0" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.158644 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgwgj\" (UniqueName: \"kubernetes.io/projected/bdc4ad06-4155-49c6-b6aa-e82d8774f903-kube-api-access-pgwgj\") pod \"dnsmasq-dns-6f6f8cb849-5zpnv\" (UID: \"bdc4ad06-4155-49c6-b6aa-e82d8774f903\") " pod="openstack/dnsmasq-dns-6f6f8cb849-5zpnv" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.158738 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bdc4ad06-4155-49c6-b6aa-e82d8774f903-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6f8cb849-5zpnv\" (UID: \"bdc4ad06-4155-49c6-b6aa-e82d8774f903\") " pod="openstack/dnsmasq-dns-6f6f8cb849-5zpnv" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.158823 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zv7fh\" (UniqueName: \"kubernetes.io/projected/61437724-d73d-4fe5-afbc-b4994d1eda63-kube-api-access-zv7fh\") pod \"ceilometer-0\" (UID: \"61437724-d73d-4fe5-afbc-b4994d1eda63\") " pod="openstack/ceilometer-0" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.158845 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bdc4ad06-4155-49c6-b6aa-e82d8774f903-dns-svc\") pod \"dnsmasq-dns-6f6f8cb849-5zpnv\" (UID: \"bdc4ad06-4155-49c6-b6aa-e82d8774f903\") " pod="openstack/dnsmasq-dns-6f6f8cb849-5zpnv" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.158869 4985 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bdc4ad06-4155-49c6-b6aa-e82d8774f903-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6f8cb849-5zpnv\" (UID: \"bdc4ad06-4155-49c6-b6aa-e82d8774f903\") " pod="openstack/dnsmasq-dns-6f6f8cb849-5zpnv" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.158901 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdc4ad06-4155-49c6-b6aa-e82d8774f903-config\") pod \"dnsmasq-dns-6f6f8cb849-5zpnv\" (UID: \"bdc4ad06-4155-49c6-b6aa-e82d8774f903\") " pod="openstack/dnsmasq-dns-6f6f8cb849-5zpnv" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.158936 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d624139-0daf-4992-9aa9-82305991e2b0-config-data\") pod \"glance-default-external-api-0\" (UID: \"9d624139-0daf-4992-9aa9-82305991e2b0\") " pod="openstack/glance-default-external-api-0" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.158990 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d624139-0daf-4992-9aa9-82305991e2b0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9d624139-0daf-4992-9aa9-82305991e2b0\") " pod="openstack/glance-default-external-api-0" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.159019 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61437724-d73d-4fe5-afbc-b4994d1eda63-scripts\") pod \"ceilometer-0\" (UID: \"61437724-d73d-4fe5-afbc-b4994d1eda63\") " pod="openstack/ceilometer-0" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.159104 4985 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/61437724-d73d-4fe5-afbc-b4994d1eda63-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"61437724-d73d-4fe5-afbc-b4994d1eda63\") " pod="openstack/ceilometer-0" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.159133 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61437724-d73d-4fe5-afbc-b4994d1eda63-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"61437724-d73d-4fe5-afbc-b4994d1eda63\") " pod="openstack/ceilometer-0" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.159156 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgk9l\" (UniqueName: \"kubernetes.io/projected/9d624139-0daf-4992-9aa9-82305991e2b0-kube-api-access-sgk9l\") pod \"glance-default-external-api-0\" (UID: \"9d624139-0daf-4992-9aa9-82305991e2b0\") " pod="openstack/glance-default-external-api-0" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.159180 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/61437724-d73d-4fe5-afbc-b4994d1eda63-run-httpd\") pod \"ceilometer-0\" (UID: \"61437724-d73d-4fe5-afbc-b4994d1eda63\") " pod="openstack/ceilometer-0" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.159208 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"9d624139-0daf-4992-9aa9-82305991e2b0\") " pod="openstack/glance-default-external-api-0" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.159233 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/9d624139-0daf-4992-9aa9-82305991e2b0-logs\") pod \"glance-default-external-api-0\" (UID: \"9d624139-0daf-4992-9aa9-82305991e2b0\") " pod="openstack/glance-default-external-api-0" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.159267 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d624139-0daf-4992-9aa9-82305991e2b0-scripts\") pod \"glance-default-external-api-0\" (UID: \"9d624139-0daf-4992-9aa9-82305991e2b0\") " pod="openstack/glance-default-external-api-0" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.159287 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/61437724-d73d-4fe5-afbc-b4994d1eda63-log-httpd\") pod \"ceilometer-0\" (UID: \"61437724-d73d-4fe5-afbc-b4994d1eda63\") " pod="openstack/ceilometer-0" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.159737 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/61437724-d73d-4fe5-afbc-b4994d1eda63-log-httpd\") pod \"ceilometer-0\" (UID: \"61437724-d73d-4fe5-afbc-b4994d1eda63\") " pod="openstack/ceilometer-0" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.159959 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/61437724-d73d-4fe5-afbc-b4994d1eda63-run-httpd\") pod \"ceilometer-0\" (UID: \"61437724-d73d-4fe5-afbc-b4994d1eda63\") " pod="openstack/ceilometer-0" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.166066 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61437724-d73d-4fe5-afbc-b4994d1eda63-scripts\") pod \"ceilometer-0\" (UID: \"61437724-d73d-4fe5-afbc-b4994d1eda63\") " pod="openstack/ceilometer-0" Jan 27 09:11:53 crc kubenswrapper[4985]: 
I0127 09:11:53.172645 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61437724-d73d-4fe5-afbc-b4994d1eda63-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"61437724-d73d-4fe5-afbc-b4994d1eda63\") " pod="openstack/ceilometer-0" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.189970 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61437724-d73d-4fe5-afbc-b4994d1eda63-config-data\") pod \"ceilometer-0\" (UID: \"61437724-d73d-4fe5-afbc-b4994d1eda63\") " pod="openstack/ceilometer-0" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.191989 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/61437724-d73d-4fe5-afbc-b4994d1eda63-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"61437724-d73d-4fe5-afbc-b4994d1eda63\") " pod="openstack/ceilometer-0" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.203423 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zv7fh\" (UniqueName: \"kubernetes.io/projected/61437724-d73d-4fe5-afbc-b4994d1eda63-kube-api-access-zv7fh\") pod \"ceilometer-0\" (UID: \"61437724-d73d-4fe5-afbc-b4994d1eda63\") " pod="openstack/ceilometer-0" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.204561 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.224703 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-5tmw8" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.242722 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.261434 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfhv7\" (UniqueName: \"kubernetes.io/projected/75b134b4-9d95-4ef4-91c2-e8d9cf5357ee-kube-api-access-gfhv7\") pod \"glance-default-internal-api-0\" (UID: \"75b134b4-9d95-4ef4-91c2-e8d9cf5357ee\") " pod="openstack/glance-default-internal-api-0" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.261477 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"75b134b4-9d95-4ef4-91c2-e8d9cf5357ee\") " pod="openstack/glance-default-internal-api-0" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.261506 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgk9l\" (UniqueName: \"kubernetes.io/projected/9d624139-0daf-4992-9aa9-82305991e2b0-kube-api-access-sgk9l\") pod \"glance-default-external-api-0\" (UID: \"9d624139-0daf-4992-9aa9-82305991e2b0\") " pod="openstack/glance-default-external-api-0" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.261539 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"9d624139-0daf-4992-9aa9-82305991e2b0\") " pod="openstack/glance-default-external-api-0" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.261555 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9d624139-0daf-4992-9aa9-82305991e2b0-logs\") pod \"glance-default-external-api-0\" (UID: \"9d624139-0daf-4992-9aa9-82305991e2b0\") " 
pod="openstack/glance-default-external-api-0" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.261574 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d624139-0daf-4992-9aa9-82305991e2b0-scripts\") pod \"glance-default-external-api-0\" (UID: \"9d624139-0daf-4992-9aa9-82305991e2b0\") " pod="openstack/glance-default-external-api-0" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.261596 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bdc4ad06-4155-49c6-b6aa-e82d8774f903-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6f8cb849-5zpnv\" (UID: \"bdc4ad06-4155-49c6-b6aa-e82d8774f903\") " pod="openstack/dnsmasq-dns-6f6f8cb849-5zpnv" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.261631 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/75b134b4-9d95-4ef4-91c2-e8d9cf5357ee-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"75b134b4-9d95-4ef4-91c2-e8d9cf5357ee\") " pod="openstack/glance-default-internal-api-0" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.261666 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9d624139-0daf-4992-9aa9-82305991e2b0-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"9d624139-0daf-4992-9aa9-82305991e2b0\") " pod="openstack/glance-default-external-api-0" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.261681 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75b134b4-9d95-4ef4-91c2-e8d9cf5357ee-logs\") pod \"glance-default-internal-api-0\" (UID: \"75b134b4-9d95-4ef4-91c2-e8d9cf5357ee\") " pod="openstack/glance-default-internal-api-0" Jan 27 
09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.261700 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgwgj\" (UniqueName: \"kubernetes.io/projected/bdc4ad06-4155-49c6-b6aa-e82d8774f903-kube-api-access-pgwgj\") pod \"dnsmasq-dns-6f6f8cb849-5zpnv\" (UID: \"bdc4ad06-4155-49c6-b6aa-e82d8774f903\") " pod="openstack/dnsmasq-dns-6f6f8cb849-5zpnv" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.261722 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bdc4ad06-4155-49c6-b6aa-e82d8774f903-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6f8cb849-5zpnv\" (UID: \"bdc4ad06-4155-49c6-b6aa-e82d8774f903\") " pod="openstack/dnsmasq-dns-6f6f8cb849-5zpnv" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.261738 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75b134b4-9d95-4ef4-91c2-e8d9cf5357ee-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"75b134b4-9d95-4ef4-91c2-e8d9cf5357ee\") " pod="openstack/glance-default-internal-api-0" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.261759 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75b134b4-9d95-4ef4-91c2-e8d9cf5357ee-config-data\") pod \"glance-default-internal-api-0\" (UID: \"75b134b4-9d95-4ef4-91c2-e8d9cf5357ee\") " pod="openstack/glance-default-internal-api-0" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.261793 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bdc4ad06-4155-49c6-b6aa-e82d8774f903-dns-svc\") pod \"dnsmasq-dns-6f6f8cb849-5zpnv\" (UID: \"bdc4ad06-4155-49c6-b6aa-e82d8774f903\") " pod="openstack/dnsmasq-dns-6f6f8cb849-5zpnv" Jan 27 09:11:53 crc 
kubenswrapper[4985]: I0127 09:11:53.261811 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bdc4ad06-4155-49c6-b6aa-e82d8774f903-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6f8cb849-5zpnv\" (UID: \"bdc4ad06-4155-49c6-b6aa-e82d8774f903\") " pod="openstack/dnsmasq-dns-6f6f8cb849-5zpnv" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.261830 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdc4ad06-4155-49c6-b6aa-e82d8774f903-config\") pod \"dnsmasq-dns-6f6f8cb849-5zpnv\" (UID: \"bdc4ad06-4155-49c6-b6aa-e82d8774f903\") " pod="openstack/dnsmasq-dns-6f6f8cb849-5zpnv" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.261851 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d624139-0daf-4992-9aa9-82305991e2b0-config-data\") pod \"glance-default-external-api-0\" (UID: \"9d624139-0daf-4992-9aa9-82305991e2b0\") " pod="openstack/glance-default-external-api-0" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.261870 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75b134b4-9d95-4ef4-91c2-e8d9cf5357ee-scripts\") pod \"glance-default-internal-api-0\" (UID: \"75b134b4-9d95-4ef4-91c2-e8d9cf5357ee\") " pod="openstack/glance-default-internal-api-0" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.261890 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d624139-0daf-4992-9aa9-82305991e2b0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9d624139-0daf-4992-9aa9-82305991e2b0\") " pod="openstack/glance-default-external-api-0" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.265454 4985 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9d624139-0daf-4992-9aa9-82305991e2b0-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"9d624139-0daf-4992-9aa9-82305991e2b0\") " pod="openstack/glance-default-external-api-0" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.266542 4985 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"9d624139-0daf-4992-9aa9-82305991e2b0\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-external-api-0" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.267227 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9d624139-0daf-4992-9aa9-82305991e2b0-logs\") pod \"glance-default-external-api-0\" (UID: \"9d624139-0daf-4992-9aa9-82305991e2b0\") " pod="openstack/glance-default-external-api-0" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.267917 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bdc4ad06-4155-49c6-b6aa-e82d8774f903-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6f8cb849-5zpnv\" (UID: \"bdc4ad06-4155-49c6-b6aa-e82d8774f903\") " pod="openstack/dnsmasq-dns-6f6f8cb849-5zpnv" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.268183 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdc4ad06-4155-49c6-b6aa-e82d8774f903-config\") pod \"dnsmasq-dns-6f6f8cb849-5zpnv\" (UID: \"bdc4ad06-4155-49c6-b6aa-e82d8774f903\") " pod="openstack/dnsmasq-dns-6f6f8cb849-5zpnv" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.268972 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/bdc4ad06-4155-49c6-b6aa-e82d8774f903-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6f8cb849-5zpnv\" (UID: \"bdc4ad06-4155-49c6-b6aa-e82d8774f903\") " pod="openstack/dnsmasq-dns-6f6f8cb849-5zpnv" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.269679 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bdc4ad06-4155-49c6-b6aa-e82d8774f903-dns-svc\") pod \"dnsmasq-dns-6f6f8cb849-5zpnv\" (UID: \"bdc4ad06-4155-49c6-b6aa-e82d8774f903\") " pod="openstack/dnsmasq-dns-6f6f8cb849-5zpnv" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.270875 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bdc4ad06-4155-49c6-b6aa-e82d8774f903-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6f8cb849-5zpnv\" (UID: \"bdc4ad06-4155-49c6-b6aa-e82d8774f903\") " pod="openstack/dnsmasq-dns-6f6f8cb849-5zpnv" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.270942 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d624139-0daf-4992-9aa9-82305991e2b0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9d624139-0daf-4992-9aa9-82305991e2b0\") " pod="openstack/glance-default-external-api-0" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.271363 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d624139-0daf-4992-9aa9-82305991e2b0-scripts\") pod \"glance-default-external-api-0\" (UID: \"9d624139-0daf-4992-9aa9-82305991e2b0\") " pod="openstack/glance-default-external-api-0" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.276671 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d624139-0daf-4992-9aa9-82305991e2b0-config-data\") pod \"glance-default-external-api-0\" (UID: 
\"9d624139-0daf-4992-9aa9-82305991e2b0\") " pod="openstack/glance-default-external-api-0" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.311405 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgwgj\" (UniqueName: \"kubernetes.io/projected/bdc4ad06-4155-49c6-b6aa-e82d8774f903-kube-api-access-pgwgj\") pod \"dnsmasq-dns-6f6f8cb849-5zpnv\" (UID: \"bdc4ad06-4155-49c6-b6aa-e82d8774f903\") " pod="openstack/dnsmasq-dns-6f6f8cb849-5zpnv" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.324168 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgk9l\" (UniqueName: \"kubernetes.io/projected/9d624139-0daf-4992-9aa9-82305991e2b0-kube-api-access-sgk9l\") pod \"glance-default-external-api-0\" (UID: \"9d624139-0daf-4992-9aa9-82305991e2b0\") " pod="openstack/glance-default-external-api-0" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.363292 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/75b134b4-9d95-4ef4-91c2-e8d9cf5357ee-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"75b134b4-9d95-4ef4-91c2-e8d9cf5357ee\") " pod="openstack/glance-default-internal-api-0" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.363395 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75b134b4-9d95-4ef4-91c2-e8d9cf5357ee-logs\") pod \"glance-default-internal-api-0\" (UID: \"75b134b4-9d95-4ef4-91c2-e8d9cf5357ee\") " pod="openstack/glance-default-internal-api-0" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.363446 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75b134b4-9d95-4ef4-91c2-e8d9cf5357ee-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"75b134b4-9d95-4ef4-91c2-e8d9cf5357ee\") " 
pod="openstack/glance-default-internal-api-0" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.363479 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75b134b4-9d95-4ef4-91c2-e8d9cf5357ee-config-data\") pod \"glance-default-internal-api-0\" (UID: \"75b134b4-9d95-4ef4-91c2-e8d9cf5357ee\") " pod="openstack/glance-default-internal-api-0" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.363577 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75b134b4-9d95-4ef4-91c2-e8d9cf5357ee-scripts\") pod \"glance-default-internal-api-0\" (UID: \"75b134b4-9d95-4ef4-91c2-e8d9cf5357ee\") " pod="openstack/glance-default-internal-api-0" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.363646 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfhv7\" (UniqueName: \"kubernetes.io/projected/75b134b4-9d95-4ef4-91c2-e8d9cf5357ee-kube-api-access-gfhv7\") pod \"glance-default-internal-api-0\" (UID: \"75b134b4-9d95-4ef4-91c2-e8d9cf5357ee\") " pod="openstack/glance-default-internal-api-0" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.363708 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"75b134b4-9d95-4ef4-91c2-e8d9cf5357ee\") " pod="openstack/glance-default-internal-api-0" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.363947 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"9d624139-0daf-4992-9aa9-82305991e2b0\") " pod="openstack/glance-default-external-api-0" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.364077 
4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/75b134b4-9d95-4ef4-91c2-e8d9cf5357ee-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"75b134b4-9d95-4ef4-91c2-e8d9cf5357ee\") " pod="openstack/glance-default-internal-api-0" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.364674 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75b134b4-9d95-4ef4-91c2-e8d9cf5357ee-logs\") pod \"glance-default-internal-api-0\" (UID: \"75b134b4-9d95-4ef4-91c2-e8d9cf5357ee\") " pod="openstack/glance-default-internal-api-0" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.364703 4985 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"75b134b4-9d95-4ef4-91c2-e8d9cf5357ee\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-internal-api-0" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.370426 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75b134b4-9d95-4ef4-91c2-e8d9cf5357ee-scripts\") pod \"glance-default-internal-api-0\" (UID: \"75b134b4-9d95-4ef4-91c2-e8d9cf5357ee\") " pod="openstack/glance-default-internal-api-0" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.377262 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75b134b4-9d95-4ef4-91c2-e8d9cf5357ee-config-data\") pod \"glance-default-internal-api-0\" (UID: \"75b134b4-9d95-4ef4-91c2-e8d9cf5357ee\") " pod="openstack/glance-default-internal-api-0" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.382140 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/75b134b4-9d95-4ef4-91c2-e8d9cf5357ee-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"75b134b4-9d95-4ef4-91c2-e8d9cf5357ee\") " pod="openstack/glance-default-internal-api-0" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.382830 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfhv7\" (UniqueName: \"kubernetes.io/projected/75b134b4-9d95-4ef4-91c2-e8d9cf5357ee-kube-api-access-gfhv7\") pod \"glance-default-internal-api-0\" (UID: \"75b134b4-9d95-4ef4-91c2-e8d9cf5357ee\") " pod="openstack/glance-default-internal-api-0" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.387031 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fdbfbc95f-kkc4g"] Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.395907 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-7kllv"] Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.418590 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"75b134b4-9d95-4ef4-91c2-e8d9cf5357ee\") " pod="openstack/glance-default-internal-api-0" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.523861 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.559939 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f6f8cb849-5zpnv" Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.615395 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.644451 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-8bc58698f-rrrdv"]
Jan 27 09:11:53 crc kubenswrapper[4985]: W0127 09:11:53.665855 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1a55e00_a92c_468e_b440_72254c05314e.slice/crio-c41798b5561c82966c1a38d0576c61eef7bc32a921247e3af44181a7b53e0653 WatchSource:0}: Error finding container c41798b5561c82966c1a38d0576c61eef7bc32a921247e3af44181a7b53e0653: Status 404 returned error can't find the container with id c41798b5561c82966c1a38d0576c61eef7bc32a921247e3af44181a7b53e0653
Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.704498 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-ftb9l"]
Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.841628 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-95tb6"]
Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.858933 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-779ccf4965-4dzg4"]
Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.939189 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8bc58698f-rrrdv" event={"ID":"c1a55e00-a92c-468e-b440-72254c05314e","Type":"ContainerStarted","Data":"c41798b5561c82966c1a38d0576c61eef7bc32a921247e3af44181a7b53e0653"}
Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.945400 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fdbfbc95f-kkc4g" event={"ID":"5ac7374e-66ef-46cc-a255-4a3ab7921683","Type":"ContainerStarted","Data":"a826a5b85135d334a426a1cfa5a1ddad0cada495e4a437d38030af5a7cc178b0"}
Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.947759 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-95tb6" event={"ID":"31214ba8-5f89-4b54-9293-b6cd43c8cbe5","Type":"ContainerStarted","Data":"18165cbf4587c008a4763a3d6053a2ca7948d9ce4b44869703ea8cca85469b5f"}
Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.950654 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-779ccf4965-4dzg4" event={"ID":"14f1ccc0-f3da-4c1b-b0cd-b0c3f1b4f363","Type":"ContainerStarted","Data":"b3e1976e14cbb2f826cafadd1ceafb3a0e01448c55b2fa88a4ed46415c88cb55"}
Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.956082 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-ftb9l" event={"ID":"82de7da8-bbcb-4fbd-b942-23cbe2e1bd8f","Type":"ContainerStarted","Data":"0df5e3ced9c6fd951b7af8631fde3b1451b716e2d987e626b8d30c95e69cfc1f"}
Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.959785 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-7kllv" event={"ID":"26a7c987-a27e-4859-be07-7929331e3614","Type":"ContainerStarted","Data":"337355b3bdead0d68703612bb2fa2fc82d176b8145527224b7347c4621c3d8fd"}
Jan 27 09:11:53 crc kubenswrapper[4985]: I0127 09:11:53.959842 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-74dfc89d77-kt62t" podUID="7f5e2b3c-566f-48ed-974d-b517c4a98596" containerName="dnsmasq-dns" containerID="cri-o://84e9536ae1200a93ccf94064949e4aadd1f0f6e3cbfb370fa9a4b2457d1479cb" gracePeriod=10
Jan 27 09:11:54 crc kubenswrapper[4985]: I0127 09:11:54.037757 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 27 09:11:54 crc kubenswrapper[4985]: W0127 09:11:54.044311 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod61437724_d73d_4fe5_afbc_b4994d1eda63.slice/crio-41812e8b7a47e2f63883954c7a10bd1bf6e1122144e0c15bbf3a4d435bfaba88 WatchSource:0}: Error finding container 41812e8b7a47e2f63883954c7a10bd1bf6e1122144e0c15bbf3a4d435bfaba88: Status 404 returned error can't find the container with id 41812e8b7a47e2f63883954c7a10bd1bf6e1122144e0c15bbf3a4d435bfaba88
Jan 27 09:11:54 crc kubenswrapper[4985]: I0127 09:11:54.114064 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-5tmw8"]
Jan 27 09:11:54 crc kubenswrapper[4985]: W0127 09:11:54.114214 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc2de0653_57ca_4d6b_a8a7_10b39b9c4678.slice/crio-e6f463db7e2badf323d561586ca510b2ea382438d62631ce220993a513abc69c WatchSource:0}: Error finding container e6f463db7e2badf323d561586ca510b2ea382438d62631ce220993a513abc69c: Status 404 returned error can't find the container with id e6f463db7e2badf323d561586ca510b2ea382438d62631ce220993a513abc69c
Jan 27 09:11:54 crc kubenswrapper[4985]: I0127 09:11:54.184437 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f6f8cb849-5zpnv"]
Jan 27 09:11:54 crc kubenswrapper[4985]: I0127 09:11:54.293227 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 27 09:11:54 crc kubenswrapper[4985]: I0127 09:11:54.431073 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 27 09:11:54 crc kubenswrapper[4985]: W0127 09:11:54.431351 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75b134b4_9d95_4ef4_91c2_e8d9cf5357ee.slice/crio-9664d9281a0dbde002ed7e9c175d821f71ce9d167542fbf79ad4406c503e4650 WatchSource:0}: Error finding container 9664d9281a0dbde002ed7e9c175d821f71ce9d167542fbf79ad4406c503e4650: Status 404 returned error can't find the container with id 9664d9281a0dbde002ed7e9c175d821f71ce9d167542fbf79ad4406c503e4650
Jan 27 09:11:54 crc kubenswrapper[4985]: I0127 09:11:54.554637 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-pj5kt"]
Jan 27 09:11:54 crc kubenswrapper[4985]: I0127 09:11:54.557239 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-pj5kt"
Jan 27 09:11:54 crc kubenswrapper[4985]: I0127 09:11:54.565049 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Jan 27 09:11:54 crc kubenswrapper[4985]: I0127 09:11:54.565359 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Jan 27 09:11:54 crc kubenswrapper[4985]: I0127 09:11:54.567006 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-6wfrq"
Jan 27 09:11:54 crc kubenswrapper[4985]: I0127 09:11:54.571240 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-pj5kt"]
Jan 27 09:11:54 crc kubenswrapper[4985]: I0127 09:11:54.739157 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9c4f8a3-0f30-4724-84bd-952a5d5170cb-scripts\") pod \"cinder-db-sync-pj5kt\" (UID: \"a9c4f8a3-0f30-4724-84bd-952a5d5170cb\") " pod="openstack/cinder-db-sync-pj5kt"
Jan 27 09:11:54 crc kubenswrapper[4985]: I0127 09:11:54.739208 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9c4f8a3-0f30-4724-84bd-952a5d5170cb-config-data\") pod \"cinder-db-sync-pj5kt\" (UID: \"a9c4f8a3-0f30-4724-84bd-952a5d5170cb\") " pod="openstack/cinder-db-sync-pj5kt"
Jan 27 09:11:54 crc kubenswrapper[4985]: I0127 09:11:54.739236 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2t2l\" (UniqueName: \"kubernetes.io/projected/a9c4f8a3-0f30-4724-84bd-952a5d5170cb-kube-api-access-m2t2l\") pod \"cinder-db-sync-pj5kt\" (UID: \"a9c4f8a3-0f30-4724-84bd-952a5d5170cb\") " pod="openstack/cinder-db-sync-pj5kt"
Jan 27 09:11:54 crc kubenswrapper[4985]: I0127 09:11:54.739266 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a9c4f8a3-0f30-4724-84bd-952a5d5170cb-etc-machine-id\") pod \"cinder-db-sync-pj5kt\" (UID: \"a9c4f8a3-0f30-4724-84bd-952a5d5170cb\") " pod="openstack/cinder-db-sync-pj5kt"
Jan 27 09:11:54 crc kubenswrapper[4985]: I0127 09:11:54.739288 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9c4f8a3-0f30-4724-84bd-952a5d5170cb-combined-ca-bundle\") pod \"cinder-db-sync-pj5kt\" (UID: \"a9c4f8a3-0f30-4724-84bd-952a5d5170cb\") " pod="openstack/cinder-db-sync-pj5kt"
Jan 27 09:11:54 crc kubenswrapper[4985]: I0127 09:11:54.739342 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a9c4f8a3-0f30-4724-84bd-952a5d5170cb-db-sync-config-data\") pod \"cinder-db-sync-pj5kt\" (UID: \"a9c4f8a3-0f30-4724-84bd-952a5d5170cb\") " pod="openstack/cinder-db-sync-pj5kt"
Jan 27 09:11:54 crc kubenswrapper[4985]: I0127 09:11:54.841959 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a9c4f8a3-0f30-4724-84bd-952a5d5170cb-db-sync-config-data\") pod \"cinder-db-sync-pj5kt\" (UID: \"a9c4f8a3-0f30-4724-84bd-952a5d5170cb\") " pod="openstack/cinder-db-sync-pj5kt"
Jan 27 09:11:54 crc kubenswrapper[4985]: I0127 09:11:54.842090 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9c4f8a3-0f30-4724-84bd-952a5d5170cb-scripts\") pod \"cinder-db-sync-pj5kt\" (UID: \"a9c4f8a3-0f30-4724-84bd-952a5d5170cb\") " pod="openstack/cinder-db-sync-pj5kt"
Jan 27 09:11:54 crc kubenswrapper[4985]: I0127 09:11:54.842128 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9c4f8a3-0f30-4724-84bd-952a5d5170cb-config-data\") pod \"cinder-db-sync-pj5kt\" (UID: \"a9c4f8a3-0f30-4724-84bd-952a5d5170cb\") " pod="openstack/cinder-db-sync-pj5kt"
Jan 27 09:11:54 crc kubenswrapper[4985]: I0127 09:11:54.842161 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2t2l\" (UniqueName: \"kubernetes.io/projected/a9c4f8a3-0f30-4724-84bd-952a5d5170cb-kube-api-access-m2t2l\") pod \"cinder-db-sync-pj5kt\" (UID: \"a9c4f8a3-0f30-4724-84bd-952a5d5170cb\") " pod="openstack/cinder-db-sync-pj5kt"
Jan 27 09:11:54 crc kubenswrapper[4985]: I0127 09:11:54.842490 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a9c4f8a3-0f30-4724-84bd-952a5d5170cb-etc-machine-id\") pod \"cinder-db-sync-pj5kt\" (UID: \"a9c4f8a3-0f30-4724-84bd-952a5d5170cb\") " pod="openstack/cinder-db-sync-pj5kt"
Jan 27 09:11:54 crc kubenswrapper[4985]: I0127 09:11:54.842592 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9c4f8a3-0f30-4724-84bd-952a5d5170cb-combined-ca-bundle\") pod \"cinder-db-sync-pj5kt\" (UID: \"a9c4f8a3-0f30-4724-84bd-952a5d5170cb\") " pod="openstack/cinder-db-sync-pj5kt"
Jan 27 09:11:54 crc kubenswrapper[4985]: I0127 09:11:54.843143 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a9c4f8a3-0f30-4724-84bd-952a5d5170cb-etc-machine-id\") pod \"cinder-db-sync-pj5kt\" (UID: \"a9c4f8a3-0f30-4724-84bd-952a5d5170cb\") " pod="openstack/cinder-db-sync-pj5kt"
Jan 27 09:11:54 crc kubenswrapper[4985]: I0127 09:11:54.849064 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9c4f8a3-0f30-4724-84bd-952a5d5170cb-combined-ca-bundle\") pod \"cinder-db-sync-pj5kt\" (UID: \"a9c4f8a3-0f30-4724-84bd-952a5d5170cb\") " pod="openstack/cinder-db-sync-pj5kt"
Jan 27 09:11:54 crc kubenswrapper[4985]: I0127 09:11:54.849583 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a9c4f8a3-0f30-4724-84bd-952a5d5170cb-db-sync-config-data\") pod \"cinder-db-sync-pj5kt\" (UID: \"a9c4f8a3-0f30-4724-84bd-952a5d5170cb\") " pod="openstack/cinder-db-sync-pj5kt"
Jan 27 09:11:54 crc kubenswrapper[4985]: I0127 09:11:54.850980 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9c4f8a3-0f30-4724-84bd-952a5d5170cb-config-data\") pod \"cinder-db-sync-pj5kt\" (UID: \"a9c4f8a3-0f30-4724-84bd-952a5d5170cb\") " pod="openstack/cinder-db-sync-pj5kt"
Jan 27 09:11:54 crc kubenswrapper[4985]: I0127 09:11:54.852729 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9c4f8a3-0f30-4724-84bd-952a5d5170cb-scripts\") pod \"cinder-db-sync-pj5kt\" (UID: \"a9c4f8a3-0f30-4724-84bd-952a5d5170cb\") " pod="openstack/cinder-db-sync-pj5kt"
Jan 27 09:11:54 crc kubenswrapper[4985]: I0127 09:11:54.865624 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2t2l\" (UniqueName: \"kubernetes.io/projected/a9c4f8a3-0f30-4724-84bd-952a5d5170cb-kube-api-access-m2t2l\") pod \"cinder-db-sync-pj5kt\" (UID: \"a9c4f8a3-0f30-4724-84bd-952a5d5170cb\") " pod="openstack/cinder-db-sync-pj5kt"
Jan 27 09:11:54 crc kubenswrapper[4985]: I0127 09:11:54.903746 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-pj5kt"
Jan 27 09:11:54 crc kubenswrapper[4985]: I0127 09:11:54.984326 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"75b134b4-9d95-4ef4-91c2-e8d9cf5357ee","Type":"ContainerStarted","Data":"9664d9281a0dbde002ed7e9c175d821f71ce9d167542fbf79ad4406c503e4650"}
Jan 27 09:11:54 crc kubenswrapper[4985]: I0127 09:11:54.990404 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-5tmw8" event={"ID":"c2de0653-57ca-4d6b-a8a7-10b39b9c4678","Type":"ContainerStarted","Data":"e6f463db7e2badf323d561586ca510b2ea382438d62631ce220993a513abc69c"}
Jan 27 09:11:55 crc kubenswrapper[4985]: I0127 09:11:55.019349 4985 generic.go:334] "Generic (PLEG): container finished" podID="7f5e2b3c-566f-48ed-974d-b517c4a98596" containerID="84e9536ae1200a93ccf94064949e4aadd1f0f6e3cbfb370fa9a4b2457d1479cb" exitCode=0
Jan 27 09:11:55 crc kubenswrapper[4985]: I0127 09:11:55.019462 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dfc89d77-kt62t" event={"ID":"7f5e2b3c-566f-48ed-974d-b517c4a98596","Type":"ContainerDied","Data":"84e9536ae1200a93ccf94064949e4aadd1f0f6e3cbfb370fa9a4b2457d1479cb"}
Jan 27 09:11:55 crc kubenswrapper[4985]: I0127 09:11:55.021894 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9d624139-0daf-4992-9aa9-82305991e2b0","Type":"ContainerStarted","Data":"9ab79fe3ea926ebea4cc80846fe8c5be79e9a7761fbc5722b62bd6570dff2db5"}
Jan 27 09:11:55 crc kubenswrapper[4985]: I0127 09:11:55.034311 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"61437724-d73d-4fe5-afbc-b4994d1eda63","Type":"ContainerStarted","Data":"41812e8b7a47e2f63883954c7a10bd1bf6e1122144e0c15bbf3a4d435bfaba88"}
Jan 27 09:11:55 crc kubenswrapper[4985]: I0127 09:11:55.052460 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6f8cb849-5zpnv" event={"ID":"bdc4ad06-4155-49c6-b6aa-e82d8774f903","Type":"ContainerStarted","Data":"6b60d257c1ac0842c8b872b99ee076f3c5f80d1dc4ba47b00e57eacab7d834eb"}
Jan 27 09:11:55 crc kubenswrapper[4985]: I0127 09:11:55.052562 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6f8cb849-5zpnv" event={"ID":"bdc4ad06-4155-49c6-b6aa-e82d8774f903","Type":"ContainerStarted","Data":"0b2a180693bf5a0f7a75d207a809852cb9532425b9bbddd07ff2f95a1604328e"}
Jan 27 09:11:55 crc kubenswrapper[4985]: I0127 09:11:55.435586 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-pj5kt"]
Jan 27 09:11:55 crc kubenswrapper[4985]: I0127 09:11:55.547846 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74dfc89d77-kt62t"
Jan 27 09:11:55 crc kubenswrapper[4985]: I0127 09:11:55.680682 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f5e2b3c-566f-48ed-974d-b517c4a98596-ovsdbserver-nb\") pod \"7f5e2b3c-566f-48ed-974d-b517c4a98596\" (UID: \"7f5e2b3c-566f-48ed-974d-b517c4a98596\") "
Jan 27 09:11:55 crc kubenswrapper[4985]: I0127 09:11:55.680988 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f5e2b3c-566f-48ed-974d-b517c4a98596-config\") pod \"7f5e2b3c-566f-48ed-974d-b517c4a98596\" (UID: \"7f5e2b3c-566f-48ed-974d-b517c4a98596\") "
Jan 27 09:11:55 crc kubenswrapper[4985]: I0127 09:11:55.681055 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7f5e2b3c-566f-48ed-974d-b517c4a98596-dns-swift-storage-0\") pod \"7f5e2b3c-566f-48ed-974d-b517c4a98596\" (UID: \"7f5e2b3c-566f-48ed-974d-b517c4a98596\") "
Jan 27 09:11:55 crc kubenswrapper[4985]: I0127 09:11:55.681191 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gmvkd\" (UniqueName: \"kubernetes.io/projected/7f5e2b3c-566f-48ed-974d-b517c4a98596-kube-api-access-gmvkd\") pod \"7f5e2b3c-566f-48ed-974d-b517c4a98596\" (UID: \"7f5e2b3c-566f-48ed-974d-b517c4a98596\") "
Jan 27 09:11:55 crc kubenswrapper[4985]: I0127 09:11:55.681294 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f5e2b3c-566f-48ed-974d-b517c4a98596-ovsdbserver-sb\") pod \"7f5e2b3c-566f-48ed-974d-b517c4a98596\" (UID: \"7f5e2b3c-566f-48ed-974d-b517c4a98596\") "
Jan 27 09:11:55 crc kubenswrapper[4985]: I0127 09:11:55.681382 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f5e2b3c-566f-48ed-974d-b517c4a98596-dns-svc\") pod \"7f5e2b3c-566f-48ed-974d-b517c4a98596\" (UID: \"7f5e2b3c-566f-48ed-974d-b517c4a98596\") "
Jan 27 09:11:55 crc kubenswrapper[4985]: I0127 09:11:55.719710 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 27 09:11:55 crc kubenswrapper[4985]: I0127 09:11:55.742705 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-779ccf4965-4dzg4"]
Jan 27 09:11:55 crc kubenswrapper[4985]: I0127 09:11:55.750753 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f5e2b3c-566f-48ed-974d-b517c4a98596-kube-api-access-gmvkd" (OuterVolumeSpecName: "kube-api-access-gmvkd") pod "7f5e2b3c-566f-48ed-974d-b517c4a98596" (UID: "7f5e2b3c-566f-48ed-974d-b517c4a98596"). InnerVolumeSpecName "kube-api-access-gmvkd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 09:11:55 crc kubenswrapper[4985]: I0127 09:11:55.799215 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gmvkd\" (UniqueName: \"kubernetes.io/projected/7f5e2b3c-566f-48ed-974d-b517c4a98596-kube-api-access-gmvkd\") on node \"crc\" DevicePath \"\""
Jan 27 09:11:55 crc kubenswrapper[4985]: I0127 09:11:55.841846 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-77f8b4b57c-5gfx6"]
Jan 27 09:11:55 crc kubenswrapper[4985]: E0127 09:11:55.842284 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f5e2b3c-566f-48ed-974d-b517c4a98596" containerName="dnsmasq-dns"
Jan 27 09:11:55 crc kubenswrapper[4985]: I0127 09:11:55.842301 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f5e2b3c-566f-48ed-974d-b517c4a98596" containerName="dnsmasq-dns"
Jan 27 09:11:55 crc kubenswrapper[4985]: E0127 09:11:55.842328 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f5e2b3c-566f-48ed-974d-b517c4a98596" containerName="init"
Jan 27 09:11:55 crc kubenswrapper[4985]: I0127 09:11:55.842336 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f5e2b3c-566f-48ed-974d-b517c4a98596" containerName="init"
Jan 27 09:11:55 crc kubenswrapper[4985]: I0127 09:11:55.842544 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f5e2b3c-566f-48ed-974d-b517c4a98596" containerName="dnsmasq-dns"
Jan 27 09:11:55 crc kubenswrapper[4985]: I0127 09:11:55.843469 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-77f8b4b57c-5gfx6"
Jan 27 09:11:55 crc kubenswrapper[4985]: I0127 09:11:55.860695 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f5e2b3c-566f-48ed-974d-b517c4a98596-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7f5e2b3c-566f-48ed-974d-b517c4a98596" (UID: "7f5e2b3c-566f-48ed-974d-b517c4a98596"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 09:11:55 crc kubenswrapper[4985]: I0127 09:11:55.884254 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-77f8b4b57c-5gfx6"]
Jan 27 09:11:55 crc kubenswrapper[4985]: I0127 09:11:55.890094 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f5e2b3c-566f-48ed-974d-b517c4a98596-config" (OuterVolumeSpecName: "config") pod "7f5e2b3c-566f-48ed-974d-b517c4a98596" (UID: "7f5e2b3c-566f-48ed-974d-b517c4a98596"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 09:11:55 crc kubenswrapper[4985]: I0127 09:11:55.893306 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f5e2b3c-566f-48ed-974d-b517c4a98596-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7f5e2b3c-566f-48ed-974d-b517c4a98596" (UID: "7f5e2b3c-566f-48ed-974d-b517c4a98596"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 09:11:55 crc kubenswrapper[4985]: I0127 09:11:55.902625 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qz6tf\" (UniqueName: \"kubernetes.io/projected/78dc6815-3202-4aea-99b0-905363e0ef1e-kube-api-access-qz6tf\") pod \"horizon-77f8b4b57c-5gfx6\" (UID: \"78dc6815-3202-4aea-99b0-905363e0ef1e\") " pod="openstack/horizon-77f8b4b57c-5gfx6"
Jan 27 09:11:55 crc kubenswrapper[4985]: I0127 09:11:55.902691 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/78dc6815-3202-4aea-99b0-905363e0ef1e-config-data\") pod \"horizon-77f8b4b57c-5gfx6\" (UID: \"78dc6815-3202-4aea-99b0-905363e0ef1e\") " pod="openstack/horizon-77f8b4b57c-5gfx6"
Jan 27 09:11:55 crc kubenswrapper[4985]: I0127 09:11:55.902763 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/78dc6815-3202-4aea-99b0-905363e0ef1e-scripts\") pod \"horizon-77f8b4b57c-5gfx6\" (UID: \"78dc6815-3202-4aea-99b0-905363e0ef1e\") " pod="openstack/horizon-77f8b4b57c-5gfx6"
Jan 27 09:11:55 crc kubenswrapper[4985]: I0127 09:11:55.902888 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78dc6815-3202-4aea-99b0-905363e0ef1e-logs\") pod \"horizon-77f8b4b57c-5gfx6\" (UID: \"78dc6815-3202-4aea-99b0-905363e0ef1e\") " pod="openstack/horizon-77f8b4b57c-5gfx6"
Jan 27 09:11:55 crc kubenswrapper[4985]: I0127 09:11:55.902940 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/78dc6815-3202-4aea-99b0-905363e0ef1e-horizon-secret-key\") pod \"horizon-77f8b4b57c-5gfx6\" (UID: \"78dc6815-3202-4aea-99b0-905363e0ef1e\") " pod="openstack/horizon-77f8b4b57c-5gfx6"
Jan 27 09:11:55 crc kubenswrapper[4985]: I0127 09:11:55.903008 4985 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f5e2b3c-566f-48ed-974d-b517c4a98596-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 27 09:11:55 crc kubenswrapper[4985]: I0127 09:11:55.903019 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f5e2b3c-566f-48ed-974d-b517c4a98596-config\") on node \"crc\" DevicePath \"\""
Jan 27 09:11:55 crc kubenswrapper[4985]: I0127 09:11:55.903029 4985 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f5e2b3c-566f-48ed-974d-b517c4a98596-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 27 09:11:55 crc kubenswrapper[4985]: I0127 09:11:55.912090 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f5e2b3c-566f-48ed-974d-b517c4a98596-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7f5e2b3c-566f-48ed-974d-b517c4a98596" (UID: "7f5e2b3c-566f-48ed-974d-b517c4a98596"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 09:11:55 crc kubenswrapper[4985]: I0127 09:11:55.922634 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f5e2b3c-566f-48ed-974d-b517c4a98596-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7f5e2b3c-566f-48ed-974d-b517c4a98596" (UID: "7f5e2b3c-566f-48ed-974d-b517c4a98596"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 09:11:55 crc kubenswrapper[4985]: I0127 09:11:55.960579 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 27 09:11:55 crc kubenswrapper[4985]: I0127 09:11:55.995466 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 27 09:11:56 crc kubenswrapper[4985]: I0127 09:11:56.006626 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qz6tf\" (UniqueName: \"kubernetes.io/projected/78dc6815-3202-4aea-99b0-905363e0ef1e-kube-api-access-qz6tf\") pod \"horizon-77f8b4b57c-5gfx6\" (UID: \"78dc6815-3202-4aea-99b0-905363e0ef1e\") " pod="openstack/horizon-77f8b4b57c-5gfx6"
Jan 27 09:11:56 crc kubenswrapper[4985]: I0127 09:11:56.006703 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/78dc6815-3202-4aea-99b0-905363e0ef1e-config-data\") pod \"horizon-77f8b4b57c-5gfx6\" (UID: \"78dc6815-3202-4aea-99b0-905363e0ef1e\") " pod="openstack/horizon-77f8b4b57c-5gfx6"
Jan 27 09:11:56 crc kubenswrapper[4985]: I0127 09:11:56.006772 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/78dc6815-3202-4aea-99b0-905363e0ef1e-scripts\") pod \"horizon-77f8b4b57c-5gfx6\" (UID: \"78dc6815-3202-4aea-99b0-905363e0ef1e\") " pod="openstack/horizon-77f8b4b57c-5gfx6"
Jan 27 09:11:56 crc kubenswrapper[4985]: I0127 09:11:56.006794 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78dc6815-3202-4aea-99b0-905363e0ef1e-logs\") pod \"horizon-77f8b4b57c-5gfx6\" (UID: \"78dc6815-3202-4aea-99b0-905363e0ef1e\") " pod="openstack/horizon-77f8b4b57c-5gfx6"
Jan 27 09:11:56 crc kubenswrapper[4985]: I0127 09:11:56.006836 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/78dc6815-3202-4aea-99b0-905363e0ef1e-horizon-secret-key\") pod \"horizon-77f8b4b57c-5gfx6\" (UID: \"78dc6815-3202-4aea-99b0-905363e0ef1e\") " pod="openstack/horizon-77f8b4b57c-5gfx6"
Jan 27 09:11:56 crc kubenswrapper[4985]: I0127 09:11:56.006903 4985 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7f5e2b3c-566f-48ed-974d-b517c4a98596-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 27 09:11:56 crc kubenswrapper[4985]: I0127 09:11:56.006914 4985 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f5e2b3c-566f-48ed-974d-b517c4a98596-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 27 09:11:56 crc kubenswrapper[4985]: I0127 09:11:56.019585 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78dc6815-3202-4aea-99b0-905363e0ef1e-logs\") pod \"horizon-77f8b4b57c-5gfx6\" (UID: \"78dc6815-3202-4aea-99b0-905363e0ef1e\") " pod="openstack/horizon-77f8b4b57c-5gfx6"
Jan 27 09:11:56 crc kubenswrapper[4985]: I0127 09:11:56.023588 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/78dc6815-3202-4aea-99b0-905363e0ef1e-config-data\") pod \"horizon-77f8b4b57c-5gfx6\" (UID: \"78dc6815-3202-4aea-99b0-905363e0ef1e\") " pod="openstack/horizon-77f8b4b57c-5gfx6"
Jan 27 09:11:56 crc kubenswrapper[4985]: I0127 09:11:56.024092 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/78dc6815-3202-4aea-99b0-905363e0ef1e-scripts\") pod \"horizon-77f8b4b57c-5gfx6\" (UID: \"78dc6815-3202-4aea-99b0-905363e0ef1e\") " pod="openstack/horizon-77f8b4b57c-5gfx6"
Jan 27 09:11:56 crc kubenswrapper[4985]: I0127 09:11:56.032072 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/78dc6815-3202-4aea-99b0-905363e0ef1e-horizon-secret-key\") pod \"horizon-77f8b4b57c-5gfx6\" (UID: \"78dc6815-3202-4aea-99b0-905363e0ef1e\") " pod="openstack/horizon-77f8b4b57c-5gfx6"
Jan 27 09:11:56 crc kubenswrapper[4985]: I0127 09:11:56.075211 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qz6tf\" (UniqueName: \"kubernetes.io/projected/78dc6815-3202-4aea-99b0-905363e0ef1e-kube-api-access-qz6tf\") pod \"horizon-77f8b4b57c-5gfx6\" (UID: \"78dc6815-3202-4aea-99b0-905363e0ef1e\") " pod="openstack/horizon-77f8b4b57c-5gfx6"
Jan 27 09:11:56 crc kubenswrapper[4985]: I0127 09:11:56.127306 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-ftb9l" event={"ID":"82de7da8-bbcb-4fbd-b942-23cbe2e1bd8f","Type":"ContainerStarted","Data":"7e3000bd0d621e6f2521fa44a6c2e25b1bed217a4934b50a0ab211296390fece"}
Jan 27 09:11:56 crc kubenswrapper[4985]: I0127 09:11:56.154140 4985 generic.go:334] "Generic (PLEG): container finished" podID="bdc4ad06-4155-49c6-b6aa-e82d8774f903" containerID="6b60d257c1ac0842c8b872b99ee076f3c5f80d1dc4ba47b00e57eacab7d834eb" exitCode=0
Jan 27 09:11:56 crc kubenswrapper[4985]: I0127 09:11:56.154202 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6f8cb849-5zpnv" event={"ID":"bdc4ad06-4155-49c6-b6aa-e82d8774f903","Type":"ContainerDied","Data":"6b60d257c1ac0842c8b872b99ee076f3c5f80d1dc4ba47b00e57eacab7d834eb"}
Jan 27 09:11:56 crc kubenswrapper[4985]: I0127 09:11:56.219229 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-77f8b4b57c-5gfx6"
Jan 27 09:11:56 crc kubenswrapper[4985]: I0127 09:11:56.220022 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-7kllv" event={"ID":"26a7c987-a27e-4859-be07-7929331e3614","Type":"ContainerStarted","Data":"62b784fa6f3f9b9a0c3f1e0f989a293f5b6f100551d74bbef6451f711e94cd2f"}
Jan 27 09:11:56 crc kubenswrapper[4985]: I0127 09:11:56.263744 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-ftb9l" podStartSLOduration=4.263721746 podStartE2EDuration="4.263721746s" podCreationTimestamp="2026-01-27 09:11:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:11:56.161120112 +0000 UTC m=+1100.452214943" watchObservedRunningTime="2026-01-27 09:11:56.263721746 +0000 UTC m=+1100.554816587"
Jan 27 09:11:56 crc kubenswrapper[4985]: I0127 09:11:56.265549 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"75b134b4-9d95-4ef4-91c2-e8d9cf5357ee","Type":"ContainerStarted","Data":"941e76e91cf3eb11b26ddd581c58a8373189d923dd5f0b5691ad29da36e4600b"}
Jan 27 09:11:56 crc kubenswrapper[4985]: I0127 09:11:56.276176 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dfc89d77-kt62t" event={"ID":"7f5e2b3c-566f-48ed-974d-b517c4a98596","Type":"ContainerDied","Data":"252aa4e965c2cf82b1d28c04867d8001ab99360758ff2c81396e5e375049430f"}
Jan 27 09:11:56 crc kubenswrapper[4985]: I0127 09:11:56.276255 4985 scope.go:117] "RemoveContainer" containerID="84e9536ae1200a93ccf94064949e4aadd1f0f6e3cbfb370fa9a4b2457d1479cb"
Jan 27 09:11:56 crc kubenswrapper[4985]: I0127 09:11:56.276722 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74dfc89d77-kt62t"
Jan 27 09:11:56 crc kubenswrapper[4985]: I0127 09:11:56.288595 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-7kllv" podStartSLOduration=4.288573598 podStartE2EDuration="4.288573598s" podCreationTimestamp="2026-01-27 09:11:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:11:56.285310708 +0000 UTC m=+1100.576405549" watchObservedRunningTime="2026-01-27 09:11:56.288573598 +0000 UTC m=+1100.579668439"
Jan 27 09:11:56 crc kubenswrapper[4985]: I0127 09:11:56.307850 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-pj5kt" event={"ID":"a9c4f8a3-0f30-4724-84bd-952a5d5170cb","Type":"ContainerStarted","Data":"b6e69775d16d19939bd39a83b98223d080b2c4d88aeec2871d69daae88958025"}
Jan 27 09:11:56 crc kubenswrapper[4985]: I0127 09:11:56.313496 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9d624139-0daf-4992-9aa9-82305991e2b0","Type":"ContainerStarted","Data":"ce55e76e2ed669c1af4dfbbb587a4d77916795dafcb28c1d9099be441a48ae59"}
Jan 27 09:11:56 crc kubenswrapper[4985]: I0127 09:11:56.325530 4985 generic.go:334] "Generic (PLEG): container finished" podID="5ac7374e-66ef-46cc-a255-4a3ab7921683" containerID="63005c52141fa74ef567689f3e00e5527e50eb2f55dd10ece9520a938d565013" exitCode=0
Jan 27 09:11:56 crc kubenswrapper[4985]: I0127 09:11:56.325613 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fdbfbc95f-kkc4g" event={"ID":"5ac7374e-66ef-46cc-a255-4a3ab7921683","Type":"ContainerDied","Data":"63005c52141fa74ef567689f3e00e5527e50eb2f55dd10ece9520a938d565013"}
Jan 27 09:11:56 crc kubenswrapper[4985]: I0127 09:11:56.371053 4985 scope.go:117] "RemoveContainer" containerID="6cec3af15c3483536501e0c26d28c5d28592c97669ec287d4b8e7eb3a5f5e156"
Jan 27 09:11:56 crc kubenswrapper[4985]: I0127 09:11:56.379730 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74dfc89d77-kt62t"]
Jan 27 09:11:56 crc kubenswrapper[4985]: I0127 09:11:56.404597 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74dfc89d77-kt62t"]
Jan 27 09:11:56 crc kubenswrapper[4985]: I0127 09:11:56.502024 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f5e2b3c-566f-48ed-974d-b517c4a98596" path="/var/lib/kubelet/pods/7f5e2b3c-566f-48ed-974d-b517c4a98596/volumes"
Jan 27 09:11:56 crc kubenswrapper[4985]: I0127 09:11:56.840398 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fdbfbc95f-kkc4g"
Jan 27 09:11:56 crc kubenswrapper[4985]: I0127 09:11:56.948530 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-77f8b4b57c-5gfx6"]
Jan 27 09:11:56 crc kubenswrapper[4985]: I0127 09:11:56.969490 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5ac7374e-66ef-46cc-a255-4a3ab7921683-ovsdbserver-nb\") pod \"5ac7374e-66ef-46cc-a255-4a3ab7921683\" (UID: \"5ac7374e-66ef-46cc-a255-4a3ab7921683\") "
Jan 27 09:11:56 crc kubenswrapper[4985]: I0127 09:11:56.981989 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5ac7374e-66ef-46cc-a255-4a3ab7921683-dns-svc\") pod \"5ac7374e-66ef-46cc-a255-4a3ab7921683\" (UID: \"5ac7374e-66ef-46cc-a255-4a3ab7921683\") "
Jan 27 09:11:56 crc kubenswrapper[4985]: I0127 09:11:56.982418 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5ac7374e-66ef-46cc-a255-4a3ab7921683-dns-swift-storage-0\") pod \"5ac7374e-66ef-46cc-a255-4a3ab7921683\" (UID: \"5ac7374e-66ef-46cc-a255-4a3ab7921683\") "
Jan 27 09:11:56 crc kubenswrapper[4985]: I0127 09:11:56.982587 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ssgj\" (UniqueName: \"kubernetes.io/projected/5ac7374e-66ef-46cc-a255-4a3ab7921683-kube-api-access-6ssgj\") pod \"5ac7374e-66ef-46cc-a255-4a3ab7921683\" (UID: \"5ac7374e-66ef-46cc-a255-4a3ab7921683\") "
Jan 27 09:11:56 crc kubenswrapper[4985]: I0127 09:11:56.982666 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ac7374e-66ef-46cc-a255-4a3ab7921683-config\") pod \"5ac7374e-66ef-46cc-a255-4a3ab7921683\" (UID: \"5ac7374e-66ef-46cc-a255-4a3ab7921683\") "
Jan 27 09:11:56 crc kubenswrapper[4985]: I0127 09:11:56.982760 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5ac7374e-66ef-46cc-a255-4a3ab7921683-ovsdbserver-sb\") pod \"5ac7374e-66ef-46cc-a255-4a3ab7921683\" (UID: \"5ac7374e-66ef-46cc-a255-4a3ab7921683\") "
Jan 27 09:11:57 crc kubenswrapper[4985]: I0127 09:11:57.010132 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ac7374e-66ef-46cc-a255-4a3ab7921683-kube-api-access-6ssgj" (OuterVolumeSpecName: "kube-api-access-6ssgj") pod "5ac7374e-66ef-46cc-a255-4a3ab7921683" (UID: "5ac7374e-66ef-46cc-a255-4a3ab7921683"). InnerVolumeSpecName "kube-api-access-6ssgj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 09:11:57 crc kubenswrapper[4985]: I0127 09:11:57.033837 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ac7374e-66ef-46cc-a255-4a3ab7921683-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5ac7374e-66ef-46cc-a255-4a3ab7921683" (UID: "5ac7374e-66ef-46cc-a255-4a3ab7921683"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:11:57 crc kubenswrapper[4985]: I0127 09:11:57.043531 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ac7374e-66ef-46cc-a255-4a3ab7921683-config" (OuterVolumeSpecName: "config") pod "5ac7374e-66ef-46cc-a255-4a3ab7921683" (UID: "5ac7374e-66ef-46cc-a255-4a3ab7921683"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:11:57 crc kubenswrapper[4985]: I0127 09:11:57.046402 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ac7374e-66ef-46cc-a255-4a3ab7921683-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5ac7374e-66ef-46cc-a255-4a3ab7921683" (UID: "5ac7374e-66ef-46cc-a255-4a3ab7921683"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:11:57 crc kubenswrapper[4985]: I0127 09:11:57.051414 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ac7374e-66ef-46cc-a255-4a3ab7921683-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5ac7374e-66ef-46cc-a255-4a3ab7921683" (UID: "5ac7374e-66ef-46cc-a255-4a3ab7921683"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:11:57 crc kubenswrapper[4985]: I0127 09:11:57.064381 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ac7374e-66ef-46cc-a255-4a3ab7921683-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "5ac7374e-66ef-46cc-a255-4a3ab7921683" (UID: "5ac7374e-66ef-46cc-a255-4a3ab7921683"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:11:57 crc kubenswrapper[4985]: I0127 09:11:57.086210 4985 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5ac7374e-66ef-46cc-a255-4a3ab7921683-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 09:11:57 crc kubenswrapper[4985]: I0127 09:11:57.086241 4985 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5ac7374e-66ef-46cc-a255-4a3ab7921683-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 09:11:57 crc kubenswrapper[4985]: I0127 09:11:57.086251 4985 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5ac7374e-66ef-46cc-a255-4a3ab7921683-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 09:11:57 crc kubenswrapper[4985]: I0127 09:11:57.086263 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ssgj\" (UniqueName: \"kubernetes.io/projected/5ac7374e-66ef-46cc-a255-4a3ab7921683-kube-api-access-6ssgj\") on node \"crc\" DevicePath \"\"" Jan 27 09:11:57 crc kubenswrapper[4985]: I0127 09:11:57.086273 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ac7374e-66ef-46cc-a255-4a3ab7921683-config\") on node \"crc\" DevicePath \"\"" Jan 27 09:11:57 crc kubenswrapper[4985]: I0127 09:11:57.086281 4985 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5ac7374e-66ef-46cc-a255-4a3ab7921683-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 09:11:57 crc kubenswrapper[4985]: I0127 09:11:57.353382 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9d624139-0daf-4992-9aa9-82305991e2b0","Type":"ContainerStarted","Data":"7452e0fb0c38f7b99ff62e15947722d9fd8ae91dbbc2613df0ed7356f79d1e3c"} Jan 27 09:11:57 crc 
kubenswrapper[4985]: I0127 09:11:57.353624 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="9d624139-0daf-4992-9aa9-82305991e2b0" containerName="glance-log" containerID="cri-o://ce55e76e2ed669c1af4dfbbb587a4d77916795dafcb28c1d9099be441a48ae59" gracePeriod=30 Jan 27 09:11:57 crc kubenswrapper[4985]: I0127 09:11:57.354337 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="9d624139-0daf-4992-9aa9-82305991e2b0" containerName="glance-httpd" containerID="cri-o://7452e0fb0c38f7b99ff62e15947722d9fd8ae91dbbc2613df0ed7356f79d1e3c" gracePeriod=30 Jan 27 09:11:57 crc kubenswrapper[4985]: I0127 09:11:57.369192 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fdbfbc95f-kkc4g" event={"ID":"5ac7374e-66ef-46cc-a255-4a3ab7921683","Type":"ContainerDied","Data":"a826a5b85135d334a426a1cfa5a1ddad0cada495e4a437d38030af5a7cc178b0"} Jan 27 09:11:57 crc kubenswrapper[4985]: I0127 09:11:57.369279 4985 scope.go:117] "RemoveContainer" containerID="63005c52141fa74ef567689f3e00e5527e50eb2f55dd10ece9520a938d565013" Jan 27 09:11:57 crc kubenswrapper[4985]: I0127 09:11:57.369475 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5fdbfbc95f-kkc4g" Jan 27 09:11:57 crc kubenswrapper[4985]: I0127 09:11:57.389715 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.3896876670000005 podStartE2EDuration="5.389687667s" podCreationTimestamp="2026-01-27 09:11:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:11:57.38364605 +0000 UTC m=+1101.674740901" watchObservedRunningTime="2026-01-27 09:11:57.389687667 +0000 UTC m=+1101.680782508" Jan 27 09:11:57 crc kubenswrapper[4985]: I0127 09:11:57.397143 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6f8cb849-5zpnv" event={"ID":"bdc4ad06-4155-49c6-b6aa-e82d8774f903","Type":"ContainerStarted","Data":"c0fc1056214cb336c7046ec26a7500a41729c3a7889b2d8d02e295878199dea2"} Jan 27 09:11:57 crc kubenswrapper[4985]: I0127 09:11:57.398040 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6f6f8cb849-5zpnv" Jan 27 09:11:57 crc kubenswrapper[4985]: I0127 09:11:57.432229 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"75b134b4-9d95-4ef4-91c2-e8d9cf5357ee","Type":"ContainerStarted","Data":"2a70e9d24c2800aabf93213bd1f8b0ce245aba8ca1a3b1ece27b43a62dffa8d1"} Jan 27 09:11:57 crc kubenswrapper[4985]: I0127 09:11:57.432307 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="75b134b4-9d95-4ef4-91c2-e8d9cf5357ee" containerName="glance-log" containerID="cri-o://941e76e91cf3eb11b26ddd581c58a8373189d923dd5f0b5691ad29da36e4600b" gracePeriod=30 Jan 27 09:11:57 crc kubenswrapper[4985]: I0127 09:11:57.432368 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" 
podUID="75b134b4-9d95-4ef4-91c2-e8d9cf5357ee" containerName="glance-httpd" containerID="cri-o://2a70e9d24c2800aabf93213bd1f8b0ce245aba8ca1a3b1ece27b43a62dffa8d1" gracePeriod=30 Jan 27 09:11:57 crc kubenswrapper[4985]: I0127 09:11:57.459712 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-77f8b4b57c-5gfx6" event={"ID":"78dc6815-3202-4aea-99b0-905363e0ef1e","Type":"ContainerStarted","Data":"1bbe3941aeed534fc6431db008d41f0384b3e672a4250415d13b174ae0445c78"} Jan 27 09:11:57 crc kubenswrapper[4985]: I0127 09:11:57.462791 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6f6f8cb849-5zpnv" podStartSLOduration=5.462760091 podStartE2EDuration="5.462760091s" podCreationTimestamp="2026-01-27 09:11:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:11:57.447753028 +0000 UTC m=+1101.738847889" watchObservedRunningTime="2026-01-27 09:11:57.462760091 +0000 UTC m=+1101.753854932" Jan 27 09:11:57 crc kubenswrapper[4985]: I0127 09:11:57.490648 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.490623715 podStartE2EDuration="5.490623715s" podCreationTimestamp="2026-01-27 09:11:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:11:57.484246999 +0000 UTC m=+1101.775341840" watchObservedRunningTime="2026-01-27 09:11:57.490623715 +0000 UTC m=+1101.781718556" Jan 27 09:11:57 crc kubenswrapper[4985]: I0127 09:11:57.642893 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fdbfbc95f-kkc4g"] Jan 27 09:11:57 crc kubenswrapper[4985]: I0127 09:11:57.649043 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5fdbfbc95f-kkc4g"] Jan 27 09:11:58 crc kubenswrapper[4985]: 
I0127 09:11:58.466620 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ac7374e-66ef-46cc-a255-4a3ab7921683" path="/var/lib/kubelet/pods/5ac7374e-66ef-46cc-a255-4a3ab7921683/volumes" Jan 27 09:11:58 crc kubenswrapper[4985]: I0127 09:11:58.485711 4985 generic.go:334] "Generic (PLEG): container finished" podID="75b134b4-9d95-4ef4-91c2-e8d9cf5357ee" containerID="2a70e9d24c2800aabf93213bd1f8b0ce245aba8ca1a3b1ece27b43a62dffa8d1" exitCode=0 Jan 27 09:11:58 crc kubenswrapper[4985]: I0127 09:11:58.485755 4985 generic.go:334] "Generic (PLEG): container finished" podID="75b134b4-9d95-4ef4-91c2-e8d9cf5357ee" containerID="941e76e91cf3eb11b26ddd581c58a8373189d923dd5f0b5691ad29da36e4600b" exitCode=143 Jan 27 09:11:58 crc kubenswrapper[4985]: I0127 09:11:58.485794 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"75b134b4-9d95-4ef4-91c2-e8d9cf5357ee","Type":"ContainerDied","Data":"2a70e9d24c2800aabf93213bd1f8b0ce245aba8ca1a3b1ece27b43a62dffa8d1"} Jan 27 09:11:58 crc kubenswrapper[4985]: I0127 09:11:58.485820 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"75b134b4-9d95-4ef4-91c2-e8d9cf5357ee","Type":"ContainerDied","Data":"941e76e91cf3eb11b26ddd581c58a8373189d923dd5f0b5691ad29da36e4600b"} Jan 27 09:11:58 crc kubenswrapper[4985]: I0127 09:11:58.488900 4985 generic.go:334] "Generic (PLEG): container finished" podID="9d624139-0daf-4992-9aa9-82305991e2b0" containerID="7452e0fb0c38f7b99ff62e15947722d9fd8ae91dbbc2613df0ed7356f79d1e3c" exitCode=0 Jan 27 09:11:58 crc kubenswrapper[4985]: I0127 09:11:58.488934 4985 generic.go:334] "Generic (PLEG): container finished" podID="9d624139-0daf-4992-9aa9-82305991e2b0" containerID="ce55e76e2ed669c1af4dfbbb587a4d77916795dafcb28c1d9099be441a48ae59" exitCode=143 Jan 27 09:11:58 crc kubenswrapper[4985]: I0127 09:11:58.489163 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-external-api-0" event={"ID":"9d624139-0daf-4992-9aa9-82305991e2b0","Type":"ContainerDied","Data":"7452e0fb0c38f7b99ff62e15947722d9fd8ae91dbbc2613df0ed7356f79d1e3c"} Jan 27 09:11:58 crc kubenswrapper[4985]: I0127 09:11:58.489196 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9d624139-0daf-4992-9aa9-82305991e2b0","Type":"ContainerDied","Data":"ce55e76e2ed669c1af4dfbbb587a4d77916795dafcb28c1d9099be441a48ae59"} Jan 27 09:12:02 crc kubenswrapper[4985]: I0127 09:12:02.277722 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-8bc58698f-rrrdv"] Jan 27 09:12:02 crc kubenswrapper[4985]: I0127 09:12:02.313873 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5c57bbbf74-nrsd9"] Jan 27 09:12:02 crc kubenswrapper[4985]: E0127 09:12:02.314418 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ac7374e-66ef-46cc-a255-4a3ab7921683" containerName="init" Jan 27 09:12:02 crc kubenswrapper[4985]: I0127 09:12:02.314448 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ac7374e-66ef-46cc-a255-4a3ab7921683" containerName="init" Jan 27 09:12:02 crc kubenswrapper[4985]: I0127 09:12:02.314767 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ac7374e-66ef-46cc-a255-4a3ab7921683" containerName="init" Jan 27 09:12:02 crc kubenswrapper[4985]: I0127 09:12:02.315989 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5c57bbbf74-nrsd9" Jan 27 09:12:02 crc kubenswrapper[4985]: I0127 09:12:02.327242 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Jan 27 09:12:02 crc kubenswrapper[4985]: I0127 09:12:02.327303 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5c57bbbf74-nrsd9"] Jan 27 09:12:02 crc kubenswrapper[4985]: I0127 09:12:02.393000 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-77f8b4b57c-5gfx6"] Jan 27 09:12:02 crc kubenswrapper[4985]: I0127 09:12:02.429002 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-69b99cb974-fzls4"] Jan 27 09:12:02 crc kubenswrapper[4985]: I0127 09:12:02.430608 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-69b99cb974-fzls4" Jan 27 09:12:02 crc kubenswrapper[4985]: I0127 09:12:02.437236 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5fbbc8b9-e978-4565-9d19-bd139f2c4df7-logs\") pod \"horizon-5c57bbbf74-nrsd9\" (UID: \"5fbbc8b9-e978-4565-9d19-bd139f2c4df7\") " pod="openstack/horizon-5c57bbbf74-nrsd9" Jan 27 09:12:02 crc kubenswrapper[4985]: I0127 09:12:02.440255 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-69b99cb974-fzls4"] Jan 27 09:12:02 crc kubenswrapper[4985]: I0127 09:12:02.437319 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/5fbbc8b9-e978-4565-9d19-bd139f2c4df7-horizon-tls-certs\") pod \"horizon-5c57bbbf74-nrsd9\" (UID: \"5fbbc8b9-e978-4565-9d19-bd139f2c4df7\") " pod="openstack/horizon-5c57bbbf74-nrsd9" Jan 27 09:12:02 crc kubenswrapper[4985]: I0127 09:12:02.443782 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fbbc8b9-e978-4565-9d19-bd139f2c4df7-combined-ca-bundle\") pod \"horizon-5c57bbbf74-nrsd9\" (UID: \"5fbbc8b9-e978-4565-9d19-bd139f2c4df7\") " pod="openstack/horizon-5c57bbbf74-nrsd9" Jan 27 09:12:02 crc kubenswrapper[4985]: I0127 09:12:02.443968 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5fbbc8b9-e978-4565-9d19-bd139f2c4df7-config-data\") pod \"horizon-5c57bbbf74-nrsd9\" (UID: \"5fbbc8b9-e978-4565-9d19-bd139f2c4df7\") " pod="openstack/horizon-5c57bbbf74-nrsd9" Jan 27 09:12:02 crc kubenswrapper[4985]: I0127 09:12:02.444041 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5fbbc8b9-e978-4565-9d19-bd139f2c4df7-horizon-secret-key\") pod \"horizon-5c57bbbf74-nrsd9\" (UID: \"5fbbc8b9-e978-4565-9d19-bd139f2c4df7\") " pod="openstack/horizon-5c57bbbf74-nrsd9" Jan 27 09:12:02 crc kubenswrapper[4985]: I0127 09:12:02.444265 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnxpm\" (UniqueName: \"kubernetes.io/projected/5fbbc8b9-e978-4565-9d19-bd139f2c4df7-kube-api-access-lnxpm\") pod \"horizon-5c57bbbf74-nrsd9\" (UID: \"5fbbc8b9-e978-4565-9d19-bd139f2c4df7\") " pod="openstack/horizon-5c57bbbf74-nrsd9" Jan 27 09:12:02 crc kubenswrapper[4985]: I0127 09:12:02.444335 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5fbbc8b9-e978-4565-9d19-bd139f2c4df7-scripts\") pod \"horizon-5c57bbbf74-nrsd9\" (UID: \"5fbbc8b9-e978-4565-9d19-bd139f2c4df7\") " pod="openstack/horizon-5c57bbbf74-nrsd9" Jan 27 09:12:02 crc kubenswrapper[4985]: I0127 09:12:02.548407 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5fbbc8b9-e978-4565-9d19-bd139f2c4df7-horizon-secret-key\") pod \"horizon-5c57bbbf74-nrsd9\" (UID: \"5fbbc8b9-e978-4565-9d19-bd139f2c4df7\") " pod="openstack/horizon-5c57bbbf74-nrsd9" Jan 27 09:12:02 crc kubenswrapper[4985]: I0127 09:12:02.553989 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/24f5c0ab-206b-4a03-9e4b-c94feff53f9e-horizon-secret-key\") pod \"horizon-69b99cb974-fzls4\" (UID: \"24f5c0ab-206b-4a03-9e4b-c94feff53f9e\") " pod="openstack/horizon-69b99cb974-fzls4" Jan 27 09:12:02 crc kubenswrapper[4985]: I0127 09:12:02.554081 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/24f5c0ab-206b-4a03-9e4b-c94feff53f9e-scripts\") pod \"horizon-69b99cb974-fzls4\" (UID: \"24f5c0ab-206b-4a03-9e4b-c94feff53f9e\") " pod="openstack/horizon-69b99cb974-fzls4" Jan 27 09:12:02 crc kubenswrapper[4985]: I0127 09:12:02.554172 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnxpm\" (UniqueName: \"kubernetes.io/projected/5fbbc8b9-e978-4565-9d19-bd139f2c4df7-kube-api-access-lnxpm\") pod \"horizon-5c57bbbf74-nrsd9\" (UID: \"5fbbc8b9-e978-4565-9d19-bd139f2c4df7\") " pod="openstack/horizon-5c57bbbf74-nrsd9" Jan 27 09:12:02 crc kubenswrapper[4985]: I0127 09:12:02.554236 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5fbbc8b9-e978-4565-9d19-bd139f2c4df7-scripts\") pod \"horizon-5c57bbbf74-nrsd9\" (UID: \"5fbbc8b9-e978-4565-9d19-bd139f2c4df7\") " pod="openstack/horizon-5c57bbbf74-nrsd9" Jan 27 09:12:02 crc kubenswrapper[4985]: I0127 09:12:02.554270 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/24f5c0ab-206b-4a03-9e4b-c94feff53f9e-combined-ca-bundle\") pod \"horizon-69b99cb974-fzls4\" (UID: \"24f5c0ab-206b-4a03-9e4b-c94feff53f9e\") " pod="openstack/horizon-69b99cb974-fzls4" Jan 27 09:12:02 crc kubenswrapper[4985]: I0127 09:12:02.554436 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5fbbc8b9-e978-4565-9d19-bd139f2c4df7-logs\") pod \"horizon-5c57bbbf74-nrsd9\" (UID: \"5fbbc8b9-e978-4565-9d19-bd139f2c4df7\") " pod="openstack/horizon-5c57bbbf74-nrsd9" Jan 27 09:12:02 crc kubenswrapper[4985]: I0127 09:12:02.554480 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24f5c0ab-206b-4a03-9e4b-c94feff53f9e-logs\") pod \"horizon-69b99cb974-fzls4\" (UID: \"24f5c0ab-206b-4a03-9e4b-c94feff53f9e\") " pod="openstack/horizon-69b99cb974-fzls4" Jan 27 09:12:02 crc kubenswrapper[4985]: I0127 09:12:02.554500 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvmvz\" (UniqueName: \"kubernetes.io/projected/24f5c0ab-206b-4a03-9e4b-c94feff53f9e-kube-api-access-fvmvz\") pod \"horizon-69b99cb974-fzls4\" (UID: \"24f5c0ab-206b-4a03-9e4b-c94feff53f9e\") " pod="openstack/horizon-69b99cb974-fzls4" Jan 27 09:12:02 crc kubenswrapper[4985]: I0127 09:12:02.554535 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/5fbbc8b9-e978-4565-9d19-bd139f2c4df7-horizon-tls-certs\") pod \"horizon-5c57bbbf74-nrsd9\" (UID: \"5fbbc8b9-e978-4565-9d19-bd139f2c4df7\") " pod="openstack/horizon-5c57bbbf74-nrsd9" Jan 27 09:12:02 crc kubenswrapper[4985]: I0127 09:12:02.554612 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/24f5c0ab-206b-4a03-9e4b-c94feff53f9e-config-data\") pod \"horizon-69b99cb974-fzls4\" (UID: \"24f5c0ab-206b-4a03-9e4b-c94feff53f9e\") " pod="openstack/horizon-69b99cb974-fzls4" Jan 27 09:12:02 crc kubenswrapper[4985]: I0127 09:12:02.554724 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fbbc8b9-e978-4565-9d19-bd139f2c4df7-combined-ca-bundle\") pod \"horizon-5c57bbbf74-nrsd9\" (UID: \"5fbbc8b9-e978-4565-9d19-bd139f2c4df7\") " pod="openstack/horizon-5c57bbbf74-nrsd9" Jan 27 09:12:02 crc kubenswrapper[4985]: I0127 09:12:02.554774 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/24f5c0ab-206b-4a03-9e4b-c94feff53f9e-horizon-tls-certs\") pod \"horizon-69b99cb974-fzls4\" (UID: \"24f5c0ab-206b-4a03-9e4b-c94feff53f9e\") " pod="openstack/horizon-69b99cb974-fzls4" Jan 27 09:12:02 crc kubenswrapper[4985]: I0127 09:12:02.554795 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5fbbc8b9-e978-4565-9d19-bd139f2c4df7-config-data\") pod \"horizon-5c57bbbf74-nrsd9\" (UID: \"5fbbc8b9-e978-4565-9d19-bd139f2c4df7\") " pod="openstack/horizon-5c57bbbf74-nrsd9" Jan 27 09:12:02 crc kubenswrapper[4985]: I0127 09:12:02.570433 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5fbbc8b9-e978-4565-9d19-bd139f2c4df7-horizon-secret-key\") pod \"horizon-5c57bbbf74-nrsd9\" (UID: \"5fbbc8b9-e978-4565-9d19-bd139f2c4df7\") " pod="openstack/horizon-5c57bbbf74-nrsd9" Jan 27 09:12:02 crc kubenswrapper[4985]: I0127 09:12:02.571819 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/5fbbc8b9-e978-4565-9d19-bd139f2c4df7-horizon-tls-certs\") pod 
\"horizon-5c57bbbf74-nrsd9\" (UID: \"5fbbc8b9-e978-4565-9d19-bd139f2c4df7\") " pod="openstack/horizon-5c57bbbf74-nrsd9" Jan 27 09:12:02 crc kubenswrapper[4985]: I0127 09:12:02.574816 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5fbbc8b9-e978-4565-9d19-bd139f2c4df7-logs\") pod \"horizon-5c57bbbf74-nrsd9\" (UID: \"5fbbc8b9-e978-4565-9d19-bd139f2c4df7\") " pod="openstack/horizon-5c57bbbf74-nrsd9" Jan 27 09:12:02 crc kubenswrapper[4985]: I0127 09:12:02.575807 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5fbbc8b9-e978-4565-9d19-bd139f2c4df7-scripts\") pod \"horizon-5c57bbbf74-nrsd9\" (UID: \"5fbbc8b9-e978-4565-9d19-bd139f2c4df7\") " pod="openstack/horizon-5c57bbbf74-nrsd9" Jan 27 09:12:02 crc kubenswrapper[4985]: I0127 09:12:02.576848 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fbbc8b9-e978-4565-9d19-bd139f2c4df7-combined-ca-bundle\") pod \"horizon-5c57bbbf74-nrsd9\" (UID: \"5fbbc8b9-e978-4565-9d19-bd139f2c4df7\") " pod="openstack/horizon-5c57bbbf74-nrsd9" Jan 27 09:12:02 crc kubenswrapper[4985]: I0127 09:12:02.578069 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5fbbc8b9-e978-4565-9d19-bd139f2c4df7-config-data\") pod \"horizon-5c57bbbf74-nrsd9\" (UID: \"5fbbc8b9-e978-4565-9d19-bd139f2c4df7\") " pod="openstack/horizon-5c57bbbf74-nrsd9" Jan 27 09:12:02 crc kubenswrapper[4985]: I0127 09:12:02.578082 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnxpm\" (UniqueName: \"kubernetes.io/projected/5fbbc8b9-e978-4565-9d19-bd139f2c4df7-kube-api-access-lnxpm\") pod \"horizon-5c57bbbf74-nrsd9\" (UID: \"5fbbc8b9-e978-4565-9d19-bd139f2c4df7\") " pod="openstack/horizon-5c57bbbf74-nrsd9" Jan 27 09:12:02 crc 
kubenswrapper[4985]: I0127 09:12:02.643873 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5c57bbbf74-nrsd9" Jan 27 09:12:02 crc kubenswrapper[4985]: I0127 09:12:02.657574 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24f5c0ab-206b-4a03-9e4b-c94feff53f9e-combined-ca-bundle\") pod \"horizon-69b99cb974-fzls4\" (UID: \"24f5c0ab-206b-4a03-9e4b-c94feff53f9e\") " pod="openstack/horizon-69b99cb974-fzls4" Jan 27 09:12:02 crc kubenswrapper[4985]: I0127 09:12:02.657726 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24f5c0ab-206b-4a03-9e4b-c94feff53f9e-logs\") pod \"horizon-69b99cb974-fzls4\" (UID: \"24f5c0ab-206b-4a03-9e4b-c94feff53f9e\") " pod="openstack/horizon-69b99cb974-fzls4" Jan 27 09:12:02 crc kubenswrapper[4985]: I0127 09:12:02.657754 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvmvz\" (UniqueName: \"kubernetes.io/projected/24f5c0ab-206b-4a03-9e4b-c94feff53f9e-kube-api-access-fvmvz\") pod \"horizon-69b99cb974-fzls4\" (UID: \"24f5c0ab-206b-4a03-9e4b-c94feff53f9e\") " pod="openstack/horizon-69b99cb974-fzls4" Jan 27 09:12:02 crc kubenswrapper[4985]: I0127 09:12:02.657831 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/24f5c0ab-206b-4a03-9e4b-c94feff53f9e-config-data\") pod \"horizon-69b99cb974-fzls4\" (UID: \"24f5c0ab-206b-4a03-9e4b-c94feff53f9e\") " pod="openstack/horizon-69b99cb974-fzls4" Jan 27 09:12:02 crc kubenswrapper[4985]: I0127 09:12:02.657894 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/24f5c0ab-206b-4a03-9e4b-c94feff53f9e-horizon-tls-certs\") pod \"horizon-69b99cb974-fzls4\" (UID: 
\"24f5c0ab-206b-4a03-9e4b-c94feff53f9e\") " pod="openstack/horizon-69b99cb974-fzls4" Jan 27 09:12:02 crc kubenswrapper[4985]: I0127 09:12:02.657931 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/24f5c0ab-206b-4a03-9e4b-c94feff53f9e-horizon-secret-key\") pod \"horizon-69b99cb974-fzls4\" (UID: \"24f5c0ab-206b-4a03-9e4b-c94feff53f9e\") " pod="openstack/horizon-69b99cb974-fzls4" Jan 27 09:12:02 crc kubenswrapper[4985]: I0127 09:12:02.657957 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/24f5c0ab-206b-4a03-9e4b-c94feff53f9e-scripts\") pod \"horizon-69b99cb974-fzls4\" (UID: \"24f5c0ab-206b-4a03-9e4b-c94feff53f9e\") " pod="openstack/horizon-69b99cb974-fzls4" Jan 27 09:12:02 crc kubenswrapper[4985]: I0127 09:12:02.659639 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/24f5c0ab-206b-4a03-9e4b-c94feff53f9e-scripts\") pod \"horizon-69b99cb974-fzls4\" (UID: \"24f5c0ab-206b-4a03-9e4b-c94feff53f9e\") " pod="openstack/horizon-69b99cb974-fzls4" Jan 27 09:12:02 crc kubenswrapper[4985]: I0127 09:12:02.663077 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24f5c0ab-206b-4a03-9e4b-c94feff53f9e-logs\") pod \"horizon-69b99cb974-fzls4\" (UID: \"24f5c0ab-206b-4a03-9e4b-c94feff53f9e\") " pod="openstack/horizon-69b99cb974-fzls4" Jan 27 09:12:02 crc kubenswrapper[4985]: I0127 09:12:02.664100 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/24f5c0ab-206b-4a03-9e4b-c94feff53f9e-config-data\") pod \"horizon-69b99cb974-fzls4\" (UID: \"24f5c0ab-206b-4a03-9e4b-c94feff53f9e\") " pod="openstack/horizon-69b99cb974-fzls4" Jan 27 09:12:02 crc kubenswrapper[4985]: I0127 09:12:02.672922 4985 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/24f5c0ab-206b-4a03-9e4b-c94feff53f9e-horizon-tls-certs\") pod \"horizon-69b99cb974-fzls4\" (UID: \"24f5c0ab-206b-4a03-9e4b-c94feff53f9e\") " pod="openstack/horizon-69b99cb974-fzls4" Jan 27 09:12:02 crc kubenswrapper[4985]: I0127 09:12:02.673085 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/24f5c0ab-206b-4a03-9e4b-c94feff53f9e-horizon-secret-key\") pod \"horizon-69b99cb974-fzls4\" (UID: \"24f5c0ab-206b-4a03-9e4b-c94feff53f9e\") " pod="openstack/horizon-69b99cb974-fzls4" Jan 27 09:12:02 crc kubenswrapper[4985]: I0127 09:12:02.681288 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24f5c0ab-206b-4a03-9e4b-c94feff53f9e-combined-ca-bundle\") pod \"horizon-69b99cb974-fzls4\" (UID: \"24f5c0ab-206b-4a03-9e4b-c94feff53f9e\") " pod="openstack/horizon-69b99cb974-fzls4" Jan 27 09:12:02 crc kubenswrapper[4985]: I0127 09:12:02.685289 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvmvz\" (UniqueName: \"kubernetes.io/projected/24f5c0ab-206b-4a03-9e4b-c94feff53f9e-kube-api-access-fvmvz\") pod \"horizon-69b99cb974-fzls4\" (UID: \"24f5c0ab-206b-4a03-9e4b-c94feff53f9e\") " pod="openstack/horizon-69b99cb974-fzls4" Jan 27 09:12:02 crc kubenswrapper[4985]: I0127 09:12:02.763132 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-69b99cb974-fzls4" Jan 27 09:12:03 crc kubenswrapper[4985]: I0127 09:12:03.562280 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6f6f8cb849-5zpnv" Jan 27 09:12:03 crc kubenswrapper[4985]: I0127 09:12:03.622719 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67fdf7998c-l6c45"] Jan 27 09:12:03 crc kubenswrapper[4985]: I0127 09:12:03.625976 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-67fdf7998c-l6c45" podUID="dd86fe7b-977d-481c-bf72-c651287d4ca9" containerName="dnsmasq-dns" containerID="cri-o://4aa54a66c75ee82da6dcdbdbf76f293799fd6714929d63fbd73fc6d57a111a10" gracePeriod=10 Jan 27 09:12:04 crc kubenswrapper[4985]: I0127 09:12:04.606176 4985 generic.go:334] "Generic (PLEG): container finished" podID="dd86fe7b-977d-481c-bf72-c651287d4ca9" containerID="4aa54a66c75ee82da6dcdbdbf76f293799fd6714929d63fbd73fc6d57a111a10" exitCode=0 Jan 27 09:12:04 crc kubenswrapper[4985]: I0127 09:12:04.606670 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67fdf7998c-l6c45" event={"ID":"dd86fe7b-977d-481c-bf72-c651287d4ca9","Type":"ContainerDied","Data":"4aa54a66c75ee82da6dcdbdbf76f293799fd6714929d63fbd73fc6d57a111a10"} Jan 27 09:12:04 crc kubenswrapper[4985]: I0127 09:12:04.609772 4985 generic.go:334] "Generic (PLEG): container finished" podID="26a7c987-a27e-4859-be07-7929331e3614" containerID="62b784fa6f3f9b9a0c3f1e0f989a293f5b6f100551d74bbef6451f711e94cd2f" exitCode=0 Jan 27 09:12:04 crc kubenswrapper[4985]: I0127 09:12:04.609822 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-7kllv" event={"ID":"26a7c987-a27e-4859-be07-7929331e3614","Type":"ContainerDied","Data":"62b784fa6f3f9b9a0c3f1e0f989a293f5b6f100551d74bbef6451f711e94cd2f"} Jan 27 09:12:08 crc kubenswrapper[4985]: I0127 09:12:08.593961 4985 prober.go:107] "Probe 
failed" probeType="Readiness" pod="openstack/dnsmasq-dns-67fdf7998c-l6c45" podUID="dd86fe7b-977d-481c-bf72-c651287d4ca9" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.112:5353: connect: connection refused" Jan 27 09:12:11 crc kubenswrapper[4985]: E0127 09:12:11.390698 4985 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api@sha256:33f4e5f7a715d48482ec46a42267ea992fa268585303c4f1bd3cbea072a6348b" Jan 27 09:12:11 crc kubenswrapper[4985]: E0127 09:12:11.392205 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api@sha256:33f4e5f7a715d48482ec46a42267ea992fa268585303c4f1bd3cbea072a6348b,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,
SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-px878,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-5tmw8_openstack(c2de0653-57ca-4d6b-a8a7-10b39b9c4678): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 09:12:11 crc kubenswrapper[4985]: E0127 09:12:11.393716 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-5tmw8" podUID="c2de0653-57ca-4d6b-a8a7-10b39b9c4678" Jan 27 09:12:11 crc kubenswrapper[4985]: I0127 09:12:11.490205 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-7kllv" Jan 27 09:12:11 crc kubenswrapper[4985]: I0127 09:12:11.606583 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/26a7c987-a27e-4859-be07-7929331e3614-fernet-keys\") pod \"26a7c987-a27e-4859-be07-7929331e3614\" (UID: \"26a7c987-a27e-4859-be07-7929331e3614\") " Jan 27 09:12:11 crc kubenswrapper[4985]: I0127 09:12:11.606974 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26a7c987-a27e-4859-be07-7929331e3614-combined-ca-bundle\") pod \"26a7c987-a27e-4859-be07-7929331e3614\" (UID: \"26a7c987-a27e-4859-be07-7929331e3614\") " Jan 27 09:12:11 crc kubenswrapper[4985]: I0127 09:12:11.607005 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26a7c987-a27e-4859-be07-7929331e3614-config-data\") pod \"26a7c987-a27e-4859-be07-7929331e3614\" (UID: \"26a7c987-a27e-4859-be07-7929331e3614\") " Jan 27 09:12:11 crc kubenswrapper[4985]: I0127 09:12:11.607063 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26a7c987-a27e-4859-be07-7929331e3614-scripts\") pod \"26a7c987-a27e-4859-be07-7929331e3614\" (UID: \"26a7c987-a27e-4859-be07-7929331e3614\") " Jan 27 09:12:11 crc kubenswrapper[4985]: I0127 09:12:11.607116 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/26a7c987-a27e-4859-be07-7929331e3614-credential-keys\") pod \"26a7c987-a27e-4859-be07-7929331e3614\" (UID: \"26a7c987-a27e-4859-be07-7929331e3614\") " Jan 27 09:12:11 crc kubenswrapper[4985]: I0127 09:12:11.607271 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-btbnc\" (UniqueName: 
\"kubernetes.io/projected/26a7c987-a27e-4859-be07-7929331e3614-kube-api-access-btbnc\") pod \"26a7c987-a27e-4859-be07-7929331e3614\" (UID: \"26a7c987-a27e-4859-be07-7929331e3614\") " Jan 27 09:12:11 crc kubenswrapper[4985]: I0127 09:12:11.619248 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26a7c987-a27e-4859-be07-7929331e3614-kube-api-access-btbnc" (OuterVolumeSpecName: "kube-api-access-btbnc") pod "26a7c987-a27e-4859-be07-7929331e3614" (UID: "26a7c987-a27e-4859-be07-7929331e3614"). InnerVolumeSpecName "kube-api-access-btbnc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:12:11 crc kubenswrapper[4985]: I0127 09:12:11.626426 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26a7c987-a27e-4859-be07-7929331e3614-scripts" (OuterVolumeSpecName: "scripts") pod "26a7c987-a27e-4859-be07-7929331e3614" (UID: "26a7c987-a27e-4859-be07-7929331e3614"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:12:11 crc kubenswrapper[4985]: I0127 09:12:11.627723 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26a7c987-a27e-4859-be07-7929331e3614-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "26a7c987-a27e-4859-be07-7929331e3614" (UID: "26a7c987-a27e-4859-be07-7929331e3614"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:12:11 crc kubenswrapper[4985]: I0127 09:12:11.638341 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26a7c987-a27e-4859-be07-7929331e3614-config-data" (OuterVolumeSpecName: "config-data") pod "26a7c987-a27e-4859-be07-7929331e3614" (UID: "26a7c987-a27e-4859-be07-7929331e3614"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:12:11 crc kubenswrapper[4985]: I0127 09:12:11.639969 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26a7c987-a27e-4859-be07-7929331e3614-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "26a7c987-a27e-4859-be07-7929331e3614" (UID: "26a7c987-a27e-4859-be07-7929331e3614"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:12:11 crc kubenswrapper[4985]: I0127 09:12:11.642257 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26a7c987-a27e-4859-be07-7929331e3614-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "26a7c987-a27e-4859-be07-7929331e3614" (UID: "26a7c987-a27e-4859-be07-7929331e3614"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:12:11 crc kubenswrapper[4985]: I0127 09:12:11.688669 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-7kllv" event={"ID":"26a7c987-a27e-4859-be07-7929331e3614","Type":"ContainerDied","Data":"337355b3bdead0d68703612bb2fa2fc82d176b8145527224b7347c4621c3d8fd"} Jan 27 09:12:11 crc kubenswrapper[4985]: I0127 09:12:11.688740 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="337355b3bdead0d68703612bb2fa2fc82d176b8145527224b7347c4621c3d8fd" Jan 27 09:12:11 crc kubenswrapper[4985]: I0127 09:12:11.689296 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-7kllv" Jan 27 09:12:11 crc kubenswrapper[4985]: E0127 09:12:11.690541 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api@sha256:33f4e5f7a715d48482ec46a42267ea992fa268585303c4f1bd3cbea072a6348b\\\"\"" pod="openstack/placement-db-sync-5tmw8" podUID="c2de0653-57ca-4d6b-a8a7-10b39b9c4678" Jan 27 09:12:11 crc kubenswrapper[4985]: I0127 09:12:11.709911 4985 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/26a7c987-a27e-4859-be07-7929331e3614-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 27 09:12:11 crc kubenswrapper[4985]: I0127 09:12:11.710066 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26a7c987-a27e-4859-be07-7929331e3614-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:12:11 crc kubenswrapper[4985]: I0127 09:12:11.710083 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26a7c987-a27e-4859-be07-7929331e3614-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 09:12:11 crc kubenswrapper[4985]: I0127 09:12:11.710091 4985 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26a7c987-a27e-4859-be07-7929331e3614-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 09:12:11 crc kubenswrapper[4985]: I0127 09:12:11.710099 4985 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/26a7c987-a27e-4859-be07-7929331e3614-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 27 09:12:11 crc kubenswrapper[4985]: I0127 09:12:11.710111 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-btbnc\" (UniqueName: 
\"kubernetes.io/projected/26a7c987-a27e-4859-be07-7929331e3614-kube-api-access-btbnc\") on node \"crc\" DevicePath \"\"" Jan 27 09:12:11 crc kubenswrapper[4985]: E0127 09:12:11.969419 4985 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod26a7c987_a27e_4859_be07_7929331e3614.slice/crio-337355b3bdead0d68703612bb2fa2fc82d176b8145527224b7347c4621c3d8fd\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod26a7c987_a27e_4859_be07_7929331e3614.slice\": RecentStats: unable to find data in memory cache]" Jan 27 09:12:12 crc kubenswrapper[4985]: I0127 09:12:12.618301 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-7kllv"] Jan 27 09:12:12 crc kubenswrapper[4985]: I0127 09:12:12.629529 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-7kllv"] Jan 27 09:12:12 crc kubenswrapper[4985]: I0127 09:12:12.697027 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-pj7x6"] Jan 27 09:12:12 crc kubenswrapper[4985]: E0127 09:12:12.697935 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26a7c987-a27e-4859-be07-7929331e3614" containerName="keystone-bootstrap" Jan 27 09:12:12 crc kubenswrapper[4985]: I0127 09:12:12.698029 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="26a7c987-a27e-4859-be07-7929331e3614" containerName="keystone-bootstrap" Jan 27 09:12:12 crc kubenswrapper[4985]: I0127 09:12:12.698285 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="26a7c987-a27e-4859-be07-7929331e3614" containerName="keystone-bootstrap" Jan 27 09:12:12 crc kubenswrapper[4985]: I0127 09:12:12.699060 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-pj7x6" Jan 27 09:12:12 crc kubenswrapper[4985]: I0127 09:12:12.704317 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 27 09:12:12 crc kubenswrapper[4985]: I0127 09:12:12.704915 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 27 09:12:12 crc kubenswrapper[4985]: I0127 09:12:12.705909 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 27 09:12:12 crc kubenswrapper[4985]: I0127 09:12:12.706077 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-xr2ld" Jan 27 09:12:12 crc kubenswrapper[4985]: I0127 09:12:12.712905 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 27 09:12:12 crc kubenswrapper[4985]: I0127 09:12:12.717982 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-pj7x6"] Jan 27 09:12:12 crc kubenswrapper[4985]: I0127 09:12:12.737764 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42a714c1-196c-4f83-b457-83847e9e97a6-scripts\") pod \"keystone-bootstrap-pj7x6\" (UID: \"42a714c1-196c-4f83-b457-83847e9e97a6\") " pod="openstack/keystone-bootstrap-pj7x6" Jan 27 09:12:12 crc kubenswrapper[4985]: I0127 09:12:12.737835 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/42a714c1-196c-4f83-b457-83847e9e97a6-fernet-keys\") pod \"keystone-bootstrap-pj7x6\" (UID: \"42a714c1-196c-4f83-b457-83847e9e97a6\") " pod="openstack/keystone-bootstrap-pj7x6" Jan 27 09:12:12 crc kubenswrapper[4985]: I0127 09:12:12.737897 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-zvdkf\" (UniqueName: \"kubernetes.io/projected/42a714c1-196c-4f83-b457-83847e9e97a6-kube-api-access-zvdkf\") pod \"keystone-bootstrap-pj7x6\" (UID: \"42a714c1-196c-4f83-b457-83847e9e97a6\") " pod="openstack/keystone-bootstrap-pj7x6" Jan 27 09:12:12 crc kubenswrapper[4985]: I0127 09:12:12.737936 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42a714c1-196c-4f83-b457-83847e9e97a6-combined-ca-bundle\") pod \"keystone-bootstrap-pj7x6\" (UID: \"42a714c1-196c-4f83-b457-83847e9e97a6\") " pod="openstack/keystone-bootstrap-pj7x6" Jan 27 09:12:12 crc kubenswrapper[4985]: I0127 09:12:12.737960 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42a714c1-196c-4f83-b457-83847e9e97a6-config-data\") pod \"keystone-bootstrap-pj7x6\" (UID: \"42a714c1-196c-4f83-b457-83847e9e97a6\") " pod="openstack/keystone-bootstrap-pj7x6" Jan 27 09:12:12 crc kubenswrapper[4985]: I0127 09:12:12.737998 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/42a714c1-196c-4f83-b457-83847e9e97a6-credential-keys\") pod \"keystone-bootstrap-pj7x6\" (UID: \"42a714c1-196c-4f83-b457-83847e9e97a6\") " pod="openstack/keystone-bootstrap-pj7x6" Jan 27 09:12:12 crc kubenswrapper[4985]: I0127 09:12:12.839201 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42a714c1-196c-4f83-b457-83847e9e97a6-scripts\") pod \"keystone-bootstrap-pj7x6\" (UID: \"42a714c1-196c-4f83-b457-83847e9e97a6\") " pod="openstack/keystone-bootstrap-pj7x6" Jan 27 09:12:12 crc kubenswrapper[4985]: I0127 09:12:12.839263 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/42a714c1-196c-4f83-b457-83847e9e97a6-fernet-keys\") pod \"keystone-bootstrap-pj7x6\" (UID: \"42a714c1-196c-4f83-b457-83847e9e97a6\") " pod="openstack/keystone-bootstrap-pj7x6" Jan 27 09:12:12 crc kubenswrapper[4985]: I0127 09:12:12.839317 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvdkf\" (UniqueName: \"kubernetes.io/projected/42a714c1-196c-4f83-b457-83847e9e97a6-kube-api-access-zvdkf\") pod \"keystone-bootstrap-pj7x6\" (UID: \"42a714c1-196c-4f83-b457-83847e9e97a6\") " pod="openstack/keystone-bootstrap-pj7x6" Jan 27 09:12:12 crc kubenswrapper[4985]: I0127 09:12:12.839354 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42a714c1-196c-4f83-b457-83847e9e97a6-combined-ca-bundle\") pod \"keystone-bootstrap-pj7x6\" (UID: \"42a714c1-196c-4f83-b457-83847e9e97a6\") " pod="openstack/keystone-bootstrap-pj7x6" Jan 27 09:12:12 crc kubenswrapper[4985]: I0127 09:12:12.839593 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42a714c1-196c-4f83-b457-83847e9e97a6-config-data\") pod \"keystone-bootstrap-pj7x6\" (UID: \"42a714c1-196c-4f83-b457-83847e9e97a6\") " pod="openstack/keystone-bootstrap-pj7x6" Jan 27 09:12:12 crc kubenswrapper[4985]: I0127 09:12:12.839641 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/42a714c1-196c-4f83-b457-83847e9e97a6-credential-keys\") pod \"keystone-bootstrap-pj7x6\" (UID: \"42a714c1-196c-4f83-b457-83847e9e97a6\") " pod="openstack/keystone-bootstrap-pj7x6" Jan 27 09:12:12 crc kubenswrapper[4985]: I0127 09:12:12.921124 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42a714c1-196c-4f83-b457-83847e9e97a6-scripts\") pod \"keystone-bootstrap-pj7x6\" 
(UID: \"42a714c1-196c-4f83-b457-83847e9e97a6\") " pod="openstack/keystone-bootstrap-pj7x6" Jan 27 09:12:12 crc kubenswrapper[4985]: I0127 09:12:12.922087 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/42a714c1-196c-4f83-b457-83847e9e97a6-fernet-keys\") pod \"keystone-bootstrap-pj7x6\" (UID: \"42a714c1-196c-4f83-b457-83847e9e97a6\") " pod="openstack/keystone-bootstrap-pj7x6" Jan 27 09:12:12 crc kubenswrapper[4985]: I0127 09:12:12.921364 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42a714c1-196c-4f83-b457-83847e9e97a6-combined-ca-bundle\") pod \"keystone-bootstrap-pj7x6\" (UID: \"42a714c1-196c-4f83-b457-83847e9e97a6\") " pod="openstack/keystone-bootstrap-pj7x6" Jan 27 09:12:12 crc kubenswrapper[4985]: I0127 09:12:12.921461 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42a714c1-196c-4f83-b457-83847e9e97a6-config-data\") pod \"keystone-bootstrap-pj7x6\" (UID: \"42a714c1-196c-4f83-b457-83847e9e97a6\") " pod="openstack/keystone-bootstrap-pj7x6" Jan 27 09:12:12 crc kubenswrapper[4985]: I0127 09:12:12.922063 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvdkf\" (UniqueName: \"kubernetes.io/projected/42a714c1-196c-4f83-b457-83847e9e97a6-kube-api-access-zvdkf\") pod \"keystone-bootstrap-pj7x6\" (UID: \"42a714c1-196c-4f83-b457-83847e9e97a6\") " pod="openstack/keystone-bootstrap-pj7x6" Jan 27 09:12:12 crc kubenswrapper[4985]: I0127 09:12:12.921991 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/42a714c1-196c-4f83-b457-83847e9e97a6-credential-keys\") pod \"keystone-bootstrap-pj7x6\" (UID: \"42a714c1-196c-4f83-b457-83847e9e97a6\") " pod="openstack/keystone-bootstrap-pj7x6" Jan 27 09:12:13 crc kubenswrapper[4985]: 
I0127 09:12:13.024312 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-pj7x6" Jan 27 09:12:14 crc kubenswrapper[4985]: I0127 09:12:14.465694 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26a7c987-a27e-4859-be07-7929331e3614" path="/var/lib/kubelet/pods/26a7c987-a27e-4859-be07-7929331e3614/volumes" Jan 27 09:12:18 crc kubenswrapper[4985]: I0127 09:12:18.594011 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-67fdf7998c-l6c45" podUID="dd86fe7b-977d-481c-bf72-c651287d4ca9" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.112:5353: i/o timeout" Jan 27 09:12:19 crc kubenswrapper[4985]: I0127 09:12:19.787529 4985 generic.go:334] "Generic (PLEG): container finished" podID="82de7da8-bbcb-4fbd-b942-23cbe2e1bd8f" containerID="7e3000bd0d621e6f2521fa44a6c2e25b1bed217a4934b50a0ab211296390fece" exitCode=0 Jan 27 09:12:19 crc kubenswrapper[4985]: I0127 09:12:19.787563 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-ftb9l" event={"ID":"82de7da8-bbcb-4fbd-b942-23cbe2e1bd8f","Type":"ContainerDied","Data":"7e3000bd0d621e6f2521fa44a6c2e25b1bed217a4934b50a0ab211296390fece"} Jan 27 09:12:23 crc kubenswrapper[4985]: I0127 09:12:23.256754 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 09:12:23 crc kubenswrapper[4985]: I0127 09:12:23.269318 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 09:12:23 crc kubenswrapper[4985]: I0127 09:12:23.393332 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9d624139-0daf-4992-9aa9-82305991e2b0-logs\") pod \"9d624139-0daf-4992-9aa9-82305991e2b0\" (UID: \"9d624139-0daf-4992-9aa9-82305991e2b0\") " Jan 27 09:12:23 crc kubenswrapper[4985]: I0127 09:12:23.393384 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"9d624139-0daf-4992-9aa9-82305991e2b0\" (UID: \"9d624139-0daf-4992-9aa9-82305991e2b0\") " Jan 27 09:12:23 crc kubenswrapper[4985]: I0127 09:12:23.393417 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d624139-0daf-4992-9aa9-82305991e2b0-config-data\") pod \"9d624139-0daf-4992-9aa9-82305991e2b0\" (UID: \"9d624139-0daf-4992-9aa9-82305991e2b0\") " Jan 27 09:12:23 crc kubenswrapper[4985]: I0127 09:12:23.393446 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sgk9l\" (UniqueName: \"kubernetes.io/projected/9d624139-0daf-4992-9aa9-82305991e2b0-kube-api-access-sgk9l\") pod \"9d624139-0daf-4992-9aa9-82305991e2b0\" (UID: \"9d624139-0daf-4992-9aa9-82305991e2b0\") " Jan 27 09:12:23 crc kubenswrapper[4985]: I0127 09:12:23.393463 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75b134b4-9d95-4ef4-91c2-e8d9cf5357ee-config-data\") pod \"75b134b4-9d95-4ef4-91c2-e8d9cf5357ee\" (UID: \"75b134b4-9d95-4ef4-91c2-e8d9cf5357ee\") " Jan 27 09:12:23 crc kubenswrapper[4985]: I0127 09:12:23.393558 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9d624139-0daf-4992-9aa9-82305991e2b0-combined-ca-bundle\") pod \"9d624139-0daf-4992-9aa9-82305991e2b0\" (UID: \"9d624139-0daf-4992-9aa9-82305991e2b0\") " Jan 27 09:12:23 crc kubenswrapper[4985]: I0127 09:12:23.393580 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"75b134b4-9d95-4ef4-91c2-e8d9cf5357ee\" (UID: \"75b134b4-9d95-4ef4-91c2-e8d9cf5357ee\") " Jan 27 09:12:23 crc kubenswrapper[4985]: I0127 09:12:23.393605 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d624139-0daf-4992-9aa9-82305991e2b0-scripts\") pod \"9d624139-0daf-4992-9aa9-82305991e2b0\" (UID: \"9d624139-0daf-4992-9aa9-82305991e2b0\") " Jan 27 09:12:23 crc kubenswrapper[4985]: I0127 09:12:23.393657 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/75b134b4-9d95-4ef4-91c2-e8d9cf5357ee-httpd-run\") pod \"75b134b4-9d95-4ef4-91c2-e8d9cf5357ee\" (UID: \"75b134b4-9d95-4ef4-91c2-e8d9cf5357ee\") " Jan 27 09:12:23 crc kubenswrapper[4985]: I0127 09:12:23.393682 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75b134b4-9d95-4ef4-91c2-e8d9cf5357ee-logs\") pod \"75b134b4-9d95-4ef4-91c2-e8d9cf5357ee\" (UID: \"75b134b4-9d95-4ef4-91c2-e8d9cf5357ee\") " Jan 27 09:12:23 crc kubenswrapper[4985]: I0127 09:12:23.393723 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75b134b4-9d95-4ef4-91c2-e8d9cf5357ee-combined-ca-bundle\") pod \"75b134b4-9d95-4ef4-91c2-e8d9cf5357ee\" (UID: \"75b134b4-9d95-4ef4-91c2-e8d9cf5357ee\") " Jan 27 09:12:23 crc kubenswrapper[4985]: I0127 09:12:23.393743 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9d624139-0daf-4992-9aa9-82305991e2b0-httpd-run\") pod \"9d624139-0daf-4992-9aa9-82305991e2b0\" (UID: \"9d624139-0daf-4992-9aa9-82305991e2b0\") " Jan 27 09:12:23 crc kubenswrapper[4985]: I0127 09:12:23.393764 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gfhv7\" (UniqueName: \"kubernetes.io/projected/75b134b4-9d95-4ef4-91c2-e8d9cf5357ee-kube-api-access-gfhv7\") pod \"75b134b4-9d95-4ef4-91c2-e8d9cf5357ee\" (UID: \"75b134b4-9d95-4ef4-91c2-e8d9cf5357ee\") " Jan 27 09:12:23 crc kubenswrapper[4985]: I0127 09:12:23.393782 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75b134b4-9d95-4ef4-91c2-e8d9cf5357ee-scripts\") pod \"75b134b4-9d95-4ef4-91c2-e8d9cf5357ee\" (UID: \"75b134b4-9d95-4ef4-91c2-e8d9cf5357ee\") " Jan 27 09:12:23 crc kubenswrapper[4985]: I0127 09:12:23.394845 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75b134b4-9d95-4ef4-91c2-e8d9cf5357ee-logs" (OuterVolumeSpecName: "logs") pod "75b134b4-9d95-4ef4-91c2-e8d9cf5357ee" (UID: "75b134b4-9d95-4ef4-91c2-e8d9cf5357ee"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:12:23 crc kubenswrapper[4985]: I0127 09:12:23.395462 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75b134b4-9d95-4ef4-91c2-e8d9cf5357ee-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "75b134b4-9d95-4ef4-91c2-e8d9cf5357ee" (UID: "75b134b4-9d95-4ef4-91c2-e8d9cf5357ee"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:12:23 crc kubenswrapper[4985]: I0127 09:12:23.395743 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d624139-0daf-4992-9aa9-82305991e2b0-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "9d624139-0daf-4992-9aa9-82305991e2b0" (UID: "9d624139-0daf-4992-9aa9-82305991e2b0"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:12:23 crc kubenswrapper[4985]: I0127 09:12:23.396112 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d624139-0daf-4992-9aa9-82305991e2b0-logs" (OuterVolumeSpecName: "logs") pod "9d624139-0daf-4992-9aa9-82305991e2b0" (UID: "9d624139-0daf-4992-9aa9-82305991e2b0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:12:23 crc kubenswrapper[4985]: I0127 09:12:23.400908 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75b134b4-9d95-4ef4-91c2-e8d9cf5357ee-scripts" (OuterVolumeSpecName: "scripts") pod "75b134b4-9d95-4ef4-91c2-e8d9cf5357ee" (UID: "75b134b4-9d95-4ef4-91c2-e8d9cf5357ee"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:12:23 crc kubenswrapper[4985]: I0127 09:12:23.401270 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "75b134b4-9d95-4ef4-91c2-e8d9cf5357ee" (UID: "75b134b4-9d95-4ef4-91c2-e8d9cf5357ee"). InnerVolumeSpecName "local-storage11-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 27 09:12:23 crc kubenswrapper[4985]: I0127 09:12:23.402543 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "9d624139-0daf-4992-9aa9-82305991e2b0" (UID: "9d624139-0daf-4992-9aa9-82305991e2b0"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 27 09:12:23 crc kubenswrapper[4985]: I0127 09:12:23.402957 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75b134b4-9d95-4ef4-91c2-e8d9cf5357ee-kube-api-access-gfhv7" (OuterVolumeSpecName: "kube-api-access-gfhv7") pod "75b134b4-9d95-4ef4-91c2-e8d9cf5357ee" (UID: "75b134b4-9d95-4ef4-91c2-e8d9cf5357ee"). InnerVolumeSpecName "kube-api-access-gfhv7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:12:23 crc kubenswrapper[4985]: I0127 09:12:23.414783 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d624139-0daf-4992-9aa9-82305991e2b0-scripts" (OuterVolumeSpecName: "scripts") pod "9d624139-0daf-4992-9aa9-82305991e2b0" (UID: "9d624139-0daf-4992-9aa9-82305991e2b0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:12:23 crc kubenswrapper[4985]: I0127 09:12:23.414828 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d624139-0daf-4992-9aa9-82305991e2b0-kube-api-access-sgk9l" (OuterVolumeSpecName: "kube-api-access-sgk9l") pod "9d624139-0daf-4992-9aa9-82305991e2b0" (UID: "9d624139-0daf-4992-9aa9-82305991e2b0"). InnerVolumeSpecName "kube-api-access-sgk9l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:12:23 crc kubenswrapper[4985]: I0127 09:12:23.431099 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d624139-0daf-4992-9aa9-82305991e2b0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9d624139-0daf-4992-9aa9-82305991e2b0" (UID: "9d624139-0daf-4992-9aa9-82305991e2b0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:12:23 crc kubenswrapper[4985]: I0127 09:12:23.451977 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75b134b4-9d95-4ef4-91c2-e8d9cf5357ee-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "75b134b4-9d95-4ef4-91c2-e8d9cf5357ee" (UID: "75b134b4-9d95-4ef4-91c2-e8d9cf5357ee"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:12:23 crc kubenswrapper[4985]: I0127 09:12:23.464417 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d624139-0daf-4992-9aa9-82305991e2b0-config-data" (OuterVolumeSpecName: "config-data") pod "9d624139-0daf-4992-9aa9-82305991e2b0" (UID: "9d624139-0daf-4992-9aa9-82305991e2b0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:12:23 crc kubenswrapper[4985]: I0127 09:12:23.465061 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75b134b4-9d95-4ef4-91c2-e8d9cf5357ee-config-data" (OuterVolumeSpecName: "config-data") pod "75b134b4-9d95-4ef4-91c2-e8d9cf5357ee" (UID: "75b134b4-9d95-4ef4-91c2-e8d9cf5357ee"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:12:23 crc kubenswrapper[4985]: I0127 09:12:23.496630 4985 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9d624139-0daf-4992-9aa9-82305991e2b0-logs\") on node \"crc\" DevicePath \"\"" Jan 27 09:12:23 crc kubenswrapper[4985]: I0127 09:12:23.496688 4985 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Jan 27 09:12:23 crc kubenswrapper[4985]: I0127 09:12:23.496705 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d624139-0daf-4992-9aa9-82305991e2b0-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 09:12:23 crc kubenswrapper[4985]: I0127 09:12:23.496717 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sgk9l\" (UniqueName: \"kubernetes.io/projected/9d624139-0daf-4992-9aa9-82305991e2b0-kube-api-access-sgk9l\") on node \"crc\" DevicePath \"\"" Jan 27 09:12:23 crc kubenswrapper[4985]: I0127 09:12:23.496729 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75b134b4-9d95-4ef4-91c2-e8d9cf5357ee-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 09:12:23 crc kubenswrapper[4985]: I0127 09:12:23.496736 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d624139-0daf-4992-9aa9-82305991e2b0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:12:23 crc kubenswrapper[4985]: I0127 09:12:23.496750 4985 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Jan 27 09:12:23 crc kubenswrapper[4985]: I0127 09:12:23.497591 4985 reconciler_common.go:293] "Volume detached for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d624139-0daf-4992-9aa9-82305991e2b0-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 09:12:23 crc kubenswrapper[4985]: I0127 09:12:23.497872 4985 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/75b134b4-9d95-4ef4-91c2-e8d9cf5357ee-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 27 09:12:23 crc kubenswrapper[4985]: I0127 09:12:23.497884 4985 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75b134b4-9d95-4ef4-91c2-e8d9cf5357ee-logs\") on node \"crc\" DevicePath \"\"" Jan 27 09:12:23 crc kubenswrapper[4985]: I0127 09:12:23.497894 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75b134b4-9d95-4ef4-91c2-e8d9cf5357ee-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:12:23 crc kubenswrapper[4985]: I0127 09:12:23.497904 4985 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9d624139-0daf-4992-9aa9-82305991e2b0-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 27 09:12:23 crc kubenswrapper[4985]: I0127 09:12:23.497915 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gfhv7\" (UniqueName: \"kubernetes.io/projected/75b134b4-9d95-4ef4-91c2-e8d9cf5357ee-kube-api-access-gfhv7\") on node \"crc\" DevicePath \"\"" Jan 27 09:12:23 crc kubenswrapper[4985]: I0127 09:12:23.497924 4985 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75b134b4-9d95-4ef4-91c2-e8d9cf5357ee-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 09:12:23 crc kubenswrapper[4985]: I0127 09:12:23.515920 4985 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Jan 27 09:12:23 crc kubenswrapper[4985]: I0127 09:12:23.520805 
4985 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Jan 27 09:12:23 crc kubenswrapper[4985]: I0127 09:12:23.595418 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-67fdf7998c-l6c45" podUID="dd86fe7b-977d-481c-bf72-c651287d4ca9" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.112:5353: i/o timeout" Jan 27 09:12:23 crc kubenswrapper[4985]: I0127 09:12:23.595593 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-67fdf7998c-l6c45" Jan 27 09:12:23 crc kubenswrapper[4985]: I0127 09:12:23.599243 4985 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Jan 27 09:12:23 crc kubenswrapper[4985]: I0127 09:12:23.599268 4985 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Jan 27 09:12:23 crc kubenswrapper[4985]: E0127 09:12:23.735613 4985 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central@sha256:5a548c25fe3d02f7a042cb0a6d28fc8039a34c4a3b3d07aadda4aba3a926e777" Jan 27 09:12:23 crc kubenswrapper[4985]: E0127 09:12:23.735992 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central@sha256:5a548c25fe3d02f7a042cb0a6d28fc8039a34c4a3b3d07aadda4aba3a926e777,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n99h7ch98h5ddh58ch667h8ch585h65ch654h56dh557hcbh96h5b4h589h665h657h54h7ch65fh5d5h56ch565hch598h5c7h676h95h657h646h585q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zv7fh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(61437724-d73d-4fe5-afbc-b4994d1eda63): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 09:12:23 crc kubenswrapper[4985]: I0127 09:12:23.838058 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9d624139-0daf-4992-9aa9-82305991e2b0","Type":"ContainerDied","Data":"9ab79fe3ea926ebea4cc80846fe8c5be79e9a7761fbc5722b62bd6570dff2db5"} Jan 27 09:12:23 crc kubenswrapper[4985]: I0127 09:12:23.838154 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 09:12:23 crc kubenswrapper[4985]: I0127 09:12:23.838350 4985 scope.go:117] "RemoveContainer" containerID="7452e0fb0c38f7b99ff62e15947722d9fd8ae91dbbc2613df0ed7356f79d1e3c" Jan 27 09:12:23 crc kubenswrapper[4985]: I0127 09:12:23.839841 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"75b134b4-9d95-4ef4-91c2-e8d9cf5357ee","Type":"ContainerDied","Data":"9664d9281a0dbde002ed7e9c175d821f71ce9d167542fbf79ad4406c503e4650"} Jan 27 09:12:23 crc kubenswrapper[4985]: I0127 09:12:23.839940 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 09:12:23 crc kubenswrapper[4985]: I0127 09:12:23.896293 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 09:12:23 crc kubenswrapper[4985]: I0127 09:12:23.923860 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 09:12:23 crc kubenswrapper[4985]: I0127 09:12:23.940030 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 09:12:23 crc kubenswrapper[4985]: I0127 09:12:23.949308 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 09:12:23 crc kubenswrapper[4985]: I0127 09:12:23.962448 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 09:12:23 crc kubenswrapper[4985]: E0127 09:12:23.962875 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d624139-0daf-4992-9aa9-82305991e2b0" containerName="glance-log" Jan 27 09:12:23 crc kubenswrapper[4985]: I0127 09:12:23.962900 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d624139-0daf-4992-9aa9-82305991e2b0" containerName="glance-log" Jan 27 09:12:23 crc 
kubenswrapper[4985]: E0127 09:12:23.962913 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75b134b4-9d95-4ef4-91c2-e8d9cf5357ee" containerName="glance-log" Jan 27 09:12:23 crc kubenswrapper[4985]: I0127 09:12:23.962920 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="75b134b4-9d95-4ef4-91c2-e8d9cf5357ee" containerName="glance-log" Jan 27 09:12:23 crc kubenswrapper[4985]: E0127 09:12:23.962929 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d624139-0daf-4992-9aa9-82305991e2b0" containerName="glance-httpd" Jan 27 09:12:23 crc kubenswrapper[4985]: I0127 09:12:23.962937 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d624139-0daf-4992-9aa9-82305991e2b0" containerName="glance-httpd" Jan 27 09:12:23 crc kubenswrapper[4985]: E0127 09:12:23.962971 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75b134b4-9d95-4ef4-91c2-e8d9cf5357ee" containerName="glance-httpd" Jan 27 09:12:23 crc kubenswrapper[4985]: I0127 09:12:23.962977 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="75b134b4-9d95-4ef4-91c2-e8d9cf5357ee" containerName="glance-httpd" Jan 27 09:12:23 crc kubenswrapper[4985]: I0127 09:12:23.963155 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d624139-0daf-4992-9aa9-82305991e2b0" containerName="glance-httpd" Jan 27 09:12:23 crc kubenswrapper[4985]: I0127 09:12:23.963171 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d624139-0daf-4992-9aa9-82305991e2b0" containerName="glance-log" Jan 27 09:12:23 crc kubenswrapper[4985]: I0127 09:12:23.963188 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="75b134b4-9d95-4ef4-91c2-e8d9cf5357ee" containerName="glance-httpd" Jan 27 09:12:23 crc kubenswrapper[4985]: I0127 09:12:23.963198 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="75b134b4-9d95-4ef4-91c2-e8d9cf5357ee" containerName="glance-log" Jan 27 09:12:23 crc kubenswrapper[4985]: I0127 
09:12:23.964216 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 09:12:23 crc kubenswrapper[4985]: I0127 09:12:23.966066 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-tq9nh" Jan 27 09:12:23 crc kubenswrapper[4985]: I0127 09:12:23.966562 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 27 09:12:23 crc kubenswrapper[4985]: I0127 09:12:23.966955 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 27 09:12:23 crc kubenswrapper[4985]: I0127 09:12:23.969413 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 27 09:12:23 crc kubenswrapper[4985]: I0127 09:12:23.974596 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 09:12:23 crc kubenswrapper[4985]: I0127 09:12:23.976495 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 09:12:23 crc kubenswrapper[4985]: I0127 09:12:23.978623 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 27 09:12:23 crc kubenswrapper[4985]: I0127 09:12:23.978810 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 27 09:12:23 crc kubenswrapper[4985]: I0127 09:12:23.987288 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 09:12:23 crc kubenswrapper[4985]: I0127 09:12:23.998411 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 09:12:24 crc kubenswrapper[4985]: I0127 09:12:24.109057 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bf8a06c-1e18-40f8-bcde-5996d4f80767-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1bf8a06c-1e18-40f8-bcde-5996d4f80767\") " pod="openstack/glance-default-internal-api-0" Jan 27 09:12:24 crc kubenswrapper[4985]: I0127 09:12:24.109120 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8131140c-f2fe-4495-8db7-d4ca6c2712a5-logs\") pod \"glance-default-external-api-0\" (UID: \"8131140c-f2fe-4495-8db7-d4ca6c2712a5\") " pod="openstack/glance-default-external-api-0" Jan 27 09:12:24 crc kubenswrapper[4985]: I0127 09:12:24.109159 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1bf8a06c-1e18-40f8-bcde-5996d4f80767-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1bf8a06c-1e18-40f8-bcde-5996d4f80767\") " pod="openstack/glance-default-internal-api-0" Jan 27 09:12:24 crc 
kubenswrapper[4985]: I0127 09:12:24.109351 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bf8a06c-1e18-40f8-bcde-5996d4f80767-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1bf8a06c-1e18-40f8-bcde-5996d4f80767\") " pod="openstack/glance-default-internal-api-0" Jan 27 09:12:24 crc kubenswrapper[4985]: I0127 09:12:24.109501 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1bf8a06c-1e18-40f8-bcde-5996d4f80767-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1bf8a06c-1e18-40f8-bcde-5996d4f80767\") " pod="openstack/glance-default-internal-api-0" Jan 27 09:12:24 crc kubenswrapper[4985]: I0127 09:12:24.109551 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1bf8a06c-1e18-40f8-bcde-5996d4f80767-logs\") pod \"glance-default-internal-api-0\" (UID: \"1bf8a06c-1e18-40f8-bcde-5996d4f80767\") " pod="openstack/glance-default-internal-api-0" Jan 27 09:12:24 crc kubenswrapper[4985]: I0127 09:12:24.109578 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8131140c-f2fe-4495-8db7-d4ca6c2712a5-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"8131140c-f2fe-4495-8db7-d4ca6c2712a5\") " pod="openstack/glance-default-external-api-0" Jan 27 09:12:24 crc kubenswrapper[4985]: I0127 09:12:24.109658 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njg7f\" (UniqueName: \"kubernetes.io/projected/8131140c-f2fe-4495-8db7-d4ca6c2712a5-kube-api-access-njg7f\") pod \"glance-default-external-api-0\" (UID: \"8131140c-f2fe-4495-8db7-d4ca6c2712a5\") " pod="openstack/glance-default-external-api-0" 
Jan 27 09:12:24 crc kubenswrapper[4985]: I0127 09:12:24.109685 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1bf8a06c-1e18-40f8-bcde-5996d4f80767-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"1bf8a06c-1e18-40f8-bcde-5996d4f80767\") " pod="openstack/glance-default-internal-api-0" Jan 27 09:12:24 crc kubenswrapper[4985]: I0127 09:12:24.109731 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8131140c-f2fe-4495-8db7-d4ca6c2712a5-scripts\") pod \"glance-default-external-api-0\" (UID: \"8131140c-f2fe-4495-8db7-d4ca6c2712a5\") " pod="openstack/glance-default-external-api-0" Jan 27 09:12:24 crc kubenswrapper[4985]: I0127 09:12:24.109764 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"1bf8a06c-1e18-40f8-bcde-5996d4f80767\") " pod="openstack/glance-default-internal-api-0" Jan 27 09:12:24 crc kubenswrapper[4985]: I0127 09:12:24.109792 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"8131140c-f2fe-4495-8db7-d4ca6c2712a5\") " pod="openstack/glance-default-external-api-0" Jan 27 09:12:24 crc kubenswrapper[4985]: I0127 09:12:24.109815 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8131140c-f2fe-4495-8db7-d4ca6c2712a5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8131140c-f2fe-4495-8db7-d4ca6c2712a5\") " pod="openstack/glance-default-external-api-0" Jan 27 09:12:24 crc 
kubenswrapper[4985]: I0127 09:12:24.109861 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8131140c-f2fe-4495-8db7-d4ca6c2712a5-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8131140c-f2fe-4495-8db7-d4ca6c2712a5\") " pod="openstack/glance-default-external-api-0" Jan 27 09:12:24 crc kubenswrapper[4985]: I0127 09:12:24.110112 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rmw8\" (UniqueName: \"kubernetes.io/projected/1bf8a06c-1e18-40f8-bcde-5996d4f80767-kube-api-access-4rmw8\") pod \"glance-default-internal-api-0\" (UID: \"1bf8a06c-1e18-40f8-bcde-5996d4f80767\") " pod="openstack/glance-default-internal-api-0" Jan 27 09:12:24 crc kubenswrapper[4985]: I0127 09:12:24.110160 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8131140c-f2fe-4495-8db7-d4ca6c2712a5-config-data\") pod \"glance-default-external-api-0\" (UID: \"8131140c-f2fe-4495-8db7-d4ca6c2712a5\") " pod="openstack/glance-default-external-api-0" Jan 27 09:12:24 crc kubenswrapper[4985]: I0127 09:12:24.212418 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1bf8a06c-1e18-40f8-bcde-5996d4f80767-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1bf8a06c-1e18-40f8-bcde-5996d4f80767\") " pod="openstack/glance-default-internal-api-0" Jan 27 09:12:24 crc kubenswrapper[4985]: I0127 09:12:24.212474 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bf8a06c-1e18-40f8-bcde-5996d4f80767-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1bf8a06c-1e18-40f8-bcde-5996d4f80767\") " pod="openstack/glance-default-internal-api-0" Jan 27 09:12:24 
crc kubenswrapper[4985]: I0127 09:12:24.212499 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1bf8a06c-1e18-40f8-bcde-5996d4f80767-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1bf8a06c-1e18-40f8-bcde-5996d4f80767\") " pod="openstack/glance-default-internal-api-0" Jan 27 09:12:24 crc kubenswrapper[4985]: I0127 09:12:24.212537 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1bf8a06c-1e18-40f8-bcde-5996d4f80767-logs\") pod \"glance-default-internal-api-0\" (UID: \"1bf8a06c-1e18-40f8-bcde-5996d4f80767\") " pod="openstack/glance-default-internal-api-0" Jan 27 09:12:24 crc kubenswrapper[4985]: I0127 09:12:24.212564 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8131140c-f2fe-4495-8db7-d4ca6c2712a5-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"8131140c-f2fe-4495-8db7-d4ca6c2712a5\") " pod="openstack/glance-default-external-api-0" Jan 27 09:12:24 crc kubenswrapper[4985]: I0127 09:12:24.212599 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njg7f\" (UniqueName: \"kubernetes.io/projected/8131140c-f2fe-4495-8db7-d4ca6c2712a5-kube-api-access-njg7f\") pod \"glance-default-external-api-0\" (UID: \"8131140c-f2fe-4495-8db7-d4ca6c2712a5\") " pod="openstack/glance-default-external-api-0" Jan 27 09:12:24 crc kubenswrapper[4985]: I0127 09:12:24.212625 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1bf8a06c-1e18-40f8-bcde-5996d4f80767-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"1bf8a06c-1e18-40f8-bcde-5996d4f80767\") " pod="openstack/glance-default-internal-api-0" Jan 27 09:12:24 crc kubenswrapper[4985]: I0127 09:12:24.212678 4985 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8131140c-f2fe-4495-8db7-d4ca6c2712a5-scripts\") pod \"glance-default-external-api-0\" (UID: \"8131140c-f2fe-4495-8db7-d4ca6c2712a5\") " pod="openstack/glance-default-external-api-0" Jan 27 09:12:24 crc kubenswrapper[4985]: I0127 09:12:24.212713 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"1bf8a06c-1e18-40f8-bcde-5996d4f80767\") " pod="openstack/glance-default-internal-api-0" Jan 27 09:12:24 crc kubenswrapper[4985]: I0127 09:12:24.212745 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"8131140c-f2fe-4495-8db7-d4ca6c2712a5\") " pod="openstack/glance-default-external-api-0" Jan 27 09:12:24 crc kubenswrapper[4985]: I0127 09:12:24.212765 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8131140c-f2fe-4495-8db7-d4ca6c2712a5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8131140c-f2fe-4495-8db7-d4ca6c2712a5\") " pod="openstack/glance-default-external-api-0" Jan 27 09:12:24 crc kubenswrapper[4985]: I0127 09:12:24.212813 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8131140c-f2fe-4495-8db7-d4ca6c2712a5-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8131140c-f2fe-4495-8db7-d4ca6c2712a5\") " pod="openstack/glance-default-external-api-0" Jan 27 09:12:24 crc kubenswrapper[4985]: I0127 09:12:24.212897 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rmw8\" (UniqueName: 
\"kubernetes.io/projected/1bf8a06c-1e18-40f8-bcde-5996d4f80767-kube-api-access-4rmw8\") pod \"glance-default-internal-api-0\" (UID: \"1bf8a06c-1e18-40f8-bcde-5996d4f80767\") " pod="openstack/glance-default-internal-api-0" Jan 27 09:12:24 crc kubenswrapper[4985]: I0127 09:12:24.212922 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8131140c-f2fe-4495-8db7-d4ca6c2712a5-config-data\") pod \"glance-default-external-api-0\" (UID: \"8131140c-f2fe-4495-8db7-d4ca6c2712a5\") " pod="openstack/glance-default-external-api-0" Jan 27 09:12:24 crc kubenswrapper[4985]: I0127 09:12:24.212967 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bf8a06c-1e18-40f8-bcde-5996d4f80767-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1bf8a06c-1e18-40f8-bcde-5996d4f80767\") " pod="openstack/glance-default-internal-api-0" Jan 27 09:12:24 crc kubenswrapper[4985]: I0127 09:12:24.212998 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8131140c-f2fe-4495-8db7-d4ca6c2712a5-logs\") pod \"glance-default-external-api-0\" (UID: \"8131140c-f2fe-4495-8db7-d4ca6c2712a5\") " pod="openstack/glance-default-external-api-0" Jan 27 09:12:24 crc kubenswrapper[4985]: I0127 09:12:24.213608 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8131140c-f2fe-4495-8db7-d4ca6c2712a5-logs\") pod \"glance-default-external-api-0\" (UID: \"8131140c-f2fe-4495-8db7-d4ca6c2712a5\") " pod="openstack/glance-default-external-api-0" Jan 27 09:12:24 crc kubenswrapper[4985]: I0127 09:12:24.214073 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1bf8a06c-1e18-40f8-bcde-5996d4f80767-httpd-run\") pod 
\"glance-default-internal-api-0\" (UID: \"1bf8a06c-1e18-40f8-bcde-5996d4f80767\") " pod="openstack/glance-default-internal-api-0" Jan 27 09:12:24 crc kubenswrapper[4985]: I0127 09:12:24.214786 4985 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"1bf8a06c-1e18-40f8-bcde-5996d4f80767\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-internal-api-0" Jan 27 09:12:24 crc kubenswrapper[4985]: I0127 09:12:24.217079 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1bf8a06c-1e18-40f8-bcde-5996d4f80767-logs\") pod \"glance-default-internal-api-0\" (UID: \"1bf8a06c-1e18-40f8-bcde-5996d4f80767\") " pod="openstack/glance-default-internal-api-0" Jan 27 09:12:24 crc kubenswrapper[4985]: I0127 09:12:24.217103 4985 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"8131140c-f2fe-4495-8db7-d4ca6c2712a5\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-external-api-0" Jan 27 09:12:24 crc kubenswrapper[4985]: I0127 09:12:24.217350 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8131140c-f2fe-4495-8db7-d4ca6c2712a5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8131140c-f2fe-4495-8db7-d4ca6c2712a5\") " pod="openstack/glance-default-external-api-0" Jan 27 09:12:24 crc kubenswrapper[4985]: I0127 09:12:24.219829 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8131140c-f2fe-4495-8db7-d4ca6c2712a5-scripts\") pod \"glance-default-external-api-0\" (UID: \"8131140c-f2fe-4495-8db7-d4ca6c2712a5\") " 
pod="openstack/glance-default-external-api-0" Jan 27 09:12:24 crc kubenswrapper[4985]: I0127 09:12:24.221013 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1bf8a06c-1e18-40f8-bcde-5996d4f80767-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1bf8a06c-1e18-40f8-bcde-5996d4f80767\") " pod="openstack/glance-default-internal-api-0" Jan 27 09:12:24 crc kubenswrapper[4985]: I0127 09:12:24.222079 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bf8a06c-1e18-40f8-bcde-5996d4f80767-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1bf8a06c-1e18-40f8-bcde-5996d4f80767\") " pod="openstack/glance-default-internal-api-0" Jan 27 09:12:24 crc kubenswrapper[4985]: I0127 09:12:24.223997 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1bf8a06c-1e18-40f8-bcde-5996d4f80767-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"1bf8a06c-1e18-40f8-bcde-5996d4f80767\") " pod="openstack/glance-default-internal-api-0" Jan 27 09:12:24 crc kubenswrapper[4985]: I0127 09:12:24.224307 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8131140c-f2fe-4495-8db7-d4ca6c2712a5-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"8131140c-f2fe-4495-8db7-d4ca6c2712a5\") " pod="openstack/glance-default-external-api-0" Jan 27 09:12:24 crc kubenswrapper[4985]: I0127 09:12:24.234389 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bf8a06c-1e18-40f8-bcde-5996d4f80767-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1bf8a06c-1e18-40f8-bcde-5996d4f80767\") " pod="openstack/glance-default-internal-api-0" Jan 27 09:12:24 crc kubenswrapper[4985]: I0127 
09:12:24.235065 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8131140c-f2fe-4495-8db7-d4ca6c2712a5-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8131140c-f2fe-4495-8db7-d4ca6c2712a5\") " pod="openstack/glance-default-external-api-0" Jan 27 09:12:24 crc kubenswrapper[4985]: I0127 09:12:24.237464 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8131140c-f2fe-4495-8db7-d4ca6c2712a5-config-data\") pod \"glance-default-external-api-0\" (UID: \"8131140c-f2fe-4495-8db7-d4ca6c2712a5\") " pod="openstack/glance-default-external-api-0" Jan 27 09:12:24 crc kubenswrapper[4985]: I0127 09:12:24.237914 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njg7f\" (UniqueName: \"kubernetes.io/projected/8131140c-f2fe-4495-8db7-d4ca6c2712a5-kube-api-access-njg7f\") pod \"glance-default-external-api-0\" (UID: \"8131140c-f2fe-4495-8db7-d4ca6c2712a5\") " pod="openstack/glance-default-external-api-0" Jan 27 09:12:24 crc kubenswrapper[4985]: I0127 09:12:24.238340 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rmw8\" (UniqueName: \"kubernetes.io/projected/1bf8a06c-1e18-40f8-bcde-5996d4f80767-kube-api-access-4rmw8\") pod \"glance-default-internal-api-0\" (UID: \"1bf8a06c-1e18-40f8-bcde-5996d4f80767\") " pod="openstack/glance-default-internal-api-0" Jan 27 09:12:24 crc kubenswrapper[4985]: I0127 09:12:24.258881 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"8131140c-f2fe-4495-8db7-d4ca6c2712a5\") " pod="openstack/glance-default-external-api-0" Jan 27 09:12:24 crc kubenswrapper[4985]: I0127 09:12:24.269890 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"1bf8a06c-1e18-40f8-bcde-5996d4f80767\") " pod="openstack/glance-default-internal-api-0" Jan 27 09:12:24 crc kubenswrapper[4985]: I0127 09:12:24.292854 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 09:12:24 crc kubenswrapper[4985]: I0127 09:12:24.309137 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 09:12:24 crc kubenswrapper[4985]: E0127 09:12:24.461850 4985 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:fe32d3ea620f0c7ecfdde9bbf28417fde03bc18c6f60b1408fa8da24d8188f16" Jan 27 09:12:24 crc kubenswrapper[4985]: E0127 09:12:24.462019 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:fe32d3ea620f0c7ecfdde9bbf28417fde03bc18c6f60b1408fa8da24d8188f16,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z46sr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-95tb6_openstack(31214ba8-5f89-4b54-9293-b6cd43c8cbe5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 09:12:24 crc kubenswrapper[4985]: E0127 09:12:24.464158 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-95tb6" 
podUID="31214ba8-5f89-4b54-9293-b6cd43c8cbe5" Jan 27 09:12:24 crc kubenswrapper[4985]: I0127 09:12:24.466503 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75b134b4-9d95-4ef4-91c2-e8d9cf5357ee" path="/var/lib/kubelet/pods/75b134b4-9d95-4ef4-91c2-e8d9cf5357ee/volumes" Jan 27 09:12:24 crc kubenswrapper[4985]: I0127 09:12:24.469244 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d624139-0daf-4992-9aa9-82305991e2b0" path="/var/lib/kubelet/pods/9d624139-0daf-4992-9aa9-82305991e2b0/volumes" Jan 27 09:12:24 crc kubenswrapper[4985]: I0127 09:12:24.540148 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67fdf7998c-l6c45" Jan 27 09:12:24 crc kubenswrapper[4985]: I0127 09:12:24.545860 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-ftb9l" Jan 27 09:12:24 crc kubenswrapper[4985]: I0127 09:12:24.622366 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9tzx7\" (UniqueName: \"kubernetes.io/projected/82de7da8-bbcb-4fbd-b942-23cbe2e1bd8f-kube-api-access-9tzx7\") pod \"82de7da8-bbcb-4fbd-b942-23cbe2e1bd8f\" (UID: \"82de7da8-bbcb-4fbd-b942-23cbe2e1bd8f\") " Jan 27 09:12:24 crc kubenswrapper[4985]: I0127 09:12:24.622458 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/82de7da8-bbcb-4fbd-b942-23cbe2e1bd8f-config\") pod \"82de7da8-bbcb-4fbd-b942-23cbe2e1bd8f\" (UID: \"82de7da8-bbcb-4fbd-b942-23cbe2e1bd8f\") " Jan 27 09:12:24 crc kubenswrapper[4985]: I0127 09:12:24.622529 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dd86fe7b-977d-481c-bf72-c651287d4ca9-ovsdbserver-nb\") pod \"dd86fe7b-977d-481c-bf72-c651287d4ca9\" (UID: \"dd86fe7b-977d-481c-bf72-c651287d4ca9\") " Jan 27 09:12:24 crc 
kubenswrapper[4985]: I0127 09:12:24.622593 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd86fe7b-977d-481c-bf72-c651287d4ca9-config\") pod \"dd86fe7b-977d-481c-bf72-c651287d4ca9\" (UID: \"dd86fe7b-977d-481c-bf72-c651287d4ca9\") " Jan 27 09:12:24 crc kubenswrapper[4985]: I0127 09:12:24.622629 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fp9hw\" (UniqueName: \"kubernetes.io/projected/dd86fe7b-977d-481c-bf72-c651287d4ca9-kube-api-access-fp9hw\") pod \"dd86fe7b-977d-481c-bf72-c651287d4ca9\" (UID: \"dd86fe7b-977d-481c-bf72-c651287d4ca9\") " Jan 27 09:12:24 crc kubenswrapper[4985]: I0127 09:12:24.622658 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dd86fe7b-977d-481c-bf72-c651287d4ca9-ovsdbserver-sb\") pod \"dd86fe7b-977d-481c-bf72-c651287d4ca9\" (UID: \"dd86fe7b-977d-481c-bf72-c651287d4ca9\") " Jan 27 09:12:24 crc kubenswrapper[4985]: I0127 09:12:24.622685 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dd86fe7b-977d-481c-bf72-c651287d4ca9-dns-svc\") pod \"dd86fe7b-977d-481c-bf72-c651287d4ca9\" (UID: \"dd86fe7b-977d-481c-bf72-c651287d4ca9\") " Jan 27 09:12:24 crc kubenswrapper[4985]: I0127 09:12:24.622716 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82de7da8-bbcb-4fbd-b942-23cbe2e1bd8f-combined-ca-bundle\") pod \"82de7da8-bbcb-4fbd-b942-23cbe2e1bd8f\" (UID: \"82de7da8-bbcb-4fbd-b942-23cbe2e1bd8f\") " Jan 27 09:12:24 crc kubenswrapper[4985]: I0127 09:12:24.629030 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82de7da8-bbcb-4fbd-b942-23cbe2e1bd8f-kube-api-access-9tzx7" (OuterVolumeSpecName: 
"kube-api-access-9tzx7") pod "82de7da8-bbcb-4fbd-b942-23cbe2e1bd8f" (UID: "82de7da8-bbcb-4fbd-b942-23cbe2e1bd8f"). InnerVolumeSpecName "kube-api-access-9tzx7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:12:24 crc kubenswrapper[4985]: I0127 09:12:24.629796 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd86fe7b-977d-481c-bf72-c651287d4ca9-kube-api-access-fp9hw" (OuterVolumeSpecName: "kube-api-access-fp9hw") pod "dd86fe7b-977d-481c-bf72-c651287d4ca9" (UID: "dd86fe7b-977d-481c-bf72-c651287d4ca9"). InnerVolumeSpecName "kube-api-access-fp9hw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:12:24 crc kubenswrapper[4985]: I0127 09:12:24.660748 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82de7da8-bbcb-4fbd-b942-23cbe2e1bd8f-config" (OuterVolumeSpecName: "config") pod "82de7da8-bbcb-4fbd-b942-23cbe2e1bd8f" (UID: "82de7da8-bbcb-4fbd-b942-23cbe2e1bd8f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:12:24 crc kubenswrapper[4985]: I0127 09:12:24.669384 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82de7da8-bbcb-4fbd-b942-23cbe2e1bd8f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "82de7da8-bbcb-4fbd-b942-23cbe2e1bd8f" (UID: "82de7da8-bbcb-4fbd-b942-23cbe2e1bd8f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:12:24 crc kubenswrapper[4985]: I0127 09:12:24.678318 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd86fe7b-977d-481c-bf72-c651287d4ca9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "dd86fe7b-977d-481c-bf72-c651287d4ca9" (UID: "dd86fe7b-977d-481c-bf72-c651287d4ca9"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:12:24 crc kubenswrapper[4985]: I0127 09:12:24.687818 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd86fe7b-977d-481c-bf72-c651287d4ca9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "dd86fe7b-977d-481c-bf72-c651287d4ca9" (UID: "dd86fe7b-977d-481c-bf72-c651287d4ca9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:12:24 crc kubenswrapper[4985]: I0127 09:12:24.699026 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd86fe7b-977d-481c-bf72-c651287d4ca9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "dd86fe7b-977d-481c-bf72-c651287d4ca9" (UID: "dd86fe7b-977d-481c-bf72-c651287d4ca9"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:12:24 crc kubenswrapper[4985]: I0127 09:12:24.704862 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd86fe7b-977d-481c-bf72-c651287d4ca9-config" (OuterVolumeSpecName: "config") pod "dd86fe7b-977d-481c-bf72-c651287d4ca9" (UID: "dd86fe7b-977d-481c-bf72-c651287d4ca9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:12:24 crc kubenswrapper[4985]: I0127 09:12:24.724977 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9tzx7\" (UniqueName: \"kubernetes.io/projected/82de7da8-bbcb-4fbd-b942-23cbe2e1bd8f-kube-api-access-9tzx7\") on node \"crc\" DevicePath \"\"" Jan 27 09:12:24 crc kubenswrapper[4985]: I0127 09:12:24.725014 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/82de7da8-bbcb-4fbd-b942-23cbe2e1bd8f-config\") on node \"crc\" DevicePath \"\"" Jan 27 09:12:24 crc kubenswrapper[4985]: I0127 09:12:24.725024 4985 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dd86fe7b-977d-481c-bf72-c651287d4ca9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 09:12:24 crc kubenswrapper[4985]: I0127 09:12:24.725033 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd86fe7b-977d-481c-bf72-c651287d4ca9-config\") on node \"crc\" DevicePath \"\"" Jan 27 09:12:24 crc kubenswrapper[4985]: I0127 09:12:24.725044 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fp9hw\" (UniqueName: \"kubernetes.io/projected/dd86fe7b-977d-481c-bf72-c651287d4ca9-kube-api-access-fp9hw\") on node \"crc\" DevicePath \"\"" Jan 27 09:12:24 crc kubenswrapper[4985]: I0127 09:12:24.725052 4985 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dd86fe7b-977d-481c-bf72-c651287d4ca9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 09:12:24 crc kubenswrapper[4985]: I0127 09:12:24.725062 4985 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dd86fe7b-977d-481c-bf72-c651287d4ca9-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 09:12:24 crc kubenswrapper[4985]: I0127 09:12:24.725071 4985 
reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82de7da8-bbcb-4fbd-b942-23cbe2e1bd8f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:12:24 crc kubenswrapper[4985]: I0127 09:12:24.863856 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67fdf7998c-l6c45" event={"ID":"dd86fe7b-977d-481c-bf72-c651287d4ca9","Type":"ContainerDied","Data":"899651292b256eb0062ee10727f28805fc7ef2aee8915c6906d4080aa03c5e1b"} Jan 27 09:12:24 crc kubenswrapper[4985]: I0127 09:12:24.863929 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67fdf7998c-l6c45" Jan 27 09:12:24 crc kubenswrapper[4985]: I0127 09:12:24.867795 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-ftb9l" Jan 27 09:12:24 crc kubenswrapper[4985]: I0127 09:12:24.868560 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-ftb9l" event={"ID":"82de7da8-bbcb-4fbd-b942-23cbe2e1bd8f","Type":"ContainerDied","Data":"0df5e3ced9c6fd951b7af8631fde3b1451b716e2d987e626b8d30c95e69cfc1f"} Jan 27 09:12:24 crc kubenswrapper[4985]: I0127 09:12:24.868591 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0df5e3ced9c6fd951b7af8631fde3b1451b716e2d987e626b8d30c95e69cfc1f" Jan 27 09:12:24 crc kubenswrapper[4985]: E0127 09:12:24.870176 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:fe32d3ea620f0c7ecfdde9bbf28417fde03bc18c6f60b1408fa8da24d8188f16\\\"\"" pod="openstack/barbican-db-sync-95tb6" podUID="31214ba8-5f89-4b54-9293-b6cd43c8cbe5" Jan 27 09:12:24 crc kubenswrapper[4985]: I0127 09:12:24.936121 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-67fdf7998c-l6c45"] Jan 27 09:12:24 crc kubenswrapper[4985]: I0127 09:12:24.947949 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-67fdf7998c-l6c45"] Jan 27 09:12:25 crc kubenswrapper[4985]: E0127 09:12:25.748619 4985 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b59b7445e581cc720038107e421371c86c5765b2967e77d884ef29b1d9fd0f49" Jan 27 09:12:25 crc kubenswrapper[4985]: E0127 09:12:25.749205 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b59b7445e581cc720038107e421371c86c5765b2967e77d884ef29b1d9fd0f49,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,
MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m2t2l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-pj5kt_openstack(a9c4f8a3-0f30-4724-84bd-952a5d5170cb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 09:12:25 crc kubenswrapper[4985]: E0127 09:12:25.750684 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-pj5kt" podUID="a9c4f8a3-0f30-4724-84bd-952a5d5170cb" Jan 27 09:12:25 crc kubenswrapper[4985]: I0127 09:12:25.793119 4985 scope.go:117] "RemoveContainer" containerID="ce55e76e2ed669c1af4dfbbb587a4d77916795dafcb28c1d9099be441a48ae59" Jan 27 09:12:25 crc kubenswrapper[4985]: I0127 09:12:25.844796 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-685444497c-q8nbf"] Jan 27 09:12:25 crc 
kubenswrapper[4985]: E0127 09:12:25.845181 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82de7da8-bbcb-4fbd-b942-23cbe2e1bd8f" containerName="neutron-db-sync" Jan 27 09:12:25 crc kubenswrapper[4985]: I0127 09:12:25.845199 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="82de7da8-bbcb-4fbd-b942-23cbe2e1bd8f" containerName="neutron-db-sync" Jan 27 09:12:25 crc kubenswrapper[4985]: E0127 09:12:25.845221 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd86fe7b-977d-481c-bf72-c651287d4ca9" containerName="dnsmasq-dns" Jan 27 09:12:25 crc kubenswrapper[4985]: I0127 09:12:25.845227 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd86fe7b-977d-481c-bf72-c651287d4ca9" containerName="dnsmasq-dns" Jan 27 09:12:25 crc kubenswrapper[4985]: E0127 09:12:25.845238 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd86fe7b-977d-481c-bf72-c651287d4ca9" containerName="init" Jan 27 09:12:25 crc kubenswrapper[4985]: I0127 09:12:25.845244 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd86fe7b-977d-481c-bf72-c651287d4ca9" containerName="init" Jan 27 09:12:25 crc kubenswrapper[4985]: I0127 09:12:25.845393 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="82de7da8-bbcb-4fbd-b942-23cbe2e1bd8f" containerName="neutron-db-sync" Jan 27 09:12:25 crc kubenswrapper[4985]: I0127 09:12:25.845409 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd86fe7b-977d-481c-bf72-c651287d4ca9" containerName="dnsmasq-dns" Jan 27 09:12:25 crc kubenswrapper[4985]: I0127 09:12:25.846313 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-685444497c-q8nbf" Jan 27 09:12:25 crc kubenswrapper[4985]: I0127 09:12:25.906633 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-685444497c-q8nbf"] Jan 27 09:12:25 crc kubenswrapper[4985]: E0127 09:12:25.930795 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b59b7445e581cc720038107e421371c86c5765b2967e77d884ef29b1d9fd0f49\\\"\"" pod="openstack/cinder-db-sync-pj5kt" podUID="a9c4f8a3-0f30-4724-84bd-952a5d5170cb" Jan 27 09:12:25 crc kubenswrapper[4985]: I0127 09:12:25.953476 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/635ffd32-9e1e-48a9-8560-36e92db872ee-config\") pod \"dnsmasq-dns-685444497c-q8nbf\" (UID: \"635ffd32-9e1e-48a9-8560-36e92db872ee\") " pod="openstack/dnsmasq-dns-685444497c-q8nbf" Jan 27 09:12:25 crc kubenswrapper[4985]: I0127 09:12:25.953541 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/635ffd32-9e1e-48a9-8560-36e92db872ee-ovsdbserver-sb\") pod \"dnsmasq-dns-685444497c-q8nbf\" (UID: \"635ffd32-9e1e-48a9-8560-36e92db872ee\") " pod="openstack/dnsmasq-dns-685444497c-q8nbf" Jan 27 09:12:25 crc kubenswrapper[4985]: I0127 09:12:25.953560 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/635ffd32-9e1e-48a9-8560-36e92db872ee-ovsdbserver-nb\") pod \"dnsmasq-dns-685444497c-q8nbf\" (UID: \"635ffd32-9e1e-48a9-8560-36e92db872ee\") " pod="openstack/dnsmasq-dns-685444497c-q8nbf" Jan 27 09:12:25 crc kubenswrapper[4985]: I0127 09:12:25.953593 4985 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/635ffd32-9e1e-48a9-8560-36e92db872ee-dns-swift-storage-0\") pod \"dnsmasq-dns-685444497c-q8nbf\" (UID: \"635ffd32-9e1e-48a9-8560-36e92db872ee\") " pod="openstack/dnsmasq-dns-685444497c-q8nbf" Jan 27 09:12:25 crc kubenswrapper[4985]: I0127 09:12:25.953780 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fp6t6\" (UniqueName: \"kubernetes.io/projected/635ffd32-9e1e-48a9-8560-36e92db872ee-kube-api-access-fp6t6\") pod \"dnsmasq-dns-685444497c-q8nbf\" (UID: \"635ffd32-9e1e-48a9-8560-36e92db872ee\") " pod="openstack/dnsmasq-dns-685444497c-q8nbf" Jan 27 09:12:25 crc kubenswrapper[4985]: I0127 09:12:25.953853 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/635ffd32-9e1e-48a9-8560-36e92db872ee-dns-svc\") pod \"dnsmasq-dns-685444497c-q8nbf\" (UID: \"635ffd32-9e1e-48a9-8560-36e92db872ee\") " pod="openstack/dnsmasq-dns-685444497c-q8nbf" Jan 27 09:12:26 crc kubenswrapper[4985]: I0127 09:12:26.003138 4985 scope.go:117] "RemoveContainer" containerID="2a70e9d24c2800aabf93213bd1f8b0ce245aba8ca1a3b1ece27b43a62dffa8d1" Jan 27 09:12:26 crc kubenswrapper[4985]: I0127 09:12:26.012115 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-85bdc684db-7q85p"] Jan 27 09:12:26 crc kubenswrapper[4985]: I0127 09:12:26.014287 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-85bdc684db-7q85p" Jan 27 09:12:26 crc kubenswrapper[4985]: I0127 09:12:26.019527 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 27 09:12:26 crc kubenswrapper[4985]: I0127 09:12:26.022219 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 27 09:12:26 crc kubenswrapper[4985]: I0127 09:12:26.022889 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-w2rf5" Jan 27 09:12:26 crc kubenswrapper[4985]: I0127 09:12:26.024009 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 27 09:12:26 crc kubenswrapper[4985]: I0127 09:12:26.046593 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-85bdc684db-7q85p"] Jan 27 09:12:26 crc kubenswrapper[4985]: I0127 09:12:26.055632 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e5ea4de-6280-4b44-9dfc-e27da3483c4f-combined-ca-bundle\") pod \"neutron-85bdc684db-7q85p\" (UID: \"6e5ea4de-6280-4b44-9dfc-e27da3483c4f\") " pod="openstack/neutron-85bdc684db-7q85p" Jan 27 09:12:26 crc kubenswrapper[4985]: I0127 09:12:26.055764 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rxlb\" (UniqueName: \"kubernetes.io/projected/6e5ea4de-6280-4b44-9dfc-e27da3483c4f-kube-api-access-4rxlb\") pod \"neutron-85bdc684db-7q85p\" (UID: \"6e5ea4de-6280-4b44-9dfc-e27da3483c4f\") " pod="openstack/neutron-85bdc684db-7q85p" Jan 27 09:12:26 crc kubenswrapper[4985]: I0127 09:12:26.055845 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e5ea4de-6280-4b44-9dfc-e27da3483c4f-ovndb-tls-certs\") pod 
\"neutron-85bdc684db-7q85p\" (UID: \"6e5ea4de-6280-4b44-9dfc-e27da3483c4f\") " pod="openstack/neutron-85bdc684db-7q85p" Jan 27 09:12:26 crc kubenswrapper[4985]: I0127 09:12:26.055874 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6e5ea4de-6280-4b44-9dfc-e27da3483c4f-config\") pod \"neutron-85bdc684db-7q85p\" (UID: \"6e5ea4de-6280-4b44-9dfc-e27da3483c4f\") " pod="openstack/neutron-85bdc684db-7q85p" Jan 27 09:12:26 crc kubenswrapper[4985]: I0127 09:12:26.055903 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/635ffd32-9e1e-48a9-8560-36e92db872ee-config\") pod \"dnsmasq-dns-685444497c-q8nbf\" (UID: \"635ffd32-9e1e-48a9-8560-36e92db872ee\") " pod="openstack/dnsmasq-dns-685444497c-q8nbf" Jan 27 09:12:26 crc kubenswrapper[4985]: I0127 09:12:26.055933 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/635ffd32-9e1e-48a9-8560-36e92db872ee-ovsdbserver-sb\") pod \"dnsmasq-dns-685444497c-q8nbf\" (UID: \"635ffd32-9e1e-48a9-8560-36e92db872ee\") " pod="openstack/dnsmasq-dns-685444497c-q8nbf" Jan 27 09:12:26 crc kubenswrapper[4985]: I0127 09:12:26.055956 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/635ffd32-9e1e-48a9-8560-36e92db872ee-ovsdbserver-nb\") pod \"dnsmasq-dns-685444497c-q8nbf\" (UID: \"635ffd32-9e1e-48a9-8560-36e92db872ee\") " pod="openstack/dnsmasq-dns-685444497c-q8nbf" Jan 27 09:12:26 crc kubenswrapper[4985]: I0127 09:12:26.056009 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/635ffd32-9e1e-48a9-8560-36e92db872ee-dns-swift-storage-0\") pod \"dnsmasq-dns-685444497c-q8nbf\" (UID: 
\"635ffd32-9e1e-48a9-8560-36e92db872ee\") " pod="openstack/dnsmasq-dns-685444497c-q8nbf" Jan 27 09:12:26 crc kubenswrapper[4985]: I0127 09:12:26.056042 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6e5ea4de-6280-4b44-9dfc-e27da3483c4f-httpd-config\") pod \"neutron-85bdc684db-7q85p\" (UID: \"6e5ea4de-6280-4b44-9dfc-e27da3483c4f\") " pod="openstack/neutron-85bdc684db-7q85p" Jan 27 09:12:26 crc kubenswrapper[4985]: I0127 09:12:26.056117 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fp6t6\" (UniqueName: \"kubernetes.io/projected/635ffd32-9e1e-48a9-8560-36e92db872ee-kube-api-access-fp6t6\") pod \"dnsmasq-dns-685444497c-q8nbf\" (UID: \"635ffd32-9e1e-48a9-8560-36e92db872ee\") " pod="openstack/dnsmasq-dns-685444497c-q8nbf" Jan 27 09:12:26 crc kubenswrapper[4985]: I0127 09:12:26.056153 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/635ffd32-9e1e-48a9-8560-36e92db872ee-dns-svc\") pod \"dnsmasq-dns-685444497c-q8nbf\" (UID: \"635ffd32-9e1e-48a9-8560-36e92db872ee\") " pod="openstack/dnsmasq-dns-685444497c-q8nbf" Jan 27 09:12:26 crc kubenswrapper[4985]: I0127 09:12:26.057500 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/635ffd32-9e1e-48a9-8560-36e92db872ee-dns-svc\") pod \"dnsmasq-dns-685444497c-q8nbf\" (UID: \"635ffd32-9e1e-48a9-8560-36e92db872ee\") " pod="openstack/dnsmasq-dns-685444497c-q8nbf" Jan 27 09:12:26 crc kubenswrapper[4985]: I0127 09:12:26.058780 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/635ffd32-9e1e-48a9-8560-36e92db872ee-dns-swift-storage-0\") pod \"dnsmasq-dns-685444497c-q8nbf\" (UID: \"635ffd32-9e1e-48a9-8560-36e92db872ee\") " 
pod="openstack/dnsmasq-dns-685444497c-q8nbf" Jan 27 09:12:26 crc kubenswrapper[4985]: I0127 09:12:26.059300 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/635ffd32-9e1e-48a9-8560-36e92db872ee-config\") pod \"dnsmasq-dns-685444497c-q8nbf\" (UID: \"635ffd32-9e1e-48a9-8560-36e92db872ee\") " pod="openstack/dnsmasq-dns-685444497c-q8nbf" Jan 27 09:12:26 crc kubenswrapper[4985]: I0127 09:12:26.059377 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/635ffd32-9e1e-48a9-8560-36e92db872ee-ovsdbserver-nb\") pod \"dnsmasq-dns-685444497c-q8nbf\" (UID: \"635ffd32-9e1e-48a9-8560-36e92db872ee\") " pod="openstack/dnsmasq-dns-685444497c-q8nbf" Jan 27 09:12:26 crc kubenswrapper[4985]: I0127 09:12:26.059534 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/635ffd32-9e1e-48a9-8560-36e92db872ee-ovsdbserver-sb\") pod \"dnsmasq-dns-685444497c-q8nbf\" (UID: \"635ffd32-9e1e-48a9-8560-36e92db872ee\") " pod="openstack/dnsmasq-dns-685444497c-q8nbf" Jan 27 09:12:26 crc kubenswrapper[4985]: I0127 09:12:26.109050 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fp6t6\" (UniqueName: \"kubernetes.io/projected/635ffd32-9e1e-48a9-8560-36e92db872ee-kube-api-access-fp6t6\") pod \"dnsmasq-dns-685444497c-q8nbf\" (UID: \"635ffd32-9e1e-48a9-8560-36e92db872ee\") " pod="openstack/dnsmasq-dns-685444497c-q8nbf" Jan 27 09:12:26 crc kubenswrapper[4985]: I0127 09:12:26.163609 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e5ea4de-6280-4b44-9dfc-e27da3483c4f-combined-ca-bundle\") pod \"neutron-85bdc684db-7q85p\" (UID: \"6e5ea4de-6280-4b44-9dfc-e27da3483c4f\") " pod="openstack/neutron-85bdc684db-7q85p" Jan 27 09:12:26 crc kubenswrapper[4985]: I0127 
09:12:26.163708 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rxlb\" (UniqueName: \"kubernetes.io/projected/6e5ea4de-6280-4b44-9dfc-e27da3483c4f-kube-api-access-4rxlb\") pod \"neutron-85bdc684db-7q85p\" (UID: \"6e5ea4de-6280-4b44-9dfc-e27da3483c4f\") " pod="openstack/neutron-85bdc684db-7q85p" Jan 27 09:12:26 crc kubenswrapper[4985]: I0127 09:12:26.163743 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e5ea4de-6280-4b44-9dfc-e27da3483c4f-ovndb-tls-certs\") pod \"neutron-85bdc684db-7q85p\" (UID: \"6e5ea4de-6280-4b44-9dfc-e27da3483c4f\") " pod="openstack/neutron-85bdc684db-7q85p" Jan 27 09:12:26 crc kubenswrapper[4985]: I0127 09:12:26.163760 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6e5ea4de-6280-4b44-9dfc-e27da3483c4f-config\") pod \"neutron-85bdc684db-7q85p\" (UID: \"6e5ea4de-6280-4b44-9dfc-e27da3483c4f\") " pod="openstack/neutron-85bdc684db-7q85p" Jan 27 09:12:26 crc kubenswrapper[4985]: I0127 09:12:26.169223 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6e5ea4de-6280-4b44-9dfc-e27da3483c4f-httpd-config\") pod \"neutron-85bdc684db-7q85p\" (UID: \"6e5ea4de-6280-4b44-9dfc-e27da3483c4f\") " pod="openstack/neutron-85bdc684db-7q85p" Jan 27 09:12:26 crc kubenswrapper[4985]: I0127 09:12:26.172743 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e5ea4de-6280-4b44-9dfc-e27da3483c4f-ovndb-tls-certs\") pod \"neutron-85bdc684db-7q85p\" (UID: \"6e5ea4de-6280-4b44-9dfc-e27da3483c4f\") " pod="openstack/neutron-85bdc684db-7q85p" Jan 27 09:12:26 crc kubenswrapper[4985]: I0127 09:12:26.173064 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/6e5ea4de-6280-4b44-9dfc-e27da3483c4f-config\") pod \"neutron-85bdc684db-7q85p\" (UID: \"6e5ea4de-6280-4b44-9dfc-e27da3483c4f\") " pod="openstack/neutron-85bdc684db-7q85p" Jan 27 09:12:26 crc kubenswrapper[4985]: I0127 09:12:26.179445 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e5ea4de-6280-4b44-9dfc-e27da3483c4f-combined-ca-bundle\") pod \"neutron-85bdc684db-7q85p\" (UID: \"6e5ea4de-6280-4b44-9dfc-e27da3483c4f\") " pod="openstack/neutron-85bdc684db-7q85p" Jan 27 09:12:26 crc kubenswrapper[4985]: I0127 09:12:26.182626 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6e5ea4de-6280-4b44-9dfc-e27da3483c4f-httpd-config\") pod \"neutron-85bdc684db-7q85p\" (UID: \"6e5ea4de-6280-4b44-9dfc-e27da3483c4f\") " pod="openstack/neutron-85bdc684db-7q85p" Jan 27 09:12:26 crc kubenswrapper[4985]: I0127 09:12:26.193924 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rxlb\" (UniqueName: \"kubernetes.io/projected/6e5ea4de-6280-4b44-9dfc-e27da3483c4f-kube-api-access-4rxlb\") pod \"neutron-85bdc684db-7q85p\" (UID: \"6e5ea4de-6280-4b44-9dfc-e27da3483c4f\") " pod="openstack/neutron-85bdc684db-7q85p" Jan 27 09:12:26 crc kubenswrapper[4985]: I0127 09:12:26.198882 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-685444497c-q8nbf" Jan 27 09:12:26 crc kubenswrapper[4985]: I0127 09:12:26.208217 4985 scope.go:117] "RemoveContainer" containerID="941e76e91cf3eb11b26ddd581c58a8373189d923dd5f0b5691ad29da36e4600b" Jan 27 09:12:26 crc kubenswrapper[4985]: I0127 09:12:26.400790 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-85bdc684db-7q85p" Jan 27 09:12:26 crc kubenswrapper[4985]: I0127 09:12:26.436671 4985 scope.go:117] "RemoveContainer" containerID="4aa54a66c75ee82da6dcdbdbf76f293799fd6714929d63fbd73fc6d57a111a10" Jan 27 09:12:26 crc kubenswrapper[4985]: I0127 09:12:26.496851 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd86fe7b-977d-481c-bf72-c651287d4ca9" path="/var/lib/kubelet/pods/dd86fe7b-977d-481c-bf72-c651287d4ca9/volumes" Jan 27 09:12:26 crc kubenswrapper[4985]: I0127 09:12:26.590843 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-69b99cb974-fzls4"] Jan 27 09:12:26 crc kubenswrapper[4985]: I0127 09:12:26.643939 4985 scope.go:117] "RemoveContainer" containerID="7b484b12b47ba2a609743adbb990faeb6d438b9b3504da74864105562d6d2e13" Jan 27 09:12:26 crc kubenswrapper[4985]: I0127 09:12:26.877796 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5c57bbbf74-nrsd9"] Jan 27 09:12:26 crc kubenswrapper[4985]: I0127 09:12:26.915768 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-pj7x6"] Jan 27 09:12:26 crc kubenswrapper[4985]: I0127 09:12:26.930239 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-77f8b4b57c-5gfx6" event={"ID":"78dc6815-3202-4aea-99b0-905363e0ef1e","Type":"ContainerStarted","Data":"a4b07c371f68b029087b28689de1f3bbde3f3ec765bf26e132cbf5e38e140b3e"} Jan 27 09:12:26 crc kubenswrapper[4985]: I0127 09:12:26.936383 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-779ccf4965-4dzg4" event={"ID":"14f1ccc0-f3da-4c1b-b0cd-b0c3f1b4f363","Type":"ContainerStarted","Data":"910b9810ea9d46335a74b8ae95a287a497a675c8dfa7c31dbc56cd0fe6a8cca9"} Jan 27 09:12:26 crc kubenswrapper[4985]: I0127 09:12:26.984086 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 09:12:26 crc kubenswrapper[4985]: I0127 
09:12:26.986170 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69b99cb974-fzls4" event={"ID":"24f5c0ab-206b-4a03-9e4b-c94feff53f9e","Type":"ContainerStarted","Data":"1bee48ba31a24c9edddea184c58aa8312174a9392a7212323287f9f027564ce4"} Jan 27 09:12:26 crc kubenswrapper[4985]: I0127 09:12:26.989876 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8bc58698f-rrrdv" event={"ID":"c1a55e00-a92c-468e-b440-72254c05314e","Type":"ContainerStarted","Data":"d2f74e4c94aa628260eb17e7557a103b096c26699f38d1831b18054c547023c9"} Jan 27 09:12:27 crc kubenswrapper[4985]: I0127 09:12:27.109360 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 09:12:27 crc kubenswrapper[4985]: I0127 09:12:27.148941 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-685444497c-q8nbf"] Jan 27 09:12:27 crc kubenswrapper[4985]: W0127 09:12:27.170837 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod635ffd32_9e1e_48a9_8560_36e92db872ee.slice/crio-114492e3d73eb17545a9aed34cb3ad394dcee17203618bb4bb1dcf2bf39f57d9 WatchSource:0}: Error finding container 114492e3d73eb17545a9aed34cb3ad394dcee17203618bb4bb1dcf2bf39f57d9: Status 404 returned error can't find the container with id 114492e3d73eb17545a9aed34cb3ad394dcee17203618bb4bb1dcf2bf39f57d9 Jan 27 09:12:27 crc kubenswrapper[4985]: I0127 09:12:27.419372 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-85bdc684db-7q85p"] Jan 27 09:12:28 crc kubenswrapper[4985]: I0127 09:12:28.024194 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-779ccf4965-4dzg4" event={"ID":"14f1ccc0-f3da-4c1b-b0cd-b0c3f1b4f363","Type":"ContainerStarted","Data":"49643251f88ce768aa087d6019abea8b00565a7cd22db67f9b8bcaae97610be0"} Jan 27 09:12:28 crc kubenswrapper[4985]: I0127 09:12:28.024426 4985 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-779ccf4965-4dzg4" podUID="14f1ccc0-f3da-4c1b-b0cd-b0c3f1b4f363" containerName="horizon-log" containerID="cri-o://910b9810ea9d46335a74b8ae95a287a497a675c8dfa7c31dbc56cd0fe6a8cca9" gracePeriod=30 Jan 27 09:12:28 crc kubenswrapper[4985]: I0127 09:12:28.024688 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-779ccf4965-4dzg4" podUID="14f1ccc0-f3da-4c1b-b0cd-b0c3f1b4f363" containerName="horizon" containerID="cri-o://49643251f88ce768aa087d6019abea8b00565a7cd22db67f9b8bcaae97610be0" gracePeriod=30 Jan 27 09:12:28 crc kubenswrapper[4985]: I0127 09:12:28.028010 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c57bbbf74-nrsd9" event={"ID":"5fbbc8b9-e978-4565-9d19-bd139f2c4df7","Type":"ContainerStarted","Data":"e16ecc5391723ec866b22379c3eff871778d1029f7535362b6bf0ab919a57d0c"} Jan 27 09:12:28 crc kubenswrapper[4985]: I0127 09:12:28.028047 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c57bbbf74-nrsd9" event={"ID":"5fbbc8b9-e978-4565-9d19-bd139f2c4df7","Type":"ContainerStarted","Data":"bc4c0a1d1bc9d272d56b4ebccd3ddd9d2e7528621bea617664c16b676e385638"} Jan 27 09:12:28 crc kubenswrapper[4985]: I0127 09:12:28.034123 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8131140c-f2fe-4495-8db7-d4ca6c2712a5","Type":"ContainerStarted","Data":"f6084fc3fc50ec8a1b5fdfe46a41e503fd6fd270dcfd281700151e1020b92487"} Jan 27 09:12:28 crc kubenswrapper[4985]: I0127 09:12:28.059316 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-pj7x6" event={"ID":"42a714c1-196c-4f83-b457-83847e9e97a6","Type":"ContainerStarted","Data":"191a820ded102b1df0ce5041493282d9c44a28b8671275e904fb00a92138d524"} Jan 27 09:12:28 crc kubenswrapper[4985]: I0127 09:12:28.059371 4985 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/keystone-bootstrap-pj7x6" event={"ID":"42a714c1-196c-4f83-b457-83847e9e97a6","Type":"ContainerStarted","Data":"899bbb3ce6c8179b05c7d60a05ee4f18da3ec0589cbf4488bde72f13249e60bf"} Jan 27 09:12:28 crc kubenswrapper[4985]: I0127 09:12:28.070226 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-85bdc684db-7q85p" event={"ID":"6e5ea4de-6280-4b44-9dfc-e27da3483c4f","Type":"ContainerStarted","Data":"29bb41cd322172c3273a0c5bbcd8a796d8dab5adc02c2f450f03d9a94434a1e1"} Jan 27 09:12:28 crc kubenswrapper[4985]: I0127 09:12:28.077498 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-779ccf4965-4dzg4" podStartSLOduration=5.5178715480000005 podStartE2EDuration="36.077474648s" podCreationTimestamp="2026-01-27 09:11:52 +0000 UTC" firstStartedPulling="2026-01-27 09:11:53.875803252 +0000 UTC m=+1098.166898093" lastFinishedPulling="2026-01-27 09:12:24.435406352 +0000 UTC m=+1128.726501193" observedRunningTime="2026-01-27 09:12:28.05930442 +0000 UTC m=+1132.350399261" watchObservedRunningTime="2026-01-27 09:12:28.077474648 +0000 UTC m=+1132.368569489" Jan 27 09:12:28 crc kubenswrapper[4985]: I0127 09:12:28.088073 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-pj7x6" podStartSLOduration=16.088052698 podStartE2EDuration="16.088052698s" podCreationTimestamp="2026-01-27 09:12:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:12:28.078813985 +0000 UTC m=+1132.369908826" watchObservedRunningTime="2026-01-27 09:12:28.088052698 +0000 UTC m=+1132.379147539" Jan 27 09:12:28 crc kubenswrapper[4985]: I0127 09:12:28.095638 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-685444497c-q8nbf" 
event={"ID":"635ffd32-9e1e-48a9-8560-36e92db872ee","Type":"ContainerStarted","Data":"114492e3d73eb17545a9aed34cb3ad394dcee17203618bb4bb1dcf2bf39f57d9"} Jan 27 09:12:28 crc kubenswrapper[4985]: I0127 09:12:28.128197 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-5tmw8" event={"ID":"c2de0653-57ca-4d6b-a8a7-10b39b9c4678","Type":"ContainerStarted","Data":"f2497146faac80913a880fec95fff751d9cce625b3fa3b1128ca4cefe1e6d6a8"} Jan 27 09:12:28 crc kubenswrapper[4985]: I0127 09:12:28.130598 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69b99cb974-fzls4" event={"ID":"24f5c0ab-206b-4a03-9e4b-c94feff53f9e","Type":"ContainerStarted","Data":"614e466a2d7500dc95e4f27bb927d1d46d7f7be03662d546a1d7b38840193649"} Jan 27 09:12:28 crc kubenswrapper[4985]: I0127 09:12:28.142957 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-8bc58698f-rrrdv" podUID="c1a55e00-a92c-468e-b440-72254c05314e" containerName="horizon-log" containerID="cri-o://d2f74e4c94aa628260eb17e7557a103b096c26699f38d1831b18054c547023c9" gracePeriod=30 Jan 27 09:12:28 crc kubenswrapper[4985]: I0127 09:12:28.143044 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8bc58698f-rrrdv" event={"ID":"c1a55e00-a92c-468e-b440-72254c05314e","Type":"ContainerStarted","Data":"caee8ea061cd5befe6d69010922f3541488f1c53d09190a33bb801be6d813d5c"} Jan 27 09:12:28 crc kubenswrapper[4985]: I0127 09:12:28.143088 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-8bc58698f-rrrdv" podUID="c1a55e00-a92c-468e-b440-72254c05314e" containerName="horizon" containerID="cri-o://caee8ea061cd5befe6d69010922f3541488f1c53d09190a33bb801be6d813d5c" gracePeriod=30 Jan 27 09:12:28 crc kubenswrapper[4985]: I0127 09:12:28.159347 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-77f8b4b57c-5gfx6" 
event={"ID":"78dc6815-3202-4aea-99b0-905363e0ef1e","Type":"ContainerStarted","Data":"edbcf5937465700859dc816db60e1e0552e996110cb8e072fffb5f8e7c5f91fd"} Jan 27 09:12:28 crc kubenswrapper[4985]: I0127 09:12:28.159497 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-77f8b4b57c-5gfx6" podUID="78dc6815-3202-4aea-99b0-905363e0ef1e" containerName="horizon-log" containerID="cri-o://a4b07c371f68b029087b28689de1f3bbde3f3ec765bf26e132cbf5e38e140b3e" gracePeriod=30 Jan 27 09:12:28 crc kubenswrapper[4985]: I0127 09:12:28.159596 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-77f8b4b57c-5gfx6" podUID="78dc6815-3202-4aea-99b0-905363e0ef1e" containerName="horizon" containerID="cri-o://edbcf5937465700859dc816db60e1e0552e996110cb8e072fffb5f8e7c5f91fd" gracePeriod=30 Jan 27 09:12:28 crc kubenswrapper[4985]: I0127 09:12:28.166381 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-5tmw8" podStartSLOduration=4.017749832 podStartE2EDuration="36.166352846s" podCreationTimestamp="2026-01-27 09:11:52 +0000 UTC" firstStartedPulling="2026-01-27 09:11:54.11729707 +0000 UTC m=+1098.408391911" lastFinishedPulling="2026-01-27 09:12:26.265900084 +0000 UTC m=+1130.556994925" observedRunningTime="2026-01-27 09:12:28.151839468 +0000 UTC m=+1132.442934309" watchObservedRunningTime="2026-01-27 09:12:28.166352846 +0000 UTC m=+1132.457447687" Jan 27 09:12:28 crc kubenswrapper[4985]: I0127 09:12:28.169052 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1bf8a06c-1e18-40f8-bcde-5996d4f80767","Type":"ContainerStarted","Data":"37568ce2b910a694715cb83a4e8f2a5b3d64559714c87c8fe546368aad573b44"} Jan 27 09:12:28 crc kubenswrapper[4985]: I0127 09:12:28.216261 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-8bc58698f-rrrdv" 
podStartSLOduration=5.450333021 podStartE2EDuration="36.216245004s" podCreationTimestamp="2026-01-27 09:11:52 +0000 UTC" firstStartedPulling="2026-01-27 09:11:53.669493449 +0000 UTC m=+1097.960588290" lastFinishedPulling="2026-01-27 09:12:24.435405432 +0000 UTC m=+1128.726500273" observedRunningTime="2026-01-27 09:12:28.183266319 +0000 UTC m=+1132.474361160" watchObservedRunningTime="2026-01-27 09:12:28.216245004 +0000 UTC m=+1132.507339845" Jan 27 09:12:28 crc kubenswrapper[4985]: I0127 09:12:28.596651 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-67fdf7998c-l6c45" podUID="dd86fe7b-977d-481c-bf72-c651287d4ca9" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.112:5353: i/o timeout" Jan 27 09:12:28 crc kubenswrapper[4985]: I0127 09:12:28.599433 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-77f8b4b57c-5gfx6" podStartSLOduration=4.746336968 podStartE2EDuration="33.599397262s" podCreationTimestamp="2026-01-27 09:11:55 +0000 UTC" firstStartedPulling="2026-01-27 09:11:56.9964106 +0000 UTC m=+1101.287505441" lastFinishedPulling="2026-01-27 09:12:25.849470904 +0000 UTC m=+1130.140565735" observedRunningTime="2026-01-27 09:12:28.208667696 +0000 UTC m=+1132.499762537" watchObservedRunningTime="2026-01-27 09:12:28.599397262 +0000 UTC m=+1132.890492103" Jan 27 09:12:28 crc kubenswrapper[4985]: I0127 09:12:28.608786 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-594666745c-h8zcv"] Jan 27 09:12:28 crc kubenswrapper[4985]: I0127 09:12:28.610711 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-594666745c-h8zcv" Jan 27 09:12:28 crc kubenswrapper[4985]: I0127 09:12:28.613737 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 27 09:12:28 crc kubenswrapper[4985]: I0127 09:12:28.619507 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 27 09:12:28 crc kubenswrapper[4985]: I0127 09:12:28.652304 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-594666745c-h8zcv"] Jan 27 09:12:28 crc kubenswrapper[4985]: I0127 09:12:28.745219 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ds2h\" (UniqueName: \"kubernetes.io/projected/fbe6009e-a66b-4082-b535-ec263c9e3d1a-kube-api-access-4ds2h\") pod \"neutron-594666745c-h8zcv\" (UID: \"fbe6009e-a66b-4082-b535-ec263c9e3d1a\") " pod="openstack/neutron-594666745c-h8zcv" Jan 27 09:12:28 crc kubenswrapper[4985]: I0127 09:12:28.745313 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbe6009e-a66b-4082-b535-ec263c9e3d1a-combined-ca-bundle\") pod \"neutron-594666745c-h8zcv\" (UID: \"fbe6009e-a66b-4082-b535-ec263c9e3d1a\") " pod="openstack/neutron-594666745c-h8zcv" Jan 27 09:12:28 crc kubenswrapper[4985]: I0127 09:12:28.745470 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fbe6009e-a66b-4082-b535-ec263c9e3d1a-internal-tls-certs\") pod \"neutron-594666745c-h8zcv\" (UID: \"fbe6009e-a66b-4082-b535-ec263c9e3d1a\") " pod="openstack/neutron-594666745c-h8zcv" Jan 27 09:12:28 crc kubenswrapper[4985]: I0127 09:12:28.745546 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/fbe6009e-a66b-4082-b535-ec263c9e3d1a-httpd-config\") pod \"neutron-594666745c-h8zcv\" (UID: \"fbe6009e-a66b-4082-b535-ec263c9e3d1a\") " pod="openstack/neutron-594666745c-h8zcv" Jan 27 09:12:28 crc kubenswrapper[4985]: I0127 09:12:28.745582 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fbe6009e-a66b-4082-b535-ec263c9e3d1a-config\") pod \"neutron-594666745c-h8zcv\" (UID: \"fbe6009e-a66b-4082-b535-ec263c9e3d1a\") " pod="openstack/neutron-594666745c-h8zcv" Jan 27 09:12:28 crc kubenswrapper[4985]: I0127 09:12:28.745619 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fbe6009e-a66b-4082-b535-ec263c9e3d1a-ovndb-tls-certs\") pod \"neutron-594666745c-h8zcv\" (UID: \"fbe6009e-a66b-4082-b535-ec263c9e3d1a\") " pod="openstack/neutron-594666745c-h8zcv" Jan 27 09:12:28 crc kubenswrapper[4985]: I0127 09:12:28.745683 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fbe6009e-a66b-4082-b535-ec263c9e3d1a-public-tls-certs\") pod \"neutron-594666745c-h8zcv\" (UID: \"fbe6009e-a66b-4082-b535-ec263c9e3d1a\") " pod="openstack/neutron-594666745c-h8zcv" Jan 27 09:12:28 crc kubenswrapper[4985]: I0127 09:12:28.848461 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ds2h\" (UniqueName: \"kubernetes.io/projected/fbe6009e-a66b-4082-b535-ec263c9e3d1a-kube-api-access-4ds2h\") pod \"neutron-594666745c-h8zcv\" (UID: \"fbe6009e-a66b-4082-b535-ec263c9e3d1a\") " pod="openstack/neutron-594666745c-h8zcv" Jan 27 09:12:28 crc kubenswrapper[4985]: I0127 09:12:28.848618 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/fbe6009e-a66b-4082-b535-ec263c9e3d1a-combined-ca-bundle\") pod \"neutron-594666745c-h8zcv\" (UID: \"fbe6009e-a66b-4082-b535-ec263c9e3d1a\") " pod="openstack/neutron-594666745c-h8zcv" Jan 27 09:12:28 crc kubenswrapper[4985]: I0127 09:12:28.848729 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fbe6009e-a66b-4082-b535-ec263c9e3d1a-internal-tls-certs\") pod \"neutron-594666745c-h8zcv\" (UID: \"fbe6009e-a66b-4082-b535-ec263c9e3d1a\") " pod="openstack/neutron-594666745c-h8zcv" Jan 27 09:12:28 crc kubenswrapper[4985]: I0127 09:12:28.848788 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/fbe6009e-a66b-4082-b535-ec263c9e3d1a-httpd-config\") pod \"neutron-594666745c-h8zcv\" (UID: \"fbe6009e-a66b-4082-b535-ec263c9e3d1a\") " pod="openstack/neutron-594666745c-h8zcv" Jan 27 09:12:28 crc kubenswrapper[4985]: I0127 09:12:28.848833 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fbe6009e-a66b-4082-b535-ec263c9e3d1a-config\") pod \"neutron-594666745c-h8zcv\" (UID: \"fbe6009e-a66b-4082-b535-ec263c9e3d1a\") " pod="openstack/neutron-594666745c-h8zcv" Jan 27 09:12:28 crc kubenswrapper[4985]: I0127 09:12:28.848869 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fbe6009e-a66b-4082-b535-ec263c9e3d1a-ovndb-tls-certs\") pod \"neutron-594666745c-h8zcv\" (UID: \"fbe6009e-a66b-4082-b535-ec263c9e3d1a\") " pod="openstack/neutron-594666745c-h8zcv" Jan 27 09:12:28 crc kubenswrapper[4985]: I0127 09:12:28.848926 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fbe6009e-a66b-4082-b535-ec263c9e3d1a-public-tls-certs\") pod \"neutron-594666745c-h8zcv\" 
(UID: \"fbe6009e-a66b-4082-b535-ec263c9e3d1a\") " pod="openstack/neutron-594666745c-h8zcv" Jan 27 09:12:28 crc kubenswrapper[4985]: I0127 09:12:28.915778 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbe6009e-a66b-4082-b535-ec263c9e3d1a-combined-ca-bundle\") pod \"neutron-594666745c-h8zcv\" (UID: \"fbe6009e-a66b-4082-b535-ec263c9e3d1a\") " pod="openstack/neutron-594666745c-h8zcv" Jan 27 09:12:28 crc kubenswrapper[4985]: I0127 09:12:28.917324 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fbe6009e-a66b-4082-b535-ec263c9e3d1a-internal-tls-certs\") pod \"neutron-594666745c-h8zcv\" (UID: \"fbe6009e-a66b-4082-b535-ec263c9e3d1a\") " pod="openstack/neutron-594666745c-h8zcv" Jan 27 09:12:28 crc kubenswrapper[4985]: I0127 09:12:28.918324 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/fbe6009e-a66b-4082-b535-ec263c9e3d1a-httpd-config\") pod \"neutron-594666745c-h8zcv\" (UID: \"fbe6009e-a66b-4082-b535-ec263c9e3d1a\") " pod="openstack/neutron-594666745c-h8zcv" Jan 27 09:12:28 crc kubenswrapper[4985]: I0127 09:12:28.918384 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fbe6009e-a66b-4082-b535-ec263c9e3d1a-ovndb-tls-certs\") pod \"neutron-594666745c-h8zcv\" (UID: \"fbe6009e-a66b-4082-b535-ec263c9e3d1a\") " pod="openstack/neutron-594666745c-h8zcv" Jan 27 09:12:28 crc kubenswrapper[4985]: I0127 09:12:28.920859 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/fbe6009e-a66b-4082-b535-ec263c9e3d1a-config\") pod \"neutron-594666745c-h8zcv\" (UID: \"fbe6009e-a66b-4082-b535-ec263c9e3d1a\") " pod="openstack/neutron-594666745c-h8zcv" Jan 27 09:12:28 crc kubenswrapper[4985]: I0127 
09:12:28.928819 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fbe6009e-a66b-4082-b535-ec263c9e3d1a-public-tls-certs\") pod \"neutron-594666745c-h8zcv\" (UID: \"fbe6009e-a66b-4082-b535-ec263c9e3d1a\") " pod="openstack/neutron-594666745c-h8zcv" Jan 27 09:12:28 crc kubenswrapper[4985]: I0127 09:12:28.933918 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ds2h\" (UniqueName: \"kubernetes.io/projected/fbe6009e-a66b-4082-b535-ec263c9e3d1a-kube-api-access-4ds2h\") pod \"neutron-594666745c-h8zcv\" (UID: \"fbe6009e-a66b-4082-b535-ec263c9e3d1a\") " pod="openstack/neutron-594666745c-h8zcv" Jan 27 09:12:29 crc kubenswrapper[4985]: I0127 09:12:29.047619 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-594666745c-h8zcv" Jan 27 09:12:29 crc kubenswrapper[4985]: I0127 09:12:29.210967 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1bf8a06c-1e18-40f8-bcde-5996d4f80767","Type":"ContainerStarted","Data":"c485247ca576864cbc121bc64cf3b79601f66789ddb28ba9c38e4b3a30e079f9"} Jan 27 09:12:29 crc kubenswrapper[4985]: I0127 09:12:29.222490 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8131140c-f2fe-4495-8db7-d4ca6c2712a5","Type":"ContainerStarted","Data":"9c9031162b556e6ec43838485fb19772ef24ec05d28d6f063c4d687d99121589"} Jan 27 09:12:29 crc kubenswrapper[4985]: I0127 09:12:29.235460 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-85bdc684db-7q85p" event={"ID":"6e5ea4de-6280-4b44-9dfc-e27da3483c4f","Type":"ContainerStarted","Data":"d4c75075010687e0dcce9874d61f77c496648b9a79868e5303597d3792b9a8a4"} Jan 27 09:12:29 crc kubenswrapper[4985]: I0127 09:12:29.242784 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"61437724-d73d-4fe5-afbc-b4994d1eda63","Type":"ContainerStarted","Data":"23a65461941d887d16d18359c56276c760d24c9b64db33e840172ef73ae0062f"} Jan 27 09:12:29 crc kubenswrapper[4985]: I0127 09:12:29.365750 4985 generic.go:334] "Generic (PLEG): container finished" podID="635ffd32-9e1e-48a9-8560-36e92db872ee" containerID="7f1f756bf210c6df792b11aa33954026c0ec1705564e1adb57c28546d39bee61" exitCode=0 Jan 27 09:12:29 crc kubenswrapper[4985]: I0127 09:12:29.365844 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-685444497c-q8nbf" event={"ID":"635ffd32-9e1e-48a9-8560-36e92db872ee","Type":"ContainerDied","Data":"7f1f756bf210c6df792b11aa33954026c0ec1705564e1adb57c28546d39bee61"} Jan 27 09:12:29 crc kubenswrapper[4985]: I0127 09:12:29.386307 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c57bbbf74-nrsd9" event={"ID":"5fbbc8b9-e978-4565-9d19-bd139f2c4df7","Type":"ContainerStarted","Data":"f685f5d57bf90797e6960a0da540e2156808e3702d029e5231792e91efc492ec"} Jan 27 09:12:29 crc kubenswrapper[4985]: I0127 09:12:29.416178 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69b99cb974-fzls4" event={"ID":"24f5c0ab-206b-4a03-9e4b-c94feff53f9e","Type":"ContainerStarted","Data":"e04f08bac787c9509462cb33a26331d605f329a0e567e1857499aea059ea5614"} Jan 27 09:12:29 crc kubenswrapper[4985]: I0127 09:12:29.486133 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-69b99cb974-fzls4" podStartSLOduration=27.48609801 podStartE2EDuration="27.48609801s" podCreationTimestamp="2026-01-27 09:12:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:12:29.477088834 +0000 UTC m=+1133.768183675" watchObservedRunningTime="2026-01-27 09:12:29.48609801 +0000 UTC m=+1133.777192871" Jan 27 09:12:29 crc kubenswrapper[4985]: I0127 09:12:29.599322 4985 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5c57bbbf74-nrsd9" podStartSLOduration=27.598362929 podStartE2EDuration="27.598362929s" podCreationTimestamp="2026-01-27 09:12:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:12:29.561980811 +0000 UTC m=+1133.853075652" watchObservedRunningTime="2026-01-27 09:12:29.598362929 +0000 UTC m=+1133.889457770" Jan 27 09:12:30 crc kubenswrapper[4985]: I0127 09:12:30.049385 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-594666745c-h8zcv"] Jan 27 09:12:30 crc kubenswrapper[4985]: I0127 09:12:30.437074 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8131140c-f2fe-4495-8db7-d4ca6c2712a5","Type":"ContainerStarted","Data":"8d314c7a2be79e7137ae4149ba54d7530ad53ffb6447d2b9f17f5048490fbd03"} Jan 27 09:12:30 crc kubenswrapper[4985]: I0127 09:12:30.441833 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-594666745c-h8zcv" event={"ID":"fbe6009e-a66b-4082-b535-ec263c9e3d1a","Type":"ContainerStarted","Data":"394ae4bea110de26a400aae6cff692aebdf9a580a6d0be99b158d72b3a809e63"} Jan 27 09:12:30 crc kubenswrapper[4985]: I0127 09:12:30.467048 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-85bdc684db-7q85p" event={"ID":"6e5ea4de-6280-4b44-9dfc-e27da3483c4f","Type":"ContainerStarted","Data":"c8a7377bea9823b710e4fa053c3d56b96c5efbf32603a2894d0ecaaa3a7ea381"} Jan 27 09:12:30 crc kubenswrapper[4985]: I0127 09:12:30.467091 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-85bdc684db-7q85p" Jan 27 09:12:30 crc kubenswrapper[4985]: I0127 09:12:30.469350 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-685444497c-q8nbf" 
event={"ID":"635ffd32-9e1e-48a9-8560-36e92db872ee","Type":"ContainerStarted","Data":"abb1d2c7a620858ee2962d43c7b3c3f76d60afdf1e8c844a1749ce7974e81420"} Jan 27 09:12:30 crc kubenswrapper[4985]: I0127 09:12:30.469877 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-685444497c-q8nbf" Jan 27 09:12:30 crc kubenswrapper[4985]: I0127 09:12:30.491563 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1bf8a06c-1e18-40f8-bcde-5996d4f80767","Type":"ContainerStarted","Data":"b8f5d57fdfa411a702d4ed40dde405c4c89d98dc4aafc021724c96b6944f981c"} Jan 27 09:12:30 crc kubenswrapper[4985]: I0127 09:12:30.530282 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=7.530262037 podStartE2EDuration="7.530262037s" podCreationTimestamp="2026-01-27 09:12:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:12:30.491402892 +0000 UTC m=+1134.782497723" watchObservedRunningTime="2026-01-27 09:12:30.530262037 +0000 UTC m=+1134.821356878" Jan 27 09:12:30 crc kubenswrapper[4985]: I0127 09:12:30.537242 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-85bdc684db-7q85p" podStartSLOduration=5.537219568 podStartE2EDuration="5.537219568s" podCreationTimestamp="2026-01-27 09:12:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:12:30.525566658 +0000 UTC m=+1134.816661499" watchObservedRunningTime="2026-01-27 09:12:30.537219568 +0000 UTC m=+1134.828314409" Jan 27 09:12:30 crc kubenswrapper[4985]: I0127 09:12:30.571159 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-685444497c-q8nbf" podStartSLOduration=5.571138388 
podStartE2EDuration="5.571138388s" podCreationTimestamp="2026-01-27 09:12:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:12:30.553468034 +0000 UTC m=+1134.844562875" watchObservedRunningTime="2026-01-27 09:12:30.571138388 +0000 UTC m=+1134.862233229" Jan 27 09:12:30 crc kubenswrapper[4985]: I0127 09:12:30.617873 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=7.617839739 podStartE2EDuration="7.617839739s" podCreationTimestamp="2026-01-27 09:12:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:12:30.581065311 +0000 UTC m=+1134.872160152" watchObservedRunningTime="2026-01-27 09:12:30.617839739 +0000 UTC m=+1134.908934580" Jan 27 09:12:31 crc kubenswrapper[4985]: I0127 09:12:31.505621 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-594666745c-h8zcv" event={"ID":"fbe6009e-a66b-4082-b535-ec263c9e3d1a","Type":"ContainerStarted","Data":"cb462cfeaf054d29c4bb94b99be2d82b0753232e9c71dcb487a4b09f42c87209"} Jan 27 09:12:32 crc kubenswrapper[4985]: I0127 09:12:32.520720 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-594666745c-h8zcv" event={"ID":"fbe6009e-a66b-4082-b535-ec263c9e3d1a","Type":"ContainerStarted","Data":"1f9abe6c7d00c64798e599037d08ec3a3c7b58e57504d0941ea25190bdda50ec"} Jan 27 09:12:32 crc kubenswrapper[4985]: I0127 09:12:32.521083 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-594666745c-h8zcv" Jan 27 09:12:32 crc kubenswrapper[4985]: I0127 09:12:32.552566 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-594666745c-h8zcv" podStartSLOduration=4.55254284 podStartE2EDuration="4.55254284s" podCreationTimestamp="2026-01-27 
09:12:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:12:32.539099041 +0000 UTC m=+1136.830193912" watchObservedRunningTime="2026-01-27 09:12:32.55254284 +0000 UTC m=+1136.843637681" Jan 27 09:12:32 crc kubenswrapper[4985]: I0127 09:12:32.645422 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5c57bbbf74-nrsd9" Jan 27 09:12:32 crc kubenswrapper[4985]: I0127 09:12:32.646656 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5c57bbbf74-nrsd9" Jan 27 09:12:32 crc kubenswrapper[4985]: I0127 09:12:32.764159 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-69b99cb974-fzls4" Jan 27 09:12:32 crc kubenswrapper[4985]: I0127 09:12:32.764260 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-69b99cb974-fzls4" Jan 27 09:12:32 crc kubenswrapper[4985]: I0127 09:12:32.842945 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-8bc58698f-rrrdv" Jan 27 09:12:33 crc kubenswrapper[4985]: I0127 09:12:33.020927 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-779ccf4965-4dzg4" Jan 27 09:12:33 crc kubenswrapper[4985]: I0127 09:12:33.538200 4985 generic.go:334] "Generic (PLEG): container finished" podID="c2de0653-57ca-4d6b-a8a7-10b39b9c4678" containerID="f2497146faac80913a880fec95fff751d9cce625b3fa3b1128ca4cefe1e6d6a8" exitCode=0 Jan 27 09:12:33 crc kubenswrapper[4985]: I0127 09:12:33.538305 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-5tmw8" event={"ID":"c2de0653-57ca-4d6b-a8a7-10b39b9c4678","Type":"ContainerDied","Data":"f2497146faac80913a880fec95fff751d9cce625b3fa3b1128ca4cefe1e6d6a8"} Jan 27 09:12:34 crc kubenswrapper[4985]: I0127 09:12:34.294763 4985 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 27 09:12:34 crc kubenswrapper[4985]: I0127 09:12:34.294844 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 27 09:12:34 crc kubenswrapper[4985]: I0127 09:12:34.310157 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 27 09:12:34 crc kubenswrapper[4985]: I0127 09:12:34.310221 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 27 09:12:34 crc kubenswrapper[4985]: I0127 09:12:34.337252 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 27 09:12:34 crc kubenswrapper[4985]: I0127 09:12:34.389337 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 27 09:12:34 crc kubenswrapper[4985]: I0127 09:12:34.422948 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 27 09:12:34 crc kubenswrapper[4985]: I0127 09:12:34.443912 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 27 09:12:34 crc kubenswrapper[4985]: I0127 09:12:34.559048 4985 generic.go:334] "Generic (PLEG): container finished" podID="42a714c1-196c-4f83-b457-83847e9e97a6" containerID="191a820ded102b1df0ce5041493282d9c44a28b8671275e904fb00a92138d524" exitCode=0 Jan 27 09:12:34 crc kubenswrapper[4985]: I0127 09:12:34.559234 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-pj7x6" event={"ID":"42a714c1-196c-4f83-b457-83847e9e97a6","Type":"ContainerDied","Data":"191a820ded102b1df0ce5041493282d9c44a28b8671275e904fb00a92138d524"} Jan 27 09:12:34 crc kubenswrapper[4985]: I0127 
09:12:34.561398 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 27 09:12:34 crc kubenswrapper[4985]: I0127 09:12:34.561424 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 27 09:12:34 crc kubenswrapper[4985]: I0127 09:12:34.561439 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 27 09:12:34 crc kubenswrapper[4985]: I0127 09:12:34.561609 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 27 09:12:36 crc kubenswrapper[4985]: I0127 09:12:36.200788 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-685444497c-q8nbf" Jan 27 09:12:36 crc kubenswrapper[4985]: I0127 09:12:36.222684 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-77f8b4b57c-5gfx6" Jan 27 09:12:36 crc kubenswrapper[4985]: I0127 09:12:36.275822 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f6f8cb849-5zpnv"] Jan 27 09:12:36 crc kubenswrapper[4985]: I0127 09:12:36.276392 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6f6f8cb849-5zpnv" podUID="bdc4ad06-4155-49c6-b6aa-e82d8774f903" containerName="dnsmasq-dns" containerID="cri-o://c0fc1056214cb336c7046ec26a7500a41729c3a7889b2d8d02e295878199dea2" gracePeriod=10 Jan 27 09:12:36 crc kubenswrapper[4985]: I0127 09:12:36.627244 4985 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 09:12:36 crc kubenswrapper[4985]: I0127 09:12:36.627284 4985 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 09:12:36 crc kubenswrapper[4985]: I0127 09:12:36.633343 4985 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 09:12:36 crc 
kubenswrapper[4985]: I0127 09:12:36.633371 4985 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 09:12:37 crc kubenswrapper[4985]: I0127 09:12:37.651173 4985 generic.go:334] "Generic (PLEG): container finished" podID="bdc4ad06-4155-49c6-b6aa-e82d8774f903" containerID="c0fc1056214cb336c7046ec26a7500a41729c3a7889b2d8d02e295878199dea2" exitCode=0 Jan 27 09:12:37 crc kubenswrapper[4985]: I0127 09:12:37.651264 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6f8cb849-5zpnv" event={"ID":"bdc4ad06-4155-49c6-b6aa-e82d8774f903","Type":"ContainerDied","Data":"c0fc1056214cb336c7046ec26a7500a41729c3a7889b2d8d02e295878199dea2"} Jan 27 09:12:37 crc kubenswrapper[4985]: I0127 09:12:37.969374 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 27 09:12:37 crc kubenswrapper[4985]: I0127 09:12:37.969491 4985 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 09:12:37 crc kubenswrapper[4985]: I0127 09:12:37.981656 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 27 09:12:38 crc kubenswrapper[4985]: I0127 09:12:38.418336 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 27 09:12:38 crc kubenswrapper[4985]: I0127 09:12:38.418873 4985 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 09:12:38 crc kubenswrapper[4985]: I0127 09:12:38.427125 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 27 09:12:38 crc kubenswrapper[4985]: I0127 09:12:38.563982 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6f6f8cb849-5zpnv" podUID="bdc4ad06-4155-49c6-b6aa-e82d8774f903" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 
10.217.0.145:5353: connect: connection refused" Jan 27 09:12:40 crc kubenswrapper[4985]: I0127 09:12:40.313670 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-pj7x6" Jan 27 09:12:40 crc kubenswrapper[4985]: I0127 09:12:40.315294 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-5tmw8" Jan 27 09:12:40 crc kubenswrapper[4985]: I0127 09:12:40.488269 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zvdkf\" (UniqueName: \"kubernetes.io/projected/42a714c1-196c-4f83-b457-83847e9e97a6-kube-api-access-zvdkf\") pod \"42a714c1-196c-4f83-b457-83847e9e97a6\" (UID: \"42a714c1-196c-4f83-b457-83847e9e97a6\") " Jan 27 09:12:40 crc kubenswrapper[4985]: I0127 09:12:40.488885 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42a714c1-196c-4f83-b457-83847e9e97a6-scripts\") pod \"42a714c1-196c-4f83-b457-83847e9e97a6\" (UID: \"42a714c1-196c-4f83-b457-83847e9e97a6\") " Jan 27 09:12:40 crc kubenswrapper[4985]: I0127 09:12:40.488942 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42a714c1-196c-4f83-b457-83847e9e97a6-combined-ca-bundle\") pod \"42a714c1-196c-4f83-b457-83847e9e97a6\" (UID: \"42a714c1-196c-4f83-b457-83847e9e97a6\") " Jan 27 09:12:40 crc kubenswrapper[4985]: I0127 09:12:40.489045 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-px878\" (UniqueName: \"kubernetes.io/projected/c2de0653-57ca-4d6b-a8a7-10b39b9c4678-kube-api-access-px878\") pod \"c2de0653-57ca-4d6b-a8a7-10b39b9c4678\" (UID: \"c2de0653-57ca-4d6b-a8a7-10b39b9c4678\") " Jan 27 09:12:40 crc kubenswrapper[4985]: I0127 09:12:40.489114 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/c2de0653-57ca-4d6b-a8a7-10b39b9c4678-config-data\") pod \"c2de0653-57ca-4d6b-a8a7-10b39b9c4678\" (UID: \"c2de0653-57ca-4d6b-a8a7-10b39b9c4678\") " Jan 27 09:12:40 crc kubenswrapper[4985]: I0127 09:12:40.489181 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/42a714c1-196c-4f83-b457-83847e9e97a6-fernet-keys\") pod \"42a714c1-196c-4f83-b457-83847e9e97a6\" (UID: \"42a714c1-196c-4f83-b457-83847e9e97a6\") " Jan 27 09:12:40 crc kubenswrapper[4985]: I0127 09:12:40.489230 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c2de0653-57ca-4d6b-a8a7-10b39b9c4678-scripts\") pod \"c2de0653-57ca-4d6b-a8a7-10b39b9c4678\" (UID: \"c2de0653-57ca-4d6b-a8a7-10b39b9c4678\") " Jan 27 09:12:40 crc kubenswrapper[4985]: I0127 09:12:40.489263 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2de0653-57ca-4d6b-a8a7-10b39b9c4678-combined-ca-bundle\") pod \"c2de0653-57ca-4d6b-a8a7-10b39b9c4678\" (UID: \"c2de0653-57ca-4d6b-a8a7-10b39b9c4678\") " Jan 27 09:12:40 crc kubenswrapper[4985]: I0127 09:12:40.489291 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42a714c1-196c-4f83-b457-83847e9e97a6-config-data\") pod \"42a714c1-196c-4f83-b457-83847e9e97a6\" (UID: \"42a714c1-196c-4f83-b457-83847e9e97a6\") " Jan 27 09:12:40 crc kubenswrapper[4985]: I0127 09:12:40.489308 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/42a714c1-196c-4f83-b457-83847e9e97a6-credential-keys\") pod \"42a714c1-196c-4f83-b457-83847e9e97a6\" (UID: \"42a714c1-196c-4f83-b457-83847e9e97a6\") " Jan 27 09:12:40 crc kubenswrapper[4985]: I0127 
09:12:40.489341 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2de0653-57ca-4d6b-a8a7-10b39b9c4678-logs\") pod \"c2de0653-57ca-4d6b-a8a7-10b39b9c4678\" (UID: \"c2de0653-57ca-4d6b-a8a7-10b39b9c4678\") " Jan 27 09:12:40 crc kubenswrapper[4985]: I0127 09:12:40.490323 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2de0653-57ca-4d6b-a8a7-10b39b9c4678-logs" (OuterVolumeSpecName: "logs") pod "c2de0653-57ca-4d6b-a8a7-10b39b9c4678" (UID: "c2de0653-57ca-4d6b-a8a7-10b39b9c4678"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:12:40 crc kubenswrapper[4985]: I0127 09:12:40.501913 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42a714c1-196c-4f83-b457-83847e9e97a6-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "42a714c1-196c-4f83-b457-83847e9e97a6" (UID: "42a714c1-196c-4f83-b457-83847e9e97a6"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:12:40 crc kubenswrapper[4985]: I0127 09:12:40.507436 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2de0653-57ca-4d6b-a8a7-10b39b9c4678-scripts" (OuterVolumeSpecName: "scripts") pod "c2de0653-57ca-4d6b-a8a7-10b39b9c4678" (UID: "c2de0653-57ca-4d6b-a8a7-10b39b9c4678"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:12:40 crc kubenswrapper[4985]: I0127 09:12:40.567403 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42a714c1-196c-4f83-b457-83847e9e97a6-kube-api-access-zvdkf" (OuterVolumeSpecName: "kube-api-access-zvdkf") pod "42a714c1-196c-4f83-b457-83847e9e97a6" (UID: "42a714c1-196c-4f83-b457-83847e9e97a6"). InnerVolumeSpecName "kube-api-access-zvdkf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:12:40 crc kubenswrapper[4985]: I0127 09:12:40.567543 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2de0653-57ca-4d6b-a8a7-10b39b9c4678-kube-api-access-px878" (OuterVolumeSpecName: "kube-api-access-px878") pod "c2de0653-57ca-4d6b-a8a7-10b39b9c4678" (UID: "c2de0653-57ca-4d6b-a8a7-10b39b9c4678"). InnerVolumeSpecName "kube-api-access-px878". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:12:40 crc kubenswrapper[4985]: I0127 09:12:40.569347 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2de0653-57ca-4d6b-a8a7-10b39b9c4678-config-data" (OuterVolumeSpecName: "config-data") pod "c2de0653-57ca-4d6b-a8a7-10b39b9c4678" (UID: "c2de0653-57ca-4d6b-a8a7-10b39b9c4678"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:12:40 crc kubenswrapper[4985]: I0127 09:12:40.584748 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42a714c1-196c-4f83-b457-83847e9e97a6-scripts" (OuterVolumeSpecName: "scripts") pod "42a714c1-196c-4f83-b457-83847e9e97a6" (UID: "42a714c1-196c-4f83-b457-83847e9e97a6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:12:40 crc kubenswrapper[4985]: I0127 09:12:40.588771 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2de0653-57ca-4d6b-a8a7-10b39b9c4678-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c2de0653-57ca-4d6b-a8a7-10b39b9c4678" (UID: "c2de0653-57ca-4d6b-a8a7-10b39b9c4678"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:12:40 crc kubenswrapper[4985]: I0127 09:12:40.593347 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-px878\" (UniqueName: \"kubernetes.io/projected/c2de0653-57ca-4d6b-a8a7-10b39b9c4678-kube-api-access-px878\") on node \"crc\" DevicePath \"\"" Jan 27 09:12:40 crc kubenswrapper[4985]: I0127 09:12:40.593397 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2de0653-57ca-4d6b-a8a7-10b39b9c4678-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 09:12:40 crc kubenswrapper[4985]: I0127 09:12:40.593411 4985 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/42a714c1-196c-4f83-b457-83847e9e97a6-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 27 09:12:40 crc kubenswrapper[4985]: I0127 09:12:40.593423 4985 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c2de0653-57ca-4d6b-a8a7-10b39b9c4678-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 09:12:40 crc kubenswrapper[4985]: I0127 09:12:40.593437 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2de0653-57ca-4d6b-a8a7-10b39b9c4678-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:12:40 crc kubenswrapper[4985]: I0127 09:12:40.593450 4985 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2de0653-57ca-4d6b-a8a7-10b39b9c4678-logs\") on node \"crc\" DevicePath \"\"" Jan 27 09:12:40 crc kubenswrapper[4985]: I0127 09:12:40.593567 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zvdkf\" (UniqueName: \"kubernetes.io/projected/42a714c1-196c-4f83-b457-83847e9e97a6-kube-api-access-zvdkf\") on node \"crc\" DevicePath \"\"" Jan 27 09:12:40 crc kubenswrapper[4985]: I0127 09:12:40.593624 4985 
reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42a714c1-196c-4f83-b457-83847e9e97a6-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 09:12:40 crc kubenswrapper[4985]: I0127 09:12:40.600700 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42a714c1-196c-4f83-b457-83847e9e97a6-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "42a714c1-196c-4f83-b457-83847e9e97a6" (UID: "42a714c1-196c-4f83-b457-83847e9e97a6"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:12:40 crc kubenswrapper[4985]: I0127 09:12:40.637640 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42a714c1-196c-4f83-b457-83847e9e97a6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "42a714c1-196c-4f83-b457-83847e9e97a6" (UID: "42a714c1-196c-4f83-b457-83847e9e97a6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:12:40 crc kubenswrapper[4985]: I0127 09:12:40.660570 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f6f8cb849-5zpnv" Jan 27 09:12:40 crc kubenswrapper[4985]: I0127 09:12:40.711357 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42a714c1-196c-4f83-b457-83847e9e97a6-config-data" (OuterVolumeSpecName: "config-data") pod "42a714c1-196c-4f83-b457-83847e9e97a6" (UID: "42a714c1-196c-4f83-b457-83847e9e97a6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:12:40 crc kubenswrapper[4985]: I0127 09:12:40.712590 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42a714c1-196c-4f83-b457-83847e9e97a6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:12:40 crc kubenswrapper[4985]: I0127 09:12:40.712606 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42a714c1-196c-4f83-b457-83847e9e97a6-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 09:12:40 crc kubenswrapper[4985]: I0127 09:12:40.712614 4985 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/42a714c1-196c-4f83-b457-83847e9e97a6-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 27 09:12:40 crc kubenswrapper[4985]: I0127 09:12:40.735560 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6f8cb849-5zpnv" event={"ID":"bdc4ad06-4155-49c6-b6aa-e82d8774f903","Type":"ContainerDied","Data":"0b2a180693bf5a0f7a75d207a809852cb9532425b9bbddd07ff2f95a1604328e"} Jan 27 09:12:40 crc kubenswrapper[4985]: I0127 09:12:40.735612 4985 scope.go:117] "RemoveContainer" containerID="c0fc1056214cb336c7046ec26a7500a41729c3a7889b2d8d02e295878199dea2" Jan 27 09:12:40 crc kubenswrapper[4985]: I0127 09:12:40.735741 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f6f8cb849-5zpnv" Jan 27 09:12:40 crc kubenswrapper[4985]: I0127 09:12:40.745743 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-5tmw8" event={"ID":"c2de0653-57ca-4d6b-a8a7-10b39b9c4678","Type":"ContainerDied","Data":"e6f463db7e2badf323d561586ca510b2ea382438d62631ce220993a513abc69c"} Jan 27 09:12:40 crc kubenswrapper[4985]: I0127 09:12:40.745952 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6f463db7e2badf323d561586ca510b2ea382438d62631ce220993a513abc69c" Jan 27 09:12:40 crc kubenswrapper[4985]: I0127 09:12:40.745848 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-5tmw8" Jan 27 09:12:40 crc kubenswrapper[4985]: I0127 09:12:40.761034 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-pj7x6" event={"ID":"42a714c1-196c-4f83-b457-83847e9e97a6","Type":"ContainerDied","Data":"899bbb3ce6c8179b05c7d60a05ee4f18da3ec0589cbf4488bde72f13249e60bf"} Jan 27 09:12:40 crc kubenswrapper[4985]: I0127 09:12:40.761083 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="899bbb3ce6c8179b05c7d60a05ee4f18da3ec0589cbf4488bde72f13249e60bf" Jan 27 09:12:40 crc kubenswrapper[4985]: I0127 09:12:40.761170 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-pj7x6" Jan 27 09:12:40 crc kubenswrapper[4985]: I0127 09:12:40.789770 4985 scope.go:117] "RemoveContainer" containerID="6b60d257c1ac0842c8b872b99ee076f3c5f80d1dc4ba47b00e57eacab7d834eb" Jan 27 09:12:40 crc kubenswrapper[4985]: I0127 09:12:40.815126 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgwgj\" (UniqueName: \"kubernetes.io/projected/bdc4ad06-4155-49c6-b6aa-e82d8774f903-kube-api-access-pgwgj\") pod \"bdc4ad06-4155-49c6-b6aa-e82d8774f903\" (UID: \"bdc4ad06-4155-49c6-b6aa-e82d8774f903\") " Jan 27 09:12:40 crc kubenswrapper[4985]: I0127 09:12:40.815200 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bdc4ad06-4155-49c6-b6aa-e82d8774f903-ovsdbserver-nb\") pod \"bdc4ad06-4155-49c6-b6aa-e82d8774f903\" (UID: \"bdc4ad06-4155-49c6-b6aa-e82d8774f903\") " Jan 27 09:12:40 crc kubenswrapper[4985]: I0127 09:12:40.815258 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bdc4ad06-4155-49c6-b6aa-e82d8774f903-dns-svc\") pod \"bdc4ad06-4155-49c6-b6aa-e82d8774f903\" (UID: \"bdc4ad06-4155-49c6-b6aa-e82d8774f903\") " Jan 27 09:12:40 crc kubenswrapper[4985]: I0127 09:12:40.815298 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bdc4ad06-4155-49c6-b6aa-e82d8774f903-ovsdbserver-sb\") pod \"bdc4ad06-4155-49c6-b6aa-e82d8774f903\" (UID: \"bdc4ad06-4155-49c6-b6aa-e82d8774f903\") " Jan 27 09:12:40 crc kubenswrapper[4985]: I0127 09:12:40.815388 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bdc4ad06-4155-49c6-b6aa-e82d8774f903-dns-swift-storage-0\") pod \"bdc4ad06-4155-49c6-b6aa-e82d8774f903\" (UID: 
\"bdc4ad06-4155-49c6-b6aa-e82d8774f903\") " Jan 27 09:12:40 crc kubenswrapper[4985]: I0127 09:12:40.815462 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdc4ad06-4155-49c6-b6aa-e82d8774f903-config\") pod \"bdc4ad06-4155-49c6-b6aa-e82d8774f903\" (UID: \"bdc4ad06-4155-49c6-b6aa-e82d8774f903\") " Jan 27 09:12:40 crc kubenswrapper[4985]: I0127 09:12:40.833461 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdc4ad06-4155-49c6-b6aa-e82d8774f903-kube-api-access-pgwgj" (OuterVolumeSpecName: "kube-api-access-pgwgj") pod "bdc4ad06-4155-49c6-b6aa-e82d8774f903" (UID: "bdc4ad06-4155-49c6-b6aa-e82d8774f903"). InnerVolumeSpecName "kube-api-access-pgwgj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:12:40 crc kubenswrapper[4985]: I0127 09:12:40.918033 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pgwgj\" (UniqueName: \"kubernetes.io/projected/bdc4ad06-4155-49c6-b6aa-e82d8774f903-kube-api-access-pgwgj\") on node \"crc\" DevicePath \"\"" Jan 27 09:12:40 crc kubenswrapper[4985]: I0127 09:12:40.963427 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdc4ad06-4155-49c6-b6aa-e82d8774f903-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bdc4ad06-4155-49c6-b6aa-e82d8774f903" (UID: "bdc4ad06-4155-49c6-b6aa-e82d8774f903"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:12:40 crc kubenswrapper[4985]: I0127 09:12:40.973405 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdc4ad06-4155-49c6-b6aa-e82d8774f903-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bdc4ad06-4155-49c6-b6aa-e82d8774f903" (UID: "bdc4ad06-4155-49c6-b6aa-e82d8774f903"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:12:40 crc kubenswrapper[4985]: I0127 09:12:40.975183 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdc4ad06-4155-49c6-b6aa-e82d8774f903-config" (OuterVolumeSpecName: "config") pod "bdc4ad06-4155-49c6-b6aa-e82d8774f903" (UID: "bdc4ad06-4155-49c6-b6aa-e82d8774f903"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:12:40 crc kubenswrapper[4985]: I0127 09:12:40.975777 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdc4ad06-4155-49c6-b6aa-e82d8774f903-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "bdc4ad06-4155-49c6-b6aa-e82d8774f903" (UID: "bdc4ad06-4155-49c6-b6aa-e82d8774f903"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:12:40 crc kubenswrapper[4985]: I0127 09:12:40.987221 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdc4ad06-4155-49c6-b6aa-e82d8774f903-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bdc4ad06-4155-49c6-b6aa-e82d8774f903" (UID: "bdc4ad06-4155-49c6-b6aa-e82d8774f903"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:12:41 crc kubenswrapper[4985]: I0127 09:12:41.020145 4985 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bdc4ad06-4155-49c6-b6aa-e82d8774f903-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 09:12:41 crc kubenswrapper[4985]: I0127 09:12:41.020479 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdc4ad06-4155-49c6-b6aa-e82d8774f903-config\") on node \"crc\" DevicePath \"\"" Jan 27 09:12:41 crc kubenswrapper[4985]: I0127 09:12:41.020554 4985 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bdc4ad06-4155-49c6-b6aa-e82d8774f903-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 09:12:41 crc kubenswrapper[4985]: I0127 09:12:41.020612 4985 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bdc4ad06-4155-49c6-b6aa-e82d8774f903-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 09:12:41 crc kubenswrapper[4985]: I0127 09:12:41.020668 4985 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bdc4ad06-4155-49c6-b6aa-e82d8774f903-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 09:12:41 crc kubenswrapper[4985]: I0127 09:12:41.076092 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f6f8cb849-5zpnv"] Jan 27 09:12:41 crc kubenswrapper[4985]: I0127 09:12:41.089309 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6f6f8cb849-5zpnv"] Jan 27 09:12:41 crc kubenswrapper[4985]: I0127 09:12:41.456890 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-786bc44b8-jnlsn"] Jan 27 09:12:41 crc kubenswrapper[4985]: E0127 09:12:41.459215 4985 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="bdc4ad06-4155-49c6-b6aa-e82d8774f903" containerName="init"
Jan 27 09:12:41 crc kubenswrapper[4985]: I0127 09:12:41.459260 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdc4ad06-4155-49c6-b6aa-e82d8774f903" containerName="init"
Jan 27 09:12:41 crc kubenswrapper[4985]: E0127 09:12:41.459279 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42a714c1-196c-4f83-b457-83847e9e97a6" containerName="keystone-bootstrap"
Jan 27 09:12:41 crc kubenswrapper[4985]: I0127 09:12:41.459287 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="42a714c1-196c-4f83-b457-83847e9e97a6" containerName="keystone-bootstrap"
Jan 27 09:12:41 crc kubenswrapper[4985]: E0127 09:12:41.459301 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2de0653-57ca-4d6b-a8a7-10b39b9c4678" containerName="placement-db-sync"
Jan 27 09:12:41 crc kubenswrapper[4985]: I0127 09:12:41.459312 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2de0653-57ca-4d6b-a8a7-10b39b9c4678" containerName="placement-db-sync"
Jan 27 09:12:41 crc kubenswrapper[4985]: E0127 09:12:41.459326 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdc4ad06-4155-49c6-b6aa-e82d8774f903" containerName="dnsmasq-dns"
Jan 27 09:12:41 crc kubenswrapper[4985]: I0127 09:12:41.459332 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdc4ad06-4155-49c6-b6aa-e82d8774f903" containerName="dnsmasq-dns"
Jan 27 09:12:41 crc kubenswrapper[4985]: I0127 09:12:41.459561 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdc4ad06-4155-49c6-b6aa-e82d8774f903" containerName="dnsmasq-dns"
Jan 27 09:12:41 crc kubenswrapper[4985]: I0127 09:12:41.459591 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="42a714c1-196c-4f83-b457-83847e9e97a6" containerName="keystone-bootstrap"
Jan 27 09:12:41 crc kubenswrapper[4985]: I0127 09:12:41.459607 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2de0653-57ca-4d6b-a8a7-10b39b9c4678" containerName="placement-db-sync"
Jan 27 09:12:41 crc kubenswrapper[4985]: I0127 09:12:41.460658 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-786bc44b8-jnlsn"
Jan 27 09:12:41 crc kubenswrapper[4985]: I0127 09:12:41.471037 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-786bc44b8-jnlsn"]
Jan 27 09:12:41 crc kubenswrapper[4985]: I0127 09:12:41.476113 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Jan 27 09:12:41 crc kubenswrapper[4985]: I0127 09:12:41.476533 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Jan 27 09:12:41 crc kubenswrapper[4985]: I0127 09:12:41.476836 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-xr2ld"
Jan 27 09:12:41 crc kubenswrapper[4985]: I0127 09:12:41.476920 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc"
Jan 27 09:12:41 crc kubenswrapper[4985]: I0127 09:12:41.477130 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc"
Jan 27 09:12:41 crc kubenswrapper[4985]: I0127 09:12:41.477229 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Jan 27 09:12:41 crc kubenswrapper[4985]: I0127 09:12:41.535660 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rchfl\" (UniqueName: \"kubernetes.io/projected/230e3cc0-e86e-4443-bdd7-04b53908937e-kube-api-access-rchfl\") pod \"keystone-786bc44b8-jnlsn\" (UID: \"230e3cc0-e86e-4443-bdd7-04b53908937e\") " pod="openstack/keystone-786bc44b8-jnlsn"
Jan 27 09:12:41 crc kubenswrapper[4985]: I0127 09:12:41.535778 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/230e3cc0-e86e-4443-bdd7-04b53908937e-config-data\") pod \"keystone-786bc44b8-jnlsn\" (UID: \"230e3cc0-e86e-4443-bdd7-04b53908937e\") " pod="openstack/keystone-786bc44b8-jnlsn"
Jan 27 09:12:41 crc kubenswrapper[4985]: I0127 09:12:41.535938 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/230e3cc0-e86e-4443-bdd7-04b53908937e-fernet-keys\") pod \"keystone-786bc44b8-jnlsn\" (UID: \"230e3cc0-e86e-4443-bdd7-04b53908937e\") " pod="openstack/keystone-786bc44b8-jnlsn"
Jan 27 09:12:41 crc kubenswrapper[4985]: I0127 09:12:41.536022 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/230e3cc0-e86e-4443-bdd7-04b53908937e-combined-ca-bundle\") pod \"keystone-786bc44b8-jnlsn\" (UID: \"230e3cc0-e86e-4443-bdd7-04b53908937e\") " pod="openstack/keystone-786bc44b8-jnlsn"
Jan 27 09:12:41 crc kubenswrapper[4985]: I0127 09:12:41.536107 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/230e3cc0-e86e-4443-bdd7-04b53908937e-internal-tls-certs\") pod \"keystone-786bc44b8-jnlsn\" (UID: \"230e3cc0-e86e-4443-bdd7-04b53908937e\") " pod="openstack/keystone-786bc44b8-jnlsn"
Jan 27 09:12:41 crc kubenswrapper[4985]: I0127 09:12:41.536284 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/230e3cc0-e86e-4443-bdd7-04b53908937e-scripts\") pod \"keystone-786bc44b8-jnlsn\" (UID: \"230e3cc0-e86e-4443-bdd7-04b53908937e\") " pod="openstack/keystone-786bc44b8-jnlsn"
Jan 27 09:12:41 crc kubenswrapper[4985]: I0127 09:12:41.536432 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/230e3cc0-e86e-4443-bdd7-04b53908937e-public-tls-certs\") pod \"keystone-786bc44b8-jnlsn\" (UID: \"230e3cc0-e86e-4443-bdd7-04b53908937e\") " pod="openstack/keystone-786bc44b8-jnlsn"
Jan 27 09:12:41 crc kubenswrapper[4985]: I0127 09:12:41.536492 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/230e3cc0-e86e-4443-bdd7-04b53908937e-credential-keys\") pod \"keystone-786bc44b8-jnlsn\" (UID: \"230e3cc0-e86e-4443-bdd7-04b53908937e\") " pod="openstack/keystone-786bc44b8-jnlsn"
Jan 27 09:12:41 crc kubenswrapper[4985]: I0127 09:12:41.575782 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-878b56798-5d5wm"]
Jan 27 09:12:41 crc kubenswrapper[4985]: I0127 09:12:41.579845 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-878b56798-5d5wm"
Jan 27 09:12:41 crc kubenswrapper[4985]: I0127 09:12:41.585400 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Jan 27 09:12:41 crc kubenswrapper[4985]: I0127 09:12:41.585725 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc"
Jan 27 09:12:41 crc kubenswrapper[4985]: I0127 09:12:41.586193 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc"
Jan 27 09:12:41 crc kubenswrapper[4985]: I0127 09:12:41.586501 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-l7nsl"
Jan 27 09:12:41 crc kubenswrapper[4985]: I0127 09:12:41.586874 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Jan 27 09:12:41 crc kubenswrapper[4985]: I0127 09:12:41.588156 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-878b56798-5d5wm"]
Jan 27 09:12:41 crc kubenswrapper[4985]: I0127 09:12:41.640608 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/230e3cc0-e86e-4443-bdd7-04b53908937e-combined-ca-bundle\") pod \"keystone-786bc44b8-jnlsn\" (UID: \"230e3cc0-e86e-4443-bdd7-04b53908937e\") " pod="openstack/keystone-786bc44b8-jnlsn"
Jan 27 09:12:41 crc kubenswrapper[4985]: I0127 09:12:41.640675 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/230e3cc0-e86e-4443-bdd7-04b53908937e-internal-tls-certs\") pod \"keystone-786bc44b8-jnlsn\" (UID: \"230e3cc0-e86e-4443-bdd7-04b53908937e\") " pod="openstack/keystone-786bc44b8-jnlsn"
Jan 27 09:12:41 crc kubenswrapper[4985]: I0127 09:12:41.640735 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/230e3cc0-e86e-4443-bdd7-04b53908937e-scripts\") pod \"keystone-786bc44b8-jnlsn\" (UID: \"230e3cc0-e86e-4443-bdd7-04b53908937e\") " pod="openstack/keystone-786bc44b8-jnlsn"
Jan 27 09:12:41 crc kubenswrapper[4985]: I0127 09:12:41.640786 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/230e3cc0-e86e-4443-bdd7-04b53908937e-public-tls-certs\") pod \"keystone-786bc44b8-jnlsn\" (UID: \"230e3cc0-e86e-4443-bdd7-04b53908937e\") " pod="openstack/keystone-786bc44b8-jnlsn"
Jan 27 09:12:41 crc kubenswrapper[4985]: I0127 09:12:41.640816 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/230e3cc0-e86e-4443-bdd7-04b53908937e-credential-keys\") pod \"keystone-786bc44b8-jnlsn\" (UID: \"230e3cc0-e86e-4443-bdd7-04b53908937e\") " pod="openstack/keystone-786bc44b8-jnlsn"
Jan 27 09:12:41 crc kubenswrapper[4985]: I0127 09:12:41.640853 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rchfl\" (UniqueName: \"kubernetes.io/projected/230e3cc0-e86e-4443-bdd7-04b53908937e-kube-api-access-rchfl\") pod \"keystone-786bc44b8-jnlsn\" (UID: \"230e3cc0-e86e-4443-bdd7-04b53908937e\") " pod="openstack/keystone-786bc44b8-jnlsn"
Jan 27 09:12:41 crc kubenswrapper[4985]: I0127 09:12:41.640906 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/230e3cc0-e86e-4443-bdd7-04b53908937e-config-data\") pod \"keystone-786bc44b8-jnlsn\" (UID: \"230e3cc0-e86e-4443-bdd7-04b53908937e\") " pod="openstack/keystone-786bc44b8-jnlsn"
Jan 27 09:12:41 crc kubenswrapper[4985]: I0127 09:12:41.640966 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/230e3cc0-e86e-4443-bdd7-04b53908937e-fernet-keys\") pod \"keystone-786bc44b8-jnlsn\" (UID: \"230e3cc0-e86e-4443-bdd7-04b53908937e\") " pod="openstack/keystone-786bc44b8-jnlsn"
Jan 27 09:12:41 crc kubenswrapper[4985]: I0127 09:12:41.649039 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/230e3cc0-e86e-4443-bdd7-04b53908937e-credential-keys\") pod \"keystone-786bc44b8-jnlsn\" (UID: \"230e3cc0-e86e-4443-bdd7-04b53908937e\") " pod="openstack/keystone-786bc44b8-jnlsn"
Jan 27 09:12:41 crc kubenswrapper[4985]: I0127 09:12:41.652800 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/230e3cc0-e86e-4443-bdd7-04b53908937e-internal-tls-certs\") pod \"keystone-786bc44b8-jnlsn\" (UID: \"230e3cc0-e86e-4443-bdd7-04b53908937e\") " pod="openstack/keystone-786bc44b8-jnlsn"
Jan 27 09:12:41 crc kubenswrapper[4985]: I0127 09:12:41.652901 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/230e3cc0-e86e-4443-bdd7-04b53908937e-combined-ca-bundle\") pod \"keystone-786bc44b8-jnlsn\" (UID: \"230e3cc0-e86e-4443-bdd7-04b53908937e\") " pod="openstack/keystone-786bc44b8-jnlsn"
Jan 27 09:12:41 crc kubenswrapper[4985]: I0127 09:12:41.654333 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/230e3cc0-e86e-4443-bdd7-04b53908937e-public-tls-certs\") pod \"keystone-786bc44b8-jnlsn\" (UID: \"230e3cc0-e86e-4443-bdd7-04b53908937e\") " pod="openstack/keystone-786bc44b8-jnlsn"
Jan 27 09:12:41 crc kubenswrapper[4985]: I0127 09:12:41.656067 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/230e3cc0-e86e-4443-bdd7-04b53908937e-fernet-keys\") pod \"keystone-786bc44b8-jnlsn\" (UID: \"230e3cc0-e86e-4443-bdd7-04b53908937e\") " pod="openstack/keystone-786bc44b8-jnlsn"
Jan 27 09:12:41 crc kubenswrapper[4985]: I0127 09:12:41.667905 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/230e3cc0-e86e-4443-bdd7-04b53908937e-scripts\") pod \"keystone-786bc44b8-jnlsn\" (UID: \"230e3cc0-e86e-4443-bdd7-04b53908937e\") " pod="openstack/keystone-786bc44b8-jnlsn"
Jan 27 09:12:41 crc kubenswrapper[4985]: I0127 09:12:41.670865 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/230e3cc0-e86e-4443-bdd7-04b53908937e-config-data\") pod \"keystone-786bc44b8-jnlsn\" (UID: \"230e3cc0-e86e-4443-bdd7-04b53908937e\") " pod="openstack/keystone-786bc44b8-jnlsn"
Jan 27 09:12:41 crc kubenswrapper[4985]: I0127 09:12:41.677276 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rchfl\" (UniqueName: \"kubernetes.io/projected/230e3cc0-e86e-4443-bdd7-04b53908937e-kube-api-access-rchfl\") pod \"keystone-786bc44b8-jnlsn\" (UID: \"230e3cc0-e86e-4443-bdd7-04b53908937e\") " pod="openstack/keystone-786bc44b8-jnlsn"
Jan 27 09:12:41 crc kubenswrapper[4985]: I0127 09:12:41.743439 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/58162a9a-ce9b-41af-a664-a360c97d40af-internal-tls-certs\") pod \"placement-878b56798-5d5wm\" (UID: \"58162a9a-ce9b-41af-a664-a360c97d40af\") " pod="openstack/placement-878b56798-5d5wm"
Jan 27 09:12:41 crc kubenswrapper[4985]: I0127 09:12:41.743592 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldzjv\" (UniqueName: \"kubernetes.io/projected/58162a9a-ce9b-41af-a664-a360c97d40af-kube-api-access-ldzjv\") pod \"placement-878b56798-5d5wm\" (UID: \"58162a9a-ce9b-41af-a664-a360c97d40af\") " pod="openstack/placement-878b56798-5d5wm"
Jan 27 09:12:41 crc kubenswrapper[4985]: I0127 09:12:41.743625 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/58162a9a-ce9b-41af-a664-a360c97d40af-public-tls-certs\") pod \"placement-878b56798-5d5wm\" (UID: \"58162a9a-ce9b-41af-a664-a360c97d40af\") " pod="openstack/placement-878b56798-5d5wm"
Jan 27 09:12:41 crc kubenswrapper[4985]: I0127 09:12:41.743653 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58162a9a-ce9b-41af-a664-a360c97d40af-logs\") pod \"placement-878b56798-5d5wm\" (UID: \"58162a9a-ce9b-41af-a664-a360c97d40af\") " pod="openstack/placement-878b56798-5d5wm"
Jan 27 09:12:41 crc kubenswrapper[4985]: I0127 09:12:41.743695 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58162a9a-ce9b-41af-a664-a360c97d40af-scripts\") pod \"placement-878b56798-5d5wm\" (UID: \"58162a9a-ce9b-41af-a664-a360c97d40af\") " pod="openstack/placement-878b56798-5d5wm"
Jan 27 09:12:41 crc kubenswrapper[4985]: I0127 09:12:41.743775 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58162a9a-ce9b-41af-a664-a360c97d40af-combined-ca-bundle\") pod \"placement-878b56798-5d5wm\" (UID: \"58162a9a-ce9b-41af-a664-a360c97d40af\") " pod="openstack/placement-878b56798-5d5wm"
Jan 27 09:12:41 crc kubenswrapper[4985]: I0127 09:12:41.743831 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58162a9a-ce9b-41af-a664-a360c97d40af-config-data\") pod \"placement-878b56798-5d5wm\" (UID: \"58162a9a-ce9b-41af-a664-a360c97d40af\") " pod="openstack/placement-878b56798-5d5wm"
Jan 27 09:12:41 crc kubenswrapper[4985]: I0127 09:12:41.771944 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-pj5kt" event={"ID":"a9c4f8a3-0f30-4724-84bd-952a5d5170cb","Type":"ContainerStarted","Data":"18a9911333a575795b189bfb6d05e833f4a3e28314c0f95d0bd01906bc1e8887"}
Jan 27 09:12:41 crc kubenswrapper[4985]: I0127 09:12:41.773818 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-95tb6" event={"ID":"31214ba8-5f89-4b54-9293-b6cd43c8cbe5","Type":"ContainerStarted","Data":"4cbb89a38bd20366f4e981e913b464ff23b597b1016961d22baad2c4ccafe017"}
Jan 27 09:12:41 crc kubenswrapper[4985]: I0127 09:12:41.776037 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"61437724-d73d-4fe5-afbc-b4994d1eda63","Type":"ContainerStarted","Data":"a3f073b553dbb350ad0bb13a1a8e9a64bd892b0e20d96245bb21f9941233a8bf"}
Jan 27 09:12:41 crc kubenswrapper[4985]: I0127 09:12:41.780339 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-786bc44b8-jnlsn"
Jan 27 09:12:41 crc kubenswrapper[4985]: I0127 09:12:41.806315 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-pj5kt" podStartSLOduration=2.886185058 podStartE2EDuration="47.806283619s" podCreationTimestamp="2026-01-27 09:11:54 +0000 UTC" firstStartedPulling="2026-01-27 09:11:55.463654064 +0000 UTC m=+1099.754748905" lastFinishedPulling="2026-01-27 09:12:40.383752625 +0000 UTC m=+1144.674847466" observedRunningTime="2026-01-27 09:12:41.801422286 +0000 UTC m=+1146.092517147" watchObservedRunningTime="2026-01-27 09:12:41.806283619 +0000 UTC m=+1146.097378460"
Jan 27 09:12:41 crc kubenswrapper[4985]: I0127 09:12:41.847087 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldzjv\" (UniqueName: \"kubernetes.io/projected/58162a9a-ce9b-41af-a664-a360c97d40af-kube-api-access-ldzjv\") pod \"placement-878b56798-5d5wm\" (UID: \"58162a9a-ce9b-41af-a664-a360c97d40af\") " pod="openstack/placement-878b56798-5d5wm"
Jan 27 09:12:41 crc kubenswrapper[4985]: I0127 09:12:41.847155 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/58162a9a-ce9b-41af-a664-a360c97d40af-public-tls-certs\") pod \"placement-878b56798-5d5wm\" (UID: \"58162a9a-ce9b-41af-a664-a360c97d40af\") " pod="openstack/placement-878b56798-5d5wm"
Jan 27 09:12:41 crc kubenswrapper[4985]: I0127 09:12:41.847181 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58162a9a-ce9b-41af-a664-a360c97d40af-logs\") pod \"placement-878b56798-5d5wm\" (UID: \"58162a9a-ce9b-41af-a664-a360c97d40af\") " pod="openstack/placement-878b56798-5d5wm"
Jan 27 09:12:41 crc kubenswrapper[4985]: I0127 09:12:41.847218 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58162a9a-ce9b-41af-a664-a360c97d40af-scripts\") pod \"placement-878b56798-5d5wm\" (UID: \"58162a9a-ce9b-41af-a664-a360c97d40af\") " pod="openstack/placement-878b56798-5d5wm"
Jan 27 09:12:41 crc kubenswrapper[4985]: I0127 09:12:41.847241 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58162a9a-ce9b-41af-a664-a360c97d40af-combined-ca-bundle\") pod \"placement-878b56798-5d5wm\" (UID: \"58162a9a-ce9b-41af-a664-a360c97d40af\") " pod="openstack/placement-878b56798-5d5wm"
Jan 27 09:12:41 crc kubenswrapper[4985]: I0127 09:12:41.847287 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58162a9a-ce9b-41af-a664-a360c97d40af-config-data\") pod \"placement-878b56798-5d5wm\" (UID: \"58162a9a-ce9b-41af-a664-a360c97d40af\") " pod="openstack/placement-878b56798-5d5wm"
Jan 27 09:12:41 crc kubenswrapper[4985]: I0127 09:12:41.864789 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/58162a9a-ce9b-41af-a664-a360c97d40af-internal-tls-certs\") pod \"placement-878b56798-5d5wm\" (UID: \"58162a9a-ce9b-41af-a664-a360c97d40af\") " pod="openstack/placement-878b56798-5d5wm"
Jan 27 09:12:41 crc kubenswrapper[4985]: I0127 09:12:41.865155 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58162a9a-ce9b-41af-a664-a360c97d40af-logs\") pod \"placement-878b56798-5d5wm\" (UID: \"58162a9a-ce9b-41af-a664-a360c97d40af\") " pod="openstack/placement-878b56798-5d5wm"
Jan 27 09:12:41 crc kubenswrapper[4985]: I0127 09:12:41.902186 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/58162a9a-ce9b-41af-a664-a360c97d40af-internal-tls-certs\") pod \"placement-878b56798-5d5wm\" (UID: \"58162a9a-ce9b-41af-a664-a360c97d40af\") " pod="openstack/placement-878b56798-5d5wm"
Jan 27 09:12:41 crc kubenswrapper[4985]: I0127 09:12:41.902471 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58162a9a-ce9b-41af-a664-a360c97d40af-scripts\") pod \"placement-878b56798-5d5wm\" (UID: \"58162a9a-ce9b-41af-a664-a360c97d40af\") " pod="openstack/placement-878b56798-5d5wm"
Jan 27 09:12:41 crc kubenswrapper[4985]: I0127 09:12:41.906869 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58162a9a-ce9b-41af-a664-a360c97d40af-config-data\") pod \"placement-878b56798-5d5wm\" (UID: \"58162a9a-ce9b-41af-a664-a360c97d40af\") " pod="openstack/placement-878b56798-5d5wm"
Jan 27 09:12:41 crc kubenswrapper[4985]: I0127 09:12:41.908136 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/58162a9a-ce9b-41af-a664-a360c97d40af-public-tls-certs\") pod \"placement-878b56798-5d5wm\" (UID: \"58162a9a-ce9b-41af-a664-a360c97d40af\") " pod="openstack/placement-878b56798-5d5wm"
Jan 27 09:12:41 crc kubenswrapper[4985]: I0127 09:12:41.914670 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldzjv\" (UniqueName: \"kubernetes.io/projected/58162a9a-ce9b-41af-a664-a360c97d40af-kube-api-access-ldzjv\") pod \"placement-878b56798-5d5wm\" (UID: \"58162a9a-ce9b-41af-a664-a360c97d40af\") " pod="openstack/placement-878b56798-5d5wm"
Jan 27 09:12:41 crc kubenswrapper[4985]: I0127 09:12:41.915153 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58162a9a-ce9b-41af-a664-a360c97d40af-combined-ca-bundle\") pod \"placement-878b56798-5d5wm\" (UID: \"58162a9a-ce9b-41af-a664-a360c97d40af\") " pod="openstack/placement-878b56798-5d5wm"
Jan 27 09:12:41 crc kubenswrapper[4985]: I0127 09:12:41.931425 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-95tb6" podStartSLOduration=3.420817794 podStartE2EDuration="49.93139568s" podCreationTimestamp="2026-01-27 09:11:52 +0000 UTC" firstStartedPulling="2026-01-27 09:11:53.872188843 +0000 UTC m=+1098.163283684" lastFinishedPulling="2026-01-27 09:12:40.382766729 +0000 UTC m=+1144.673861570" observedRunningTime="2026-01-27 09:12:41.834149534 +0000 UTC m=+1146.125244375" watchObservedRunningTime="2026-01-27 09:12:41.93139568 +0000 UTC m=+1146.222490521"
Jan 27 09:12:42 crc kubenswrapper[4985]: I0127 09:12:42.209726 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-878b56798-5d5wm"
Jan 27 09:12:42 crc kubenswrapper[4985]: I0127 09:12:42.408785 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-786bc44b8-jnlsn"]
Jan 27 09:12:42 crc kubenswrapper[4985]: I0127 09:12:42.482004 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdc4ad06-4155-49c6-b6aa-e82d8774f903" path="/var/lib/kubelet/pods/bdc4ad06-4155-49c6-b6aa-e82d8774f903/volumes"
Jan 27 09:12:42 crc kubenswrapper[4985]: I0127 09:12:42.650422 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5c57bbbf74-nrsd9" podUID="5fbbc8b9-e978-4565-9d19-bd139f2c4df7" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused"
Jan 27 09:12:42 crc kubenswrapper[4985]: I0127 09:12:42.765672 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-69b99cb974-fzls4" podUID="24f5c0ab-206b-4a03-9e4b-c94feff53f9e" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused"
Jan 27 09:12:42 crc kubenswrapper[4985]: I0127 09:12:42.823795 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-878b56798-5d5wm"]
Jan 27 09:12:42 crc kubenswrapper[4985]: I0127 09:12:42.841752 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-786bc44b8-jnlsn" event={"ID":"230e3cc0-e86e-4443-bdd7-04b53908937e","Type":"ContainerStarted","Data":"785dae86c37170537d311d1fb80e298b468a79fedbaff8d73286dd26825a4dc9"}
Jan 27 09:12:43 crc kubenswrapper[4985]: I0127 09:12:43.858298 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-878b56798-5d5wm" event={"ID":"58162a9a-ce9b-41af-a664-a360c97d40af","Type":"ContainerStarted","Data":"29b18a79448943c2f8137a0abf99e12f6721b4815f55d6a2f49ff6ab91b6b1d9"}
Jan 27 09:12:43 crc kubenswrapper[4985]: I0127 09:12:43.858754 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-878b56798-5d5wm" event={"ID":"58162a9a-ce9b-41af-a664-a360c97d40af","Type":"ContainerStarted","Data":"60714b166630259b4fe9959b4800cb8c86d01ddeb9ab678ec785befc6efa377b"}
Jan 27 09:12:43 crc kubenswrapper[4985]: I0127 09:12:43.858768 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-878b56798-5d5wm" event={"ID":"58162a9a-ce9b-41af-a664-a360c97d40af","Type":"ContainerStarted","Data":"e0ce6034c93bb1ecc63bf1dbfd3f716d58781f5f51e2bd1349f5223d52b4075a"}
Jan 27 09:12:43 crc kubenswrapper[4985]: I0127 09:12:43.860107 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-878b56798-5d5wm"
Jan 27 09:12:43 crc kubenswrapper[4985]: I0127 09:12:43.860128 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-878b56798-5d5wm"
Jan 27 09:12:43 crc kubenswrapper[4985]: I0127 09:12:43.872763 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-786bc44b8-jnlsn" event={"ID":"230e3cc0-e86e-4443-bdd7-04b53908937e","Type":"ContainerStarted","Data":"9b2143c1797ea37aae8f5937222dc3e6626d273e5e7a825233fa234cced13706"}
Jan 27 09:12:43 crc kubenswrapper[4985]: I0127 09:12:43.872920 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-786bc44b8-jnlsn"
Jan 27 09:12:43 crc kubenswrapper[4985]: I0127 09:12:43.885598 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-878b56798-5d5wm" podStartSLOduration=2.885578585 podStartE2EDuration="2.885578585s" podCreationTimestamp="2026-01-27 09:12:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:12:43.881110042 +0000 UTC m=+1148.172204883" watchObservedRunningTime="2026-01-27 09:12:43.885578585 +0000 UTC m=+1148.176673446"
Jan 27 09:12:43 crc kubenswrapper[4985]: I0127 09:12:43.914438 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-786bc44b8-jnlsn" podStartSLOduration=2.914419926 podStartE2EDuration="2.914419926s" podCreationTimestamp="2026-01-27 09:12:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:12:43.906162819 +0000 UTC m=+1148.197257660" watchObservedRunningTime="2026-01-27 09:12:43.914419926 +0000 UTC m=+1148.205514767"
Jan 27 09:12:46 crc kubenswrapper[4985]: I0127 09:12:46.925135 4985 generic.go:334] "Generic (PLEG): container finished" podID="31214ba8-5f89-4b54-9293-b6cd43c8cbe5" containerID="4cbb89a38bd20366f4e981e913b464ff23b597b1016961d22baad2c4ccafe017" exitCode=0
Jan 27 09:12:46 crc kubenswrapper[4985]: I0127 09:12:46.925339 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-95tb6" event={"ID":"31214ba8-5f89-4b54-9293-b6cd43c8cbe5","Type":"ContainerDied","Data":"4cbb89a38bd20366f4e981e913b464ff23b597b1016961d22baad2c4ccafe017"}
Jan 27 09:12:47 crc kubenswrapper[4985]: I0127 09:12:47.941982 4985 generic.go:334] "Generic (PLEG): container finished" podID="a9c4f8a3-0f30-4724-84bd-952a5d5170cb" containerID="18a9911333a575795b189bfb6d05e833f4a3e28314c0f95d0bd01906bc1e8887" exitCode=0
Jan 27 09:12:47 crc kubenswrapper[4985]: I0127 09:12:47.942237 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-pj5kt" event={"ID":"a9c4f8a3-0f30-4724-84bd-952a5d5170cb","Type":"ContainerDied","Data":"18a9911333a575795b189bfb6d05e833f4a3e28314c0f95d0bd01906bc1e8887"}
Jan 27 09:12:49 crc kubenswrapper[4985]: I0127 09:12:49.886419 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-95tb6"
Jan 27 09:12:49 crc kubenswrapper[4985]: I0127 09:12:49.891003 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-pj5kt"
Jan 27 09:12:49 crc kubenswrapper[4985]: I0127 09:12:49.974587 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-pj5kt" event={"ID":"a9c4f8a3-0f30-4724-84bd-952a5d5170cb","Type":"ContainerDied","Data":"b6e69775d16d19939bd39a83b98223d080b2c4d88aeec2871d69daae88958025"}
Jan 27 09:12:49 crc kubenswrapper[4985]: I0127 09:12:49.974650 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6e69775d16d19939bd39a83b98223d080b2c4d88aeec2871d69daae88958025"
Jan 27 09:12:49 crc kubenswrapper[4985]: I0127 09:12:49.974777 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-pj5kt"
Jan 27 09:12:49 crc kubenswrapper[4985]: I0127 09:12:49.975917 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-95tb6" event={"ID":"31214ba8-5f89-4b54-9293-b6cd43c8cbe5","Type":"ContainerDied","Data":"18165cbf4587c008a4763a3d6053a2ca7948d9ce4b44869703ea8cca85469b5f"}
Jan 27 09:12:49 crc kubenswrapper[4985]: I0127 09:12:49.975949 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18165cbf4587c008a4763a3d6053a2ca7948d9ce4b44869703ea8cca85469b5f"
Jan 27 09:12:49 crc kubenswrapper[4985]: I0127 09:12:49.976022 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-95tb6"
Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.055371 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9c4f8a3-0f30-4724-84bd-952a5d5170cb-config-data\") pod \"a9c4f8a3-0f30-4724-84bd-952a5d5170cb\" (UID: \"a9c4f8a3-0f30-4724-84bd-952a5d5170cb\") "
Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.055472 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9c4f8a3-0f30-4724-84bd-952a5d5170cb-combined-ca-bundle\") pod \"a9c4f8a3-0f30-4724-84bd-952a5d5170cb\" (UID: \"a9c4f8a3-0f30-4724-84bd-952a5d5170cb\") "
Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.055503 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a9c4f8a3-0f30-4724-84bd-952a5d5170cb-db-sync-config-data\") pod \"a9c4f8a3-0f30-4724-84bd-952a5d5170cb\" (UID: \"a9c4f8a3-0f30-4724-84bd-952a5d5170cb\") "
Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.055564 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a9c4f8a3-0f30-4724-84bd-952a5d5170cb-etc-machine-id\") pod \"a9c4f8a3-0f30-4724-84bd-952a5d5170cb\" (UID: \"a9c4f8a3-0f30-4724-84bd-952a5d5170cb\") "
Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.055625 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9c4f8a3-0f30-4724-84bd-952a5d5170cb-scripts\") pod \"a9c4f8a3-0f30-4724-84bd-952a5d5170cb\" (UID: \"a9c4f8a3-0f30-4724-84bd-952a5d5170cb\") "
Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.055724 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/31214ba8-5f89-4b54-9293-b6cd43c8cbe5-db-sync-config-data\") pod \"31214ba8-5f89-4b54-9293-b6cd43c8cbe5\" (UID: \"31214ba8-5f89-4b54-9293-b6cd43c8cbe5\") "
Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.055747 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2t2l\" (UniqueName: \"kubernetes.io/projected/a9c4f8a3-0f30-4724-84bd-952a5d5170cb-kube-api-access-m2t2l\") pod \"a9c4f8a3-0f30-4724-84bd-952a5d5170cb\" (UID: \"a9c4f8a3-0f30-4724-84bd-952a5d5170cb\") "
Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.055773 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z46sr\" (UniqueName: \"kubernetes.io/projected/31214ba8-5f89-4b54-9293-b6cd43c8cbe5-kube-api-access-z46sr\") pod \"31214ba8-5f89-4b54-9293-b6cd43c8cbe5\" (UID: \"31214ba8-5f89-4b54-9293-b6cd43c8cbe5\") "
Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.055811 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31214ba8-5f89-4b54-9293-b6cd43c8cbe5-combined-ca-bundle\") pod \"31214ba8-5f89-4b54-9293-b6cd43c8cbe5\" (UID: \"31214ba8-5f89-4b54-9293-b6cd43c8cbe5\") "
Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.062086 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31214ba8-5f89-4b54-9293-b6cd43c8cbe5-kube-api-access-z46sr" (OuterVolumeSpecName: "kube-api-access-z46sr") pod "31214ba8-5f89-4b54-9293-b6cd43c8cbe5" (UID: "31214ba8-5f89-4b54-9293-b6cd43c8cbe5"). InnerVolumeSpecName "kube-api-access-z46sr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.063018 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9c4f8a3-0f30-4724-84bd-952a5d5170cb-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "a9c4f8a3-0f30-4724-84bd-952a5d5170cb" (UID: "a9c4f8a3-0f30-4724-84bd-952a5d5170cb"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.067664 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9c4f8a3-0f30-4724-84bd-952a5d5170cb-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "a9c4f8a3-0f30-4724-84bd-952a5d5170cb" (UID: "a9c4f8a3-0f30-4724-84bd-952a5d5170cb"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.067713 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9c4f8a3-0f30-4724-84bd-952a5d5170cb-scripts" (OuterVolumeSpecName: "scripts") pod "a9c4f8a3-0f30-4724-84bd-952a5d5170cb" (UID: "a9c4f8a3-0f30-4724-84bd-952a5d5170cb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.067715 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9c4f8a3-0f30-4724-84bd-952a5d5170cb-kube-api-access-m2t2l" (OuterVolumeSpecName: "kube-api-access-m2t2l") pod "a9c4f8a3-0f30-4724-84bd-952a5d5170cb" (UID: "a9c4f8a3-0f30-4724-84bd-952a5d5170cb"). InnerVolumeSpecName "kube-api-access-m2t2l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.067985 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31214ba8-5f89-4b54-9293-b6cd43c8cbe5-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "31214ba8-5f89-4b54-9293-b6cd43c8cbe5" (UID: "31214ba8-5f89-4b54-9293-b6cd43c8cbe5"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.115808 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9c4f8a3-0f30-4724-84bd-952a5d5170cb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a9c4f8a3-0f30-4724-84bd-952a5d5170cb" (UID: "a9c4f8a3-0f30-4724-84bd-952a5d5170cb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.128082 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9c4f8a3-0f30-4724-84bd-952a5d5170cb-config-data" (OuterVolumeSpecName: "config-data") pod "a9c4f8a3-0f30-4724-84bd-952a5d5170cb" (UID: "a9c4f8a3-0f30-4724-84bd-952a5d5170cb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.138862 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31214ba8-5f89-4b54-9293-b6cd43c8cbe5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "31214ba8-5f89-4b54-9293-b6cd43c8cbe5" (UID: "31214ba8-5f89-4b54-9293-b6cd43c8cbe5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.159144 4985 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/31214ba8-5f89-4b54-9293-b6cd43c8cbe5-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.159173 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m2t2l\" (UniqueName: \"kubernetes.io/projected/a9c4f8a3-0f30-4724-84bd-952a5d5170cb-kube-api-access-m2t2l\") on node \"crc\" DevicePath \"\""
Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.159186 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z46sr\" (UniqueName: \"kubernetes.io/projected/31214ba8-5f89-4b54-9293-b6cd43c8cbe5-kube-api-access-z46sr\") on node \"crc\" DevicePath \"\""
Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.159194 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31214ba8-5f89-4b54-9293-b6cd43c8cbe5-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.159203 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9c4f8a3-0f30-4724-84bd-952a5d5170cb-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.159212 4985 reconciler_common.go:293] "Volume detached for volume
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9c4f8a3-0f30-4724-84bd-952a5d5170cb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.159220 4985 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a9c4f8a3-0f30-4724-84bd-952a5d5170cb-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.159229 4985 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a9c4f8a3-0f30-4724-84bd-952a5d5170cb-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.159236 4985 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9c4f8a3-0f30-4724-84bd-952a5d5170cb-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.347023 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6f6d6ddd89-ln65x"] Jan 27 09:12:50 crc kubenswrapper[4985]: E0127 09:12:50.347408 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9c4f8a3-0f30-4724-84bd-952a5d5170cb" containerName="cinder-db-sync" Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.347425 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9c4f8a3-0f30-4724-84bd-952a5d5170cb" containerName="cinder-db-sync" Jan 27 09:12:50 crc kubenswrapper[4985]: E0127 09:12:50.347441 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31214ba8-5f89-4b54-9293-b6cd43c8cbe5" containerName="barbican-db-sync" Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.347447 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="31214ba8-5f89-4b54-9293-b6cd43c8cbe5" containerName="barbican-db-sync" Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.347628 4985 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="a9c4f8a3-0f30-4724-84bd-952a5d5170cb" containerName="cinder-db-sync" Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.347662 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="31214ba8-5f89-4b54-9293-b6cd43c8cbe5" containerName="barbican-db-sync" Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.358934 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f6d6ddd89-ln65x" Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.383588 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f6d6ddd89-ln65x"] Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.405076 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.407274 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.411265 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-6wfrq" Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.411476 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.411677 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.416190 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.447705 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.464168 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/87d01e17-30f4-4147-979f-39e60ed95e9a-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6d6ddd89-ln65x\" (UID: \"87d01e17-30f4-4147-979f-39e60ed95e9a\") " pod="openstack/dnsmasq-dns-6f6d6ddd89-ln65x" Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.464271 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/87d01e17-30f4-4147-979f-39e60ed95e9a-dns-svc\") pod \"dnsmasq-dns-6f6d6ddd89-ln65x\" (UID: \"87d01e17-30f4-4147-979f-39e60ed95e9a\") " pod="openstack/dnsmasq-dns-6f6d6ddd89-ln65x" Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.464329 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkxc4\" (UniqueName: \"kubernetes.io/projected/87d01e17-30f4-4147-979f-39e60ed95e9a-kube-api-access-dkxc4\") pod \"dnsmasq-dns-6f6d6ddd89-ln65x\" (UID: \"87d01e17-30f4-4147-979f-39e60ed95e9a\") " pod="openstack/dnsmasq-dns-6f6d6ddd89-ln65x" Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.464354 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/87d01e17-30f4-4147-979f-39e60ed95e9a-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6d6ddd89-ln65x\" (UID: \"87d01e17-30f4-4147-979f-39e60ed95e9a\") " pod="openstack/dnsmasq-dns-6f6d6ddd89-ln65x" Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.464372 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87d01e17-30f4-4147-979f-39e60ed95e9a-config\") pod \"dnsmasq-dns-6f6d6ddd89-ln65x\" (UID: \"87d01e17-30f4-4147-979f-39e60ed95e9a\") " pod="openstack/dnsmasq-dns-6f6d6ddd89-ln65x" Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.464398 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/87d01e17-30f4-4147-979f-39e60ed95e9a-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6d6ddd89-ln65x\" (UID: \"87d01e17-30f4-4147-979f-39e60ed95e9a\") " pod="openstack/dnsmasq-dns-6f6d6ddd89-ln65x" Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.566096 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkxc4\" (UniqueName: \"kubernetes.io/projected/87d01e17-30f4-4147-979f-39e60ed95e9a-kube-api-access-dkxc4\") pod \"dnsmasq-dns-6f6d6ddd89-ln65x\" (UID: \"87d01e17-30f4-4147-979f-39e60ed95e9a\") " pod="openstack/dnsmasq-dns-6f6d6ddd89-ln65x" Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.566145 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29c473e3-7062-4725-a515-928807284b8d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"29c473e3-7062-4725-a515-928807284b8d\") " pod="openstack/cinder-scheduler-0" Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.566179 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87d01e17-30f4-4147-979f-39e60ed95e9a-config\") pod \"dnsmasq-dns-6f6d6ddd89-ln65x\" (UID: \"87d01e17-30f4-4147-979f-39e60ed95e9a\") " pod="openstack/dnsmasq-dns-6f6d6ddd89-ln65x" Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.566195 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/87d01e17-30f4-4147-979f-39e60ed95e9a-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6d6ddd89-ln65x\" (UID: \"87d01e17-30f4-4147-979f-39e60ed95e9a\") " pod="openstack/dnsmasq-dns-6f6d6ddd89-ln65x" Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.566235 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/87d01e17-30f4-4147-979f-39e60ed95e9a-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6d6ddd89-ln65x\" (UID: \"87d01e17-30f4-4147-979f-39e60ed95e9a\") " pod="openstack/dnsmasq-dns-6f6d6ddd89-ln65x" Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.566293 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrcrr\" (UniqueName: \"kubernetes.io/projected/29c473e3-7062-4725-a515-928807284b8d-kube-api-access-vrcrr\") pod \"cinder-scheduler-0\" (UID: \"29c473e3-7062-4725-a515-928807284b8d\") " pod="openstack/cinder-scheduler-0" Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.566323 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29c473e3-7062-4725-a515-928807284b8d-scripts\") pod \"cinder-scheduler-0\" (UID: \"29c473e3-7062-4725-a515-928807284b8d\") " pod="openstack/cinder-scheduler-0" Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.566351 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/87d01e17-30f4-4147-979f-39e60ed95e9a-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6d6ddd89-ln65x\" (UID: \"87d01e17-30f4-4147-979f-39e60ed95e9a\") " pod="openstack/dnsmasq-dns-6f6d6ddd89-ln65x" Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.566426 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29c473e3-7062-4725-a515-928807284b8d-config-data\") pod \"cinder-scheduler-0\" (UID: \"29c473e3-7062-4725-a515-928807284b8d\") " pod="openstack/cinder-scheduler-0" Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.566454 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/29c473e3-7062-4725-a515-928807284b8d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"29c473e3-7062-4725-a515-928807284b8d\") " pod="openstack/cinder-scheduler-0" Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.566488 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/87d01e17-30f4-4147-979f-39e60ed95e9a-dns-svc\") pod \"dnsmasq-dns-6f6d6ddd89-ln65x\" (UID: \"87d01e17-30f4-4147-979f-39e60ed95e9a\") " pod="openstack/dnsmasq-dns-6f6d6ddd89-ln65x" Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.566531 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/29c473e3-7062-4725-a515-928807284b8d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"29c473e3-7062-4725-a515-928807284b8d\") " pod="openstack/cinder-scheduler-0" Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.569245 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/87d01e17-30f4-4147-979f-39e60ed95e9a-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6d6ddd89-ln65x\" (UID: \"87d01e17-30f4-4147-979f-39e60ed95e9a\") " pod="openstack/dnsmasq-dns-6f6d6ddd89-ln65x" Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.570774 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/87d01e17-30f4-4147-979f-39e60ed95e9a-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6d6ddd89-ln65x\" (UID: \"87d01e17-30f4-4147-979f-39e60ed95e9a\") " pod="openstack/dnsmasq-dns-6f6d6ddd89-ln65x" Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.571120 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/87d01e17-30f4-4147-979f-39e60ed95e9a-dns-swift-storage-0\") pod 
\"dnsmasq-dns-6f6d6ddd89-ln65x\" (UID: \"87d01e17-30f4-4147-979f-39e60ed95e9a\") " pod="openstack/dnsmasq-dns-6f6d6ddd89-ln65x" Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.571194 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87d01e17-30f4-4147-979f-39e60ed95e9a-config\") pod \"dnsmasq-dns-6f6d6ddd89-ln65x\" (UID: \"87d01e17-30f4-4147-979f-39e60ed95e9a\") " pod="openstack/dnsmasq-dns-6f6d6ddd89-ln65x" Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.571446 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/87d01e17-30f4-4147-979f-39e60ed95e9a-dns-svc\") pod \"dnsmasq-dns-6f6d6ddd89-ln65x\" (UID: \"87d01e17-30f4-4147-979f-39e60ed95e9a\") " pod="openstack/dnsmasq-dns-6f6d6ddd89-ln65x" Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.585366 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkxc4\" (UniqueName: \"kubernetes.io/projected/87d01e17-30f4-4147-979f-39e60ed95e9a-kube-api-access-dkxc4\") pod \"dnsmasq-dns-6f6d6ddd89-ln65x\" (UID: \"87d01e17-30f4-4147-979f-39e60ed95e9a\") " pod="openstack/dnsmasq-dns-6f6d6ddd89-ln65x" Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.643864 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.649903 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.652063 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.659127 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.672890 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29c473e3-7062-4725-a515-928807284b8d-config-data\") pod \"cinder-scheduler-0\" (UID: \"29c473e3-7062-4725-a515-928807284b8d\") " pod="openstack/cinder-scheduler-0" Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.672933 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/29c473e3-7062-4725-a515-928807284b8d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"29c473e3-7062-4725-a515-928807284b8d\") " pod="openstack/cinder-scheduler-0" Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.672978 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/29c473e3-7062-4725-a515-928807284b8d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"29c473e3-7062-4725-a515-928807284b8d\") " pod="openstack/cinder-scheduler-0" Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.673020 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29c473e3-7062-4725-a515-928807284b8d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"29c473e3-7062-4725-a515-928807284b8d\") " pod="openstack/cinder-scheduler-0" Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.673067 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-vrcrr\" (UniqueName: \"kubernetes.io/projected/29c473e3-7062-4725-a515-928807284b8d-kube-api-access-vrcrr\") pod \"cinder-scheduler-0\" (UID: \"29c473e3-7062-4725-a515-928807284b8d\") " pod="openstack/cinder-scheduler-0" Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.673124 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29c473e3-7062-4725-a515-928807284b8d-scripts\") pod \"cinder-scheduler-0\" (UID: \"29c473e3-7062-4725-a515-928807284b8d\") " pod="openstack/cinder-scheduler-0" Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.674788 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/29c473e3-7062-4725-a515-928807284b8d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"29c473e3-7062-4725-a515-928807284b8d\") " pod="openstack/cinder-scheduler-0" Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.678895 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29c473e3-7062-4725-a515-928807284b8d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"29c473e3-7062-4725-a515-928807284b8d\") " pod="openstack/cinder-scheduler-0" Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.684036 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29c473e3-7062-4725-a515-928807284b8d-scripts\") pod \"cinder-scheduler-0\" (UID: \"29c473e3-7062-4725-a515-928807284b8d\") " pod="openstack/cinder-scheduler-0" Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.687403 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29c473e3-7062-4725-a515-928807284b8d-config-data\") pod \"cinder-scheduler-0\" (UID: \"29c473e3-7062-4725-a515-928807284b8d\") " 
pod="openstack/cinder-scheduler-0" Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.689796 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/29c473e3-7062-4725-a515-928807284b8d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"29c473e3-7062-4725-a515-928807284b8d\") " pod="openstack/cinder-scheduler-0" Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.689809 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrcrr\" (UniqueName: \"kubernetes.io/projected/29c473e3-7062-4725-a515-928807284b8d-kube-api-access-vrcrr\") pod \"cinder-scheduler-0\" (UID: \"29c473e3-7062-4725-a515-928807284b8d\") " pod="openstack/cinder-scheduler-0" Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.705183 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f6d6ddd89-ln65x" Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.748486 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.778058 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrtjq\" (UniqueName: \"kubernetes.io/projected/c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143-kube-api-access-vrtjq\") pod \"cinder-api-0\" (UID: \"c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143\") " pod="openstack/cinder-api-0" Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.778108 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143\") " pod="openstack/cinder-api-0" Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.778164 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143\") " pod="openstack/cinder-api-0" Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.778183 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143-config-data\") pod \"cinder-api-0\" (UID: \"c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143\") " pod="openstack/cinder-api-0" Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.778205 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143-logs\") pod \"cinder-api-0\" (UID: \"c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143\") " pod="openstack/cinder-api-0" Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.778410 4985 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143-scripts\") pod \"cinder-api-0\" (UID: \"c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143\") " pod="openstack/cinder-api-0" Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.778562 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143-config-data-custom\") pod \"cinder-api-0\" (UID: \"c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143\") " pod="openstack/cinder-api-0" Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.880460 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143-config-data-custom\") pod \"cinder-api-0\" (UID: \"c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143\") " pod="openstack/cinder-api-0" Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.881305 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrtjq\" (UniqueName: \"kubernetes.io/projected/c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143-kube-api-access-vrtjq\") pod \"cinder-api-0\" (UID: \"c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143\") " pod="openstack/cinder-api-0" Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.881338 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143\") " pod="openstack/cinder-api-0" Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.881390 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143\") " pod="openstack/cinder-api-0" Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.881409 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143-config-data\") pod \"cinder-api-0\" (UID: \"c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143\") " pod="openstack/cinder-api-0" Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.881437 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143-logs\") pod \"cinder-api-0\" (UID: \"c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143\") " pod="openstack/cinder-api-0" Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.881485 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143-scripts\") pod \"cinder-api-0\" (UID: \"c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143\") " pod="openstack/cinder-api-0" Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.881712 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143\") " pod="openstack/cinder-api-0" Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.882969 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143-logs\") pod \"cinder-api-0\" (UID: \"c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143\") " pod="openstack/cinder-api-0" Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.887000 4985 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143-config-data-custom\") pod \"cinder-api-0\" (UID: \"c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143\") " pod="openstack/cinder-api-0" Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.887444 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143-config-data\") pod \"cinder-api-0\" (UID: \"c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143\") " pod="openstack/cinder-api-0" Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.887988 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143-scripts\") pod \"cinder-api-0\" (UID: \"c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143\") " pod="openstack/cinder-api-0" Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.895384 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143\") " pod="openstack/cinder-api-0" Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.903635 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrtjq\" (UniqueName: \"kubernetes.io/projected/c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143-kube-api-access-vrtjq\") pod \"cinder-api-0\" (UID: \"c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143\") " pod="openstack/cinder-api-0" Jan 27 09:12:50 crc kubenswrapper[4985]: I0127 09:12:50.975479 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 27 09:12:51 crc kubenswrapper[4985]: I0127 09:12:51.284724 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-5bf9c57989-7kxf6"] Jan 27 09:12:51 crc kubenswrapper[4985]: I0127 09:12:51.286269 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5bf9c57989-7kxf6" Jan 27 09:12:51 crc kubenswrapper[4985]: I0127 09:12:51.288915 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 27 09:12:51 crc kubenswrapper[4985]: I0127 09:12:51.289126 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 27 09:12:51 crc kubenswrapper[4985]: I0127 09:12:51.289318 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-hjfvv" Jan 27 09:12:51 crc kubenswrapper[4985]: I0127 09:12:51.316749 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-7b565c4-5dhmn"] Jan 27 09:12:51 crc kubenswrapper[4985]: I0127 09:12:51.322887 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-7b565c4-5dhmn" Jan 27 09:12:51 crc kubenswrapper[4985]: I0127 09:12:51.336063 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 27 09:12:51 crc kubenswrapper[4985]: I0127 09:12:51.341299 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7b565c4-5dhmn"] Jan 27 09:12:51 crc kubenswrapper[4985]: I0127 09:12:51.393696 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fd7d78ce-005f-4c67-9204-5030a19420e2-config-data-custom\") pod \"barbican-worker-5bf9c57989-7kxf6\" (UID: \"fd7d78ce-005f-4c67-9204-5030a19420e2\") " pod="openstack/barbican-worker-5bf9c57989-7kxf6" Jan 27 09:12:51 crc kubenswrapper[4985]: I0127 09:12:51.393793 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd7d78ce-005f-4c67-9204-5030a19420e2-logs\") pod \"barbican-worker-5bf9c57989-7kxf6\" (UID: \"fd7d78ce-005f-4c67-9204-5030a19420e2\") " pod="openstack/barbican-worker-5bf9c57989-7kxf6" Jan 27 09:12:51 crc kubenswrapper[4985]: I0127 09:12:51.393870 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd7d78ce-005f-4c67-9204-5030a19420e2-config-data\") pod \"barbican-worker-5bf9c57989-7kxf6\" (UID: \"fd7d78ce-005f-4c67-9204-5030a19420e2\") " pod="openstack/barbican-worker-5bf9c57989-7kxf6" Jan 27 09:12:51 crc kubenswrapper[4985]: I0127 09:12:51.393894 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnppx\" (UniqueName: \"kubernetes.io/projected/fd7d78ce-005f-4c67-9204-5030a19420e2-kube-api-access-mnppx\") pod \"barbican-worker-5bf9c57989-7kxf6\" 
(UID: \"fd7d78ce-005f-4c67-9204-5030a19420e2\") " pod="openstack/barbican-worker-5bf9c57989-7kxf6" Jan 27 09:12:51 crc kubenswrapper[4985]: I0127 09:12:51.393911 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd7d78ce-005f-4c67-9204-5030a19420e2-combined-ca-bundle\") pod \"barbican-worker-5bf9c57989-7kxf6\" (UID: \"fd7d78ce-005f-4c67-9204-5030a19420e2\") " pod="openstack/barbican-worker-5bf9c57989-7kxf6" Jan 27 09:12:51 crc kubenswrapper[4985]: I0127 09:12:51.406178 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5bf9c57989-7kxf6"] Jan 27 09:12:51 crc kubenswrapper[4985]: I0127 09:12:51.428605 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f6d6ddd89-ln65x"] Jan 27 09:12:51 crc kubenswrapper[4985]: I0127 09:12:51.492572 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-75dbb546bf-9qv57"] Jan 27 09:12:51 crc kubenswrapper[4985]: I0127 09:12:51.504550 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c55baf3-752e-40a7-acdd-d26df561bf9c-config-data\") pod \"barbican-keystone-listener-7b565c4-5dhmn\" (UID: \"7c55baf3-752e-40a7-acdd-d26df561bf9c\") " pod="openstack/barbican-keystone-listener-7b565c4-5dhmn" Jan 27 09:12:51 crc kubenswrapper[4985]: I0127 09:12:51.504719 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xtvn\" (UniqueName: \"kubernetes.io/projected/7c55baf3-752e-40a7-acdd-d26df561bf9c-kube-api-access-7xtvn\") pod \"barbican-keystone-listener-7b565c4-5dhmn\" (UID: \"7c55baf3-752e-40a7-acdd-d26df561bf9c\") " pod="openstack/barbican-keystone-listener-7b565c4-5dhmn" Jan 27 09:12:51 crc kubenswrapper[4985]: I0127 09:12:51.504764 4985 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd7d78ce-005f-4c67-9204-5030a19420e2-config-data\") pod \"barbican-worker-5bf9c57989-7kxf6\" (UID: \"fd7d78ce-005f-4c67-9204-5030a19420e2\") " pod="openstack/barbican-worker-5bf9c57989-7kxf6" Jan 27 09:12:51 crc kubenswrapper[4985]: I0127 09:12:51.504823 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c55baf3-752e-40a7-acdd-d26df561bf9c-combined-ca-bundle\") pod \"barbican-keystone-listener-7b565c4-5dhmn\" (UID: \"7c55baf3-752e-40a7-acdd-d26df561bf9c\") " pod="openstack/barbican-keystone-listener-7b565c4-5dhmn" Jan 27 09:12:51 crc kubenswrapper[4985]: I0127 09:12:51.504850 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnppx\" (UniqueName: \"kubernetes.io/projected/fd7d78ce-005f-4c67-9204-5030a19420e2-kube-api-access-mnppx\") pod \"barbican-worker-5bf9c57989-7kxf6\" (UID: \"fd7d78ce-005f-4c67-9204-5030a19420e2\") " pod="openstack/barbican-worker-5bf9c57989-7kxf6" Jan 27 09:12:51 crc kubenswrapper[4985]: I0127 09:12:51.504882 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd7d78ce-005f-4c67-9204-5030a19420e2-combined-ca-bundle\") pod \"barbican-worker-5bf9c57989-7kxf6\" (UID: \"fd7d78ce-005f-4c67-9204-5030a19420e2\") " pod="openstack/barbican-worker-5bf9c57989-7kxf6" Jan 27 09:12:51 crc kubenswrapper[4985]: I0127 09:12:51.504918 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7c55baf3-752e-40a7-acdd-d26df561bf9c-config-data-custom\") pod \"barbican-keystone-listener-7b565c4-5dhmn\" (UID: \"7c55baf3-752e-40a7-acdd-d26df561bf9c\") " pod="openstack/barbican-keystone-listener-7b565c4-5dhmn" Jan 27 09:12:51 
crc kubenswrapper[4985]: I0127 09:12:51.505025 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7c55baf3-752e-40a7-acdd-d26df561bf9c-logs\") pod \"barbican-keystone-listener-7b565c4-5dhmn\" (UID: \"7c55baf3-752e-40a7-acdd-d26df561bf9c\") " pod="openstack/barbican-keystone-listener-7b565c4-5dhmn" Jan 27 09:12:51 crc kubenswrapper[4985]: I0127 09:12:51.505051 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fd7d78ce-005f-4c67-9204-5030a19420e2-config-data-custom\") pod \"barbican-worker-5bf9c57989-7kxf6\" (UID: \"fd7d78ce-005f-4c67-9204-5030a19420e2\") " pod="openstack/barbican-worker-5bf9c57989-7kxf6" Jan 27 09:12:51 crc kubenswrapper[4985]: I0127 09:12:51.505209 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd7d78ce-005f-4c67-9204-5030a19420e2-logs\") pod \"barbican-worker-5bf9c57989-7kxf6\" (UID: \"fd7d78ce-005f-4c67-9204-5030a19420e2\") " pod="openstack/barbican-worker-5bf9c57989-7kxf6" Jan 27 09:12:51 crc kubenswrapper[4985]: I0127 09:12:51.525019 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75dbb546bf-9qv57" Jan 27 09:12:51 crc kubenswrapper[4985]: I0127 09:12:51.577233 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd7d78ce-005f-4c67-9204-5030a19420e2-logs\") pod \"barbican-worker-5bf9c57989-7kxf6\" (UID: \"fd7d78ce-005f-4c67-9204-5030a19420e2\") " pod="openstack/barbican-worker-5bf9c57989-7kxf6" Jan 27 09:12:51 crc kubenswrapper[4985]: I0127 09:12:51.588446 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd7d78ce-005f-4c67-9204-5030a19420e2-combined-ca-bundle\") pod \"barbican-worker-5bf9c57989-7kxf6\" (UID: \"fd7d78ce-005f-4c67-9204-5030a19420e2\") " pod="openstack/barbican-worker-5bf9c57989-7kxf6" Jan 27 09:12:51 crc kubenswrapper[4985]: I0127 09:12:51.611906 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd7d78ce-005f-4c67-9204-5030a19420e2-config-data\") pod \"barbican-worker-5bf9c57989-7kxf6\" (UID: \"fd7d78ce-005f-4c67-9204-5030a19420e2\") " pod="openstack/barbican-worker-5bf9c57989-7kxf6" Jan 27 09:12:51 crc kubenswrapper[4985]: I0127 09:12:51.630993 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75dbb546bf-9qv57"] Jan 27 09:12:51 crc kubenswrapper[4985]: I0127 09:12:51.641475 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fd7d78ce-005f-4c67-9204-5030a19420e2-config-data-custom\") pod \"barbican-worker-5bf9c57989-7kxf6\" (UID: \"fd7d78ce-005f-4c67-9204-5030a19420e2\") " pod="openstack/barbican-worker-5bf9c57989-7kxf6" Jan 27 09:12:51 crc kubenswrapper[4985]: I0127 09:12:51.644394 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnppx\" (UniqueName: 
\"kubernetes.io/projected/fd7d78ce-005f-4c67-9204-5030a19420e2-kube-api-access-mnppx\") pod \"barbican-worker-5bf9c57989-7kxf6\" (UID: \"fd7d78ce-005f-4c67-9204-5030a19420e2\") " pod="openstack/barbican-worker-5bf9c57989-7kxf6" Jan 27 09:12:51 crc kubenswrapper[4985]: I0127 09:12:51.698841 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-774ff5bf6d-xl8xr"] Jan 27 09:12:51 crc kubenswrapper[4985]: I0127 09:12:51.752866 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-774ff5bf6d-xl8xr" Jan 27 09:12:51 crc kubenswrapper[4985]: I0127 09:12:51.754498 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xtvn\" (UniqueName: \"kubernetes.io/projected/7c55baf3-752e-40a7-acdd-d26df561bf9c-kube-api-access-7xtvn\") pod \"barbican-keystone-listener-7b565c4-5dhmn\" (UID: \"7c55baf3-752e-40a7-acdd-d26df561bf9c\") " pod="openstack/barbican-keystone-listener-7b565c4-5dhmn" Jan 27 09:12:51 crc kubenswrapper[4985]: I0127 09:12:51.754581 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-272jr\" (UniqueName: \"kubernetes.io/projected/02c25b4e-dee3-4466-9d56-f74c18a36ba5-kube-api-access-272jr\") pod \"dnsmasq-dns-75dbb546bf-9qv57\" (UID: \"02c25b4e-dee3-4466-9d56-f74c18a36ba5\") " pod="openstack/dnsmasq-dns-75dbb546bf-9qv57" Jan 27 09:12:51 crc kubenswrapper[4985]: I0127 09:12:51.754628 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c55baf3-752e-40a7-acdd-d26df561bf9c-combined-ca-bundle\") pod \"barbican-keystone-listener-7b565c4-5dhmn\" (UID: \"7c55baf3-752e-40a7-acdd-d26df561bf9c\") " pod="openstack/barbican-keystone-listener-7b565c4-5dhmn" Jan 27 09:12:51 crc kubenswrapper[4985]: I0127 09:12:51.754667 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/02c25b4e-dee3-4466-9d56-f74c18a36ba5-config\") pod \"dnsmasq-dns-75dbb546bf-9qv57\" (UID: \"02c25b4e-dee3-4466-9d56-f74c18a36ba5\") " pod="openstack/dnsmasq-dns-75dbb546bf-9qv57" Jan 27 09:12:51 crc kubenswrapper[4985]: I0127 09:12:51.754715 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7c55baf3-752e-40a7-acdd-d26df561bf9c-config-data-custom\") pod \"barbican-keystone-listener-7b565c4-5dhmn\" (UID: \"7c55baf3-752e-40a7-acdd-d26df561bf9c\") " pod="openstack/barbican-keystone-listener-7b565c4-5dhmn" Jan 27 09:12:51 crc kubenswrapper[4985]: I0127 09:12:51.754799 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7c55baf3-752e-40a7-acdd-d26df561bf9c-logs\") pod \"barbican-keystone-listener-7b565c4-5dhmn\" (UID: \"7c55baf3-752e-40a7-acdd-d26df561bf9c\") " pod="openstack/barbican-keystone-listener-7b565c4-5dhmn" Jan 27 09:12:51 crc kubenswrapper[4985]: I0127 09:12:51.754918 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/02c25b4e-dee3-4466-9d56-f74c18a36ba5-ovsdbserver-nb\") pod \"dnsmasq-dns-75dbb546bf-9qv57\" (UID: \"02c25b4e-dee3-4466-9d56-f74c18a36ba5\") " pod="openstack/dnsmasq-dns-75dbb546bf-9qv57" Jan 27 09:12:51 crc kubenswrapper[4985]: I0127 09:12:51.755075 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/02c25b4e-dee3-4466-9d56-f74c18a36ba5-dns-swift-storage-0\") pod \"dnsmasq-dns-75dbb546bf-9qv57\" (UID: \"02c25b4e-dee3-4466-9d56-f74c18a36ba5\") " pod="openstack/dnsmasq-dns-75dbb546bf-9qv57" Jan 27 09:12:51 crc kubenswrapper[4985]: I0127 09:12:51.755192 4985 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/02c25b4e-dee3-4466-9d56-f74c18a36ba5-ovsdbserver-sb\") pod \"dnsmasq-dns-75dbb546bf-9qv57\" (UID: \"02c25b4e-dee3-4466-9d56-f74c18a36ba5\") " pod="openstack/dnsmasq-dns-75dbb546bf-9qv57" Jan 27 09:12:51 crc kubenswrapper[4985]: I0127 09:12:51.755228 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c55baf3-752e-40a7-acdd-d26df561bf9c-config-data\") pod \"barbican-keystone-listener-7b565c4-5dhmn\" (UID: \"7c55baf3-752e-40a7-acdd-d26df561bf9c\") " pod="openstack/barbican-keystone-listener-7b565c4-5dhmn" Jan 27 09:12:51 crc kubenswrapper[4985]: I0127 09:12:51.755780 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7c55baf3-752e-40a7-acdd-d26df561bf9c-logs\") pod \"barbican-keystone-listener-7b565c4-5dhmn\" (UID: \"7c55baf3-752e-40a7-acdd-d26df561bf9c\") " pod="openstack/barbican-keystone-listener-7b565c4-5dhmn" Jan 27 09:12:51 crc kubenswrapper[4985]: I0127 09:12:51.756243 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/02c25b4e-dee3-4466-9d56-f74c18a36ba5-dns-svc\") pod \"dnsmasq-dns-75dbb546bf-9qv57\" (UID: \"02c25b4e-dee3-4466-9d56-f74c18a36ba5\") " pod="openstack/dnsmasq-dns-75dbb546bf-9qv57" Jan 27 09:12:51 crc kubenswrapper[4985]: I0127 09:12:51.762662 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c55baf3-752e-40a7-acdd-d26df561bf9c-config-data\") pod \"barbican-keystone-listener-7b565c4-5dhmn\" (UID: \"7c55baf3-752e-40a7-acdd-d26df561bf9c\") " pod="openstack/barbican-keystone-listener-7b565c4-5dhmn" Jan 27 09:12:51 crc kubenswrapper[4985]: I0127 09:12:51.763812 4985 reflector.go:368] Caches populated for 
*v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 27 09:12:51 crc kubenswrapper[4985]: I0127 09:12:51.764277 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c55baf3-752e-40a7-acdd-d26df561bf9c-combined-ca-bundle\") pod \"barbican-keystone-listener-7b565c4-5dhmn\" (UID: \"7c55baf3-752e-40a7-acdd-d26df561bf9c\") " pod="openstack/barbican-keystone-listener-7b565c4-5dhmn" Jan 27 09:12:51 crc kubenswrapper[4985]: I0127 09:12:51.796344 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7c55baf3-752e-40a7-acdd-d26df561bf9c-config-data-custom\") pod \"barbican-keystone-listener-7b565c4-5dhmn\" (UID: \"7c55baf3-752e-40a7-acdd-d26df561bf9c\") " pod="openstack/barbican-keystone-listener-7b565c4-5dhmn" Jan 27 09:12:51 crc kubenswrapper[4985]: I0127 09:12:51.800354 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-774ff5bf6d-xl8xr"] Jan 27 09:12:51 crc kubenswrapper[4985]: I0127 09:12:51.802364 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xtvn\" (UniqueName: \"kubernetes.io/projected/7c55baf3-752e-40a7-acdd-d26df561bf9c-kube-api-access-7xtvn\") pod \"barbican-keystone-listener-7b565c4-5dhmn\" (UID: \"7c55baf3-752e-40a7-acdd-d26df561bf9c\") " pod="openstack/barbican-keystone-listener-7b565c4-5dhmn" Jan 27 09:12:51 crc kubenswrapper[4985]: I0127 09:12:51.872114 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/02c25b4e-dee3-4466-9d56-f74c18a36ba5-dns-svc\") pod \"dnsmasq-dns-75dbb546bf-9qv57\" (UID: \"02c25b4e-dee3-4466-9d56-f74c18a36ba5\") " pod="openstack/dnsmasq-dns-75dbb546bf-9qv57" Jan 27 09:12:51 crc kubenswrapper[4985]: I0127 09:12:51.872235 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-272jr\" (UniqueName: \"kubernetes.io/projected/02c25b4e-dee3-4466-9d56-f74c18a36ba5-kube-api-access-272jr\") pod \"dnsmasq-dns-75dbb546bf-9qv57\" (UID: \"02c25b4e-dee3-4466-9d56-f74c18a36ba5\") " pod="openstack/dnsmasq-dns-75dbb546bf-9qv57" Jan 27 09:12:51 crc kubenswrapper[4985]: I0127 09:12:51.872280 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02c25b4e-dee3-4466-9d56-f74c18a36ba5-config\") pod \"dnsmasq-dns-75dbb546bf-9qv57\" (UID: \"02c25b4e-dee3-4466-9d56-f74c18a36ba5\") " pod="openstack/dnsmasq-dns-75dbb546bf-9qv57" Jan 27 09:12:51 crc kubenswrapper[4985]: I0127 09:12:51.872370 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4836a522-8ff0-48c0-837d-c1785dee8378-config-data\") pod \"barbican-api-774ff5bf6d-xl8xr\" (UID: \"4836a522-8ff0-48c0-837d-c1785dee8378\") " pod="openstack/barbican-api-774ff5bf6d-xl8xr" Jan 27 09:12:51 crc kubenswrapper[4985]: I0127 09:12:51.872427 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4836a522-8ff0-48c0-837d-c1785dee8378-combined-ca-bundle\") pod \"barbican-api-774ff5bf6d-xl8xr\" (UID: \"4836a522-8ff0-48c0-837d-c1785dee8378\") " pod="openstack/barbican-api-774ff5bf6d-xl8xr" Jan 27 09:12:51 crc kubenswrapper[4985]: I0127 09:12:51.872479 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/02c25b4e-dee3-4466-9d56-f74c18a36ba5-ovsdbserver-nb\") pod \"dnsmasq-dns-75dbb546bf-9qv57\" (UID: \"02c25b4e-dee3-4466-9d56-f74c18a36ba5\") " pod="openstack/dnsmasq-dns-75dbb546bf-9qv57" Jan 27 09:12:51 crc kubenswrapper[4985]: I0127 09:12:51.872551 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/4836a522-8ff0-48c0-837d-c1785dee8378-logs\") pod \"barbican-api-774ff5bf6d-xl8xr\" (UID: \"4836a522-8ff0-48c0-837d-c1785dee8378\") " pod="openstack/barbican-api-774ff5bf6d-xl8xr" Jan 27 09:12:51 crc kubenswrapper[4985]: I0127 09:12:51.872585 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/02c25b4e-dee3-4466-9d56-f74c18a36ba5-dns-swift-storage-0\") pod \"dnsmasq-dns-75dbb546bf-9qv57\" (UID: \"02c25b4e-dee3-4466-9d56-f74c18a36ba5\") " pod="openstack/dnsmasq-dns-75dbb546bf-9qv57" Jan 27 09:12:51 crc kubenswrapper[4985]: I0127 09:12:51.872621 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4836a522-8ff0-48c0-837d-c1785dee8378-config-data-custom\") pod \"barbican-api-774ff5bf6d-xl8xr\" (UID: \"4836a522-8ff0-48c0-837d-c1785dee8378\") " pod="openstack/barbican-api-774ff5bf6d-xl8xr" Jan 27 09:12:51 crc kubenswrapper[4985]: I0127 09:12:51.872638 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gf84f\" (UniqueName: \"kubernetes.io/projected/4836a522-8ff0-48c0-837d-c1785dee8378-kube-api-access-gf84f\") pod \"barbican-api-774ff5bf6d-xl8xr\" (UID: \"4836a522-8ff0-48c0-837d-c1785dee8378\") " pod="openstack/barbican-api-774ff5bf6d-xl8xr" Jan 27 09:12:51 crc kubenswrapper[4985]: I0127 09:12:51.872702 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/02c25b4e-dee3-4466-9d56-f74c18a36ba5-ovsdbserver-sb\") pod \"dnsmasq-dns-75dbb546bf-9qv57\" (UID: \"02c25b4e-dee3-4466-9d56-f74c18a36ba5\") " pod="openstack/dnsmasq-dns-75dbb546bf-9qv57" Jan 27 09:12:51 crc kubenswrapper[4985]: I0127 09:12:51.873318 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/02c25b4e-dee3-4466-9d56-f74c18a36ba5-dns-svc\") pod \"dnsmasq-dns-75dbb546bf-9qv57\" (UID: \"02c25b4e-dee3-4466-9d56-f74c18a36ba5\") " pod="openstack/dnsmasq-dns-75dbb546bf-9qv57" Jan 27 09:12:51 crc kubenswrapper[4985]: I0127 09:12:51.873435 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/02c25b4e-dee3-4466-9d56-f74c18a36ba5-ovsdbserver-sb\") pod \"dnsmasq-dns-75dbb546bf-9qv57\" (UID: \"02c25b4e-dee3-4466-9d56-f74c18a36ba5\") " pod="openstack/dnsmasq-dns-75dbb546bf-9qv57" Jan 27 09:12:51 crc kubenswrapper[4985]: I0127 09:12:51.874000 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/02c25b4e-dee3-4466-9d56-f74c18a36ba5-dns-swift-storage-0\") pod \"dnsmasq-dns-75dbb546bf-9qv57\" (UID: \"02c25b4e-dee3-4466-9d56-f74c18a36ba5\") " pod="openstack/dnsmasq-dns-75dbb546bf-9qv57" Jan 27 09:12:51 crc kubenswrapper[4985]: I0127 09:12:51.874531 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/02c25b4e-dee3-4466-9d56-f74c18a36ba5-ovsdbserver-nb\") pod \"dnsmasq-dns-75dbb546bf-9qv57\" (UID: \"02c25b4e-dee3-4466-9d56-f74c18a36ba5\") " pod="openstack/dnsmasq-dns-75dbb546bf-9qv57" Jan 27 09:12:51 crc kubenswrapper[4985]: I0127 09:12:51.891073 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02c25b4e-dee3-4466-9d56-f74c18a36ba5-config\") pod \"dnsmasq-dns-75dbb546bf-9qv57\" (UID: \"02c25b4e-dee3-4466-9d56-f74c18a36ba5\") " pod="openstack/dnsmasq-dns-75dbb546bf-9qv57" Jan 27 09:12:51 crc kubenswrapper[4985]: I0127 09:12:51.903784 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-272jr\" (UniqueName: 
\"kubernetes.io/projected/02c25b4e-dee3-4466-9d56-f74c18a36ba5-kube-api-access-272jr\") pod \"dnsmasq-dns-75dbb546bf-9qv57\" (UID: \"02c25b4e-dee3-4466-9d56-f74c18a36ba5\") " pod="openstack/dnsmasq-dns-75dbb546bf-9qv57" Jan 27 09:12:51 crc kubenswrapper[4985]: I0127 09:12:51.942024 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5bf9c57989-7kxf6" Jan 27 09:12:51 crc kubenswrapper[4985]: I0127 09:12:51.974141 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4836a522-8ff0-48c0-837d-c1785dee8378-config-data\") pod \"barbican-api-774ff5bf6d-xl8xr\" (UID: \"4836a522-8ff0-48c0-837d-c1785dee8378\") " pod="openstack/barbican-api-774ff5bf6d-xl8xr" Jan 27 09:12:51 crc kubenswrapper[4985]: I0127 09:12:51.974197 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4836a522-8ff0-48c0-837d-c1785dee8378-combined-ca-bundle\") pod \"barbican-api-774ff5bf6d-xl8xr\" (UID: \"4836a522-8ff0-48c0-837d-c1785dee8378\") " pod="openstack/barbican-api-774ff5bf6d-xl8xr" Jan 27 09:12:51 crc kubenswrapper[4985]: I0127 09:12:51.974245 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4836a522-8ff0-48c0-837d-c1785dee8378-logs\") pod \"barbican-api-774ff5bf6d-xl8xr\" (UID: \"4836a522-8ff0-48c0-837d-c1785dee8378\") " pod="openstack/barbican-api-774ff5bf6d-xl8xr" Jan 27 09:12:51 crc kubenswrapper[4985]: I0127 09:12:51.974280 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4836a522-8ff0-48c0-837d-c1785dee8378-config-data-custom\") pod \"barbican-api-774ff5bf6d-xl8xr\" (UID: \"4836a522-8ff0-48c0-837d-c1785dee8378\") " pod="openstack/barbican-api-774ff5bf6d-xl8xr" Jan 27 09:12:51 crc kubenswrapper[4985]: I0127 
09:12:51.974297 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gf84f\" (UniqueName: \"kubernetes.io/projected/4836a522-8ff0-48c0-837d-c1785dee8378-kube-api-access-gf84f\") pod \"barbican-api-774ff5bf6d-xl8xr\" (UID: \"4836a522-8ff0-48c0-837d-c1785dee8378\") " pod="openstack/barbican-api-774ff5bf6d-xl8xr" Jan 27 09:12:51 crc kubenswrapper[4985]: I0127 09:12:51.975661 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4836a522-8ff0-48c0-837d-c1785dee8378-logs\") pod \"barbican-api-774ff5bf6d-xl8xr\" (UID: \"4836a522-8ff0-48c0-837d-c1785dee8378\") " pod="openstack/barbican-api-774ff5bf6d-xl8xr" Jan 27 09:12:51 crc kubenswrapper[4985]: I0127 09:12:51.980452 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-7b565c4-5dhmn" Jan 27 09:12:51 crc kubenswrapper[4985]: I0127 09:12:51.981590 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4836a522-8ff0-48c0-837d-c1785dee8378-combined-ca-bundle\") pod \"barbican-api-774ff5bf6d-xl8xr\" (UID: \"4836a522-8ff0-48c0-837d-c1785dee8378\") " pod="openstack/barbican-api-774ff5bf6d-xl8xr" Jan 27 09:12:51 crc kubenswrapper[4985]: I0127 09:12:51.986304 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4836a522-8ff0-48c0-837d-c1785dee8378-config-data\") pod \"barbican-api-774ff5bf6d-xl8xr\" (UID: \"4836a522-8ff0-48c0-837d-c1785dee8378\") " pod="openstack/barbican-api-774ff5bf6d-xl8xr" Jan 27 09:12:51 crc kubenswrapper[4985]: I0127 09:12:51.987094 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4836a522-8ff0-48c0-837d-c1785dee8378-config-data-custom\") pod \"barbican-api-774ff5bf6d-xl8xr\" (UID: 
\"4836a522-8ff0-48c0-837d-c1785dee8378\") " pod="openstack/barbican-api-774ff5bf6d-xl8xr" Jan 27 09:12:51 crc kubenswrapper[4985]: I0127 09:12:51.998243 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gf84f\" (UniqueName: \"kubernetes.io/projected/4836a522-8ff0-48c0-837d-c1785dee8378-kube-api-access-gf84f\") pod \"barbican-api-774ff5bf6d-xl8xr\" (UID: \"4836a522-8ff0-48c0-837d-c1785dee8378\") " pod="openstack/barbican-api-774ff5bf6d-xl8xr" Jan 27 09:12:52 crc kubenswrapper[4985]: I0127 09:12:52.074770 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-774ff5bf6d-xl8xr" Jan 27 09:12:52 crc kubenswrapper[4985]: I0127 09:12:52.154679 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75dbb546bf-9qv57" Jan 27 09:12:52 crc kubenswrapper[4985]: I0127 09:12:52.259728 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 09:12:52 crc kubenswrapper[4985]: E0127 09:12:52.299567 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="61437724-d73d-4fe5-afbc-b4994d1eda63" Jan 27 09:12:52 crc kubenswrapper[4985]: I0127 09:12:52.339483 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f6d6ddd89-ln65x"] Jan 27 09:12:52 crc kubenswrapper[4985]: I0127 09:12:52.395463 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 27 09:12:52 crc kubenswrapper[4985]: I0127 09:12:52.638403 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5bf9c57989-7kxf6"] Jan 27 09:12:52 crc kubenswrapper[4985]: I0127 09:12:52.653838 4985 prober.go:107] "Probe failed" probeType="Startup" 
pod="openstack/horizon-5c57bbbf74-nrsd9" podUID="5fbbc8b9-e978-4565-9d19-bd139f2c4df7" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Jan 27 09:12:52 crc kubenswrapper[4985]: I0127 09:12:52.753835 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 27 09:12:52 crc kubenswrapper[4985]: I0127 09:12:52.774122 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-69b99cb974-fzls4" podUID="24f5c0ab-206b-4a03-9e4b-c94feff53f9e" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Jan 27 09:12:52 crc kubenswrapper[4985]: I0127 09:12:52.842886 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7b565c4-5dhmn"] Jan 27 09:12:52 crc kubenswrapper[4985]: I0127 09:12:52.975213 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75dbb546bf-9qv57"] Jan 27 09:12:53 crc kubenswrapper[4985]: I0127 09:12:53.044709 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7b565c4-5dhmn" event={"ID":"7c55baf3-752e-40a7-acdd-d26df561bf9c","Type":"ContainerStarted","Data":"982ff01c68f332ecc4d1ae9e811b55f90dc92f635e1ae47abe6119e93f41e269"} Jan 27 09:12:53 crc kubenswrapper[4985]: I0127 09:12:53.051299 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"61437724-d73d-4fe5-afbc-b4994d1eda63","Type":"ContainerStarted","Data":"c7b2e4dfca319edfbad0e43a2d1f5ed3503e24af6027d1a04b9e68d51119af35"} Jan 27 09:12:53 crc kubenswrapper[4985]: I0127 09:12:53.051469 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="61437724-d73d-4fe5-afbc-b4994d1eda63" 
containerName="ceilometer-notification-agent" containerID="cri-o://23a65461941d887d16d18359c56276c760d24c9b64db33e840172ef73ae0062f" gracePeriod=30 Jan 27 09:12:53 crc kubenswrapper[4985]: I0127 09:12:53.051657 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 09:12:53 crc kubenswrapper[4985]: I0127 09:12:53.051672 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="61437724-d73d-4fe5-afbc-b4994d1eda63" containerName="proxy-httpd" containerID="cri-o://c7b2e4dfca319edfbad0e43a2d1f5ed3503e24af6027d1a04b9e68d51119af35" gracePeriod=30 Jan 27 09:12:53 crc kubenswrapper[4985]: I0127 09:12:53.051744 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="61437724-d73d-4fe5-afbc-b4994d1eda63" containerName="sg-core" containerID="cri-o://a3f073b553dbb350ad0bb13a1a8e9a64bd892b0e20d96245bb21f9941233a8bf" gracePeriod=30 Jan 27 09:12:53 crc kubenswrapper[4985]: I0127 09:12:53.065473 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5bf9c57989-7kxf6" event={"ID":"fd7d78ce-005f-4c67-9204-5030a19420e2","Type":"ContainerStarted","Data":"9cde7cfa6efaa8703f76824b483ce813cbebf258f3881c79c3ad1d2ed4098215"} Jan 27 09:12:53 crc kubenswrapper[4985]: I0127 09:12:53.067536 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75dbb546bf-9qv57" event={"ID":"02c25b4e-dee3-4466-9d56-f74c18a36ba5","Type":"ContainerStarted","Data":"f246a351c83b7ea42022ebb2c49acea804af633b425a03605c245bd6f7a0ad95"} Jan 27 09:12:53 crc kubenswrapper[4985]: I0127 09:12:53.068832 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143","Type":"ContainerStarted","Data":"5c7e3d85a835d0eb6ab730e4a173861d74c25e199c5d84677297ece0089733a2"} Jan 27 09:12:53 crc kubenswrapper[4985]: I0127 09:12:53.071192 4985 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"29c473e3-7062-4725-a515-928807284b8d","Type":"ContainerStarted","Data":"9c265a71b67177e5b775cd47ba1cfd9458374b2595ed3a62e93b7eaa104091a4"} Jan 27 09:12:53 crc kubenswrapper[4985]: I0127 09:12:53.079319 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6d6ddd89-ln65x" event={"ID":"87d01e17-30f4-4147-979f-39e60ed95e9a","Type":"ContainerStarted","Data":"b92d24c2703963989ae56d2139218e84e3344d1f849633a21fc8daced0c07d77"} Jan 27 09:12:53 crc kubenswrapper[4985]: I0127 09:12:53.079357 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6d6ddd89-ln65x" event={"ID":"87d01e17-30f4-4147-979f-39e60ed95e9a","Type":"ContainerStarted","Data":"d10c65e9a302b146436d57dbf3774d189a911cd65d94ad50b7dadc94bab50ed9"} Jan 27 09:12:53 crc kubenswrapper[4985]: I0127 09:12:53.129675 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-774ff5bf6d-xl8xr"] Jan 27 09:12:53 crc kubenswrapper[4985]: I0127 09:12:53.469581 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f6d6ddd89-ln65x" Jan 27 09:12:53 crc kubenswrapper[4985]: I0127 09:12:53.546148 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/87d01e17-30f4-4147-979f-39e60ed95e9a-dns-swift-storage-0\") pod \"87d01e17-30f4-4147-979f-39e60ed95e9a\" (UID: \"87d01e17-30f4-4147-979f-39e60ed95e9a\") " Jan 27 09:12:53 crc kubenswrapper[4985]: I0127 09:12:53.546217 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87d01e17-30f4-4147-979f-39e60ed95e9a-config\") pod \"87d01e17-30f4-4147-979f-39e60ed95e9a\" (UID: \"87d01e17-30f4-4147-979f-39e60ed95e9a\") " Jan 27 09:12:53 crc kubenswrapper[4985]: I0127 09:12:53.546279 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dkxc4\" (UniqueName: \"kubernetes.io/projected/87d01e17-30f4-4147-979f-39e60ed95e9a-kube-api-access-dkxc4\") pod \"87d01e17-30f4-4147-979f-39e60ed95e9a\" (UID: \"87d01e17-30f4-4147-979f-39e60ed95e9a\") " Jan 27 09:12:53 crc kubenswrapper[4985]: I0127 09:12:53.546335 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/87d01e17-30f4-4147-979f-39e60ed95e9a-ovsdbserver-sb\") pod \"87d01e17-30f4-4147-979f-39e60ed95e9a\" (UID: \"87d01e17-30f4-4147-979f-39e60ed95e9a\") " Jan 27 09:12:53 crc kubenswrapper[4985]: I0127 09:12:53.546370 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/87d01e17-30f4-4147-979f-39e60ed95e9a-dns-svc\") pod \"87d01e17-30f4-4147-979f-39e60ed95e9a\" (UID: \"87d01e17-30f4-4147-979f-39e60ed95e9a\") " Jan 27 09:12:53 crc kubenswrapper[4985]: I0127 09:12:53.546557 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/87d01e17-30f4-4147-979f-39e60ed95e9a-ovsdbserver-nb\") pod \"87d01e17-30f4-4147-979f-39e60ed95e9a\" (UID: \"87d01e17-30f4-4147-979f-39e60ed95e9a\") " Jan 27 09:12:53 crc kubenswrapper[4985]: I0127 09:12:53.585289 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87d01e17-30f4-4147-979f-39e60ed95e9a-kube-api-access-dkxc4" (OuterVolumeSpecName: "kube-api-access-dkxc4") pod "87d01e17-30f4-4147-979f-39e60ed95e9a" (UID: "87d01e17-30f4-4147-979f-39e60ed95e9a"). InnerVolumeSpecName "kube-api-access-dkxc4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:12:53 crc kubenswrapper[4985]: I0127 09:12:53.665025 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dkxc4\" (UniqueName: \"kubernetes.io/projected/87d01e17-30f4-4147-979f-39e60ed95e9a-kube-api-access-dkxc4\") on node \"crc\" DevicePath \"\"" Jan 27 09:12:53 crc kubenswrapper[4985]: I0127 09:12:53.666384 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87d01e17-30f4-4147-979f-39e60ed95e9a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "87d01e17-30f4-4147-979f-39e60ed95e9a" (UID: "87d01e17-30f4-4147-979f-39e60ed95e9a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:12:53 crc kubenswrapper[4985]: I0127 09:12:53.674550 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87d01e17-30f4-4147-979f-39e60ed95e9a-config" (OuterVolumeSpecName: "config") pod "87d01e17-30f4-4147-979f-39e60ed95e9a" (UID: "87d01e17-30f4-4147-979f-39e60ed95e9a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:12:53 crc kubenswrapper[4985]: I0127 09:12:53.675184 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87d01e17-30f4-4147-979f-39e60ed95e9a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "87d01e17-30f4-4147-979f-39e60ed95e9a" (UID: "87d01e17-30f4-4147-979f-39e60ed95e9a"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:12:53 crc kubenswrapper[4985]: I0127 09:12:53.688924 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87d01e17-30f4-4147-979f-39e60ed95e9a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "87d01e17-30f4-4147-979f-39e60ed95e9a" (UID: "87d01e17-30f4-4147-979f-39e60ed95e9a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:12:53 crc kubenswrapper[4985]: I0127 09:12:53.691354 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87d01e17-30f4-4147-979f-39e60ed95e9a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "87d01e17-30f4-4147-979f-39e60ed95e9a" (UID: "87d01e17-30f4-4147-979f-39e60ed95e9a"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:12:53 crc kubenswrapper[4985]: I0127 09:12:53.768128 4985 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/87d01e17-30f4-4147-979f-39e60ed95e9a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 09:12:53 crc kubenswrapper[4985]: I0127 09:12:53.768330 4985 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/87d01e17-30f4-4147-979f-39e60ed95e9a-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 09:12:53 crc kubenswrapper[4985]: I0127 09:12:53.768393 4985 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/87d01e17-30f4-4147-979f-39e60ed95e9a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 09:12:53 crc kubenswrapper[4985]: I0127 09:12:53.768447 4985 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/87d01e17-30f4-4147-979f-39e60ed95e9a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 09:12:53 crc kubenswrapper[4985]: I0127 09:12:53.768501 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87d01e17-30f4-4147-979f-39e60ed95e9a-config\") on node \"crc\" DevicePath \"\"" Jan 27 09:12:54 crc kubenswrapper[4985]: I0127 09:12:54.104761 4985 generic.go:334] "Generic (PLEG): container finished" podID="02c25b4e-dee3-4466-9d56-f74c18a36ba5" containerID="5dd6a1248190b55679fbb98c1769e89ec0104f3c2607dac8853ba6ba0504fbd5" exitCode=0 Jan 27 09:12:54 crc kubenswrapper[4985]: I0127 09:12:54.104829 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75dbb546bf-9qv57" event={"ID":"02c25b4e-dee3-4466-9d56-f74c18a36ba5","Type":"ContainerDied","Data":"5dd6a1248190b55679fbb98c1769e89ec0104f3c2607dac8853ba6ba0504fbd5"} Jan 27 09:12:54 crc kubenswrapper[4985]: I0127 
09:12:54.126672 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-774ff5bf6d-xl8xr" event={"ID":"4836a522-8ff0-48c0-837d-c1785dee8378","Type":"ContainerStarted","Data":"a5910ea74453ddb6b86f9dc8836fd2ef32725d4420c4401408eb9094e8854e83"} Jan 27 09:12:54 crc kubenswrapper[4985]: I0127 09:12:54.126719 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-774ff5bf6d-xl8xr" event={"ID":"4836a522-8ff0-48c0-837d-c1785dee8378","Type":"ContainerStarted","Data":"d2ecc272d43218e713ed4260e7f22b752e76b206f1cadb252aef3eb9db39fb7c"} Jan 27 09:12:54 crc kubenswrapper[4985]: I0127 09:12:54.126729 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-774ff5bf6d-xl8xr" event={"ID":"4836a522-8ff0-48c0-837d-c1785dee8378","Type":"ContainerStarted","Data":"b1923deb801a24310843e58fe728b4981cc636c5d4527f51d128866c2117f1a7"} Jan 27 09:12:54 crc kubenswrapper[4985]: I0127 09:12:54.127780 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-774ff5bf6d-xl8xr" Jan 27 09:12:54 crc kubenswrapper[4985]: I0127 09:12:54.127805 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-774ff5bf6d-xl8xr" Jan 27 09:12:54 crc kubenswrapper[4985]: I0127 09:12:54.153895 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143","Type":"ContainerStarted","Data":"5ceefe37ab151df25a5444acedc002b6955d7c6aec51b2bcb96ed2bf89041d47"} Jan 27 09:12:54 crc kubenswrapper[4985]: I0127 09:12:54.162399 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-774ff5bf6d-xl8xr" podStartSLOduration=3.162380623 podStartE2EDuration="3.162380623s" podCreationTimestamp="2026-01-27 09:12:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 
09:12:54.15425216 +0000 UTC m=+1158.445347021" watchObservedRunningTime="2026-01-27 09:12:54.162380623 +0000 UTC m=+1158.453475464" Jan 27 09:12:54 crc kubenswrapper[4985]: I0127 09:12:54.171494 4985 generic.go:334] "Generic (PLEG): container finished" podID="87d01e17-30f4-4147-979f-39e60ed95e9a" containerID="b92d24c2703963989ae56d2139218e84e3344d1f849633a21fc8daced0c07d77" exitCode=0 Jan 27 09:12:54 crc kubenswrapper[4985]: I0127 09:12:54.171570 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6d6ddd89-ln65x" event={"ID":"87d01e17-30f4-4147-979f-39e60ed95e9a","Type":"ContainerDied","Data":"b92d24c2703963989ae56d2139218e84e3344d1f849633a21fc8daced0c07d77"} Jan 27 09:12:54 crc kubenswrapper[4985]: I0127 09:12:54.171592 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6d6ddd89-ln65x" event={"ID":"87d01e17-30f4-4147-979f-39e60ed95e9a","Type":"ContainerDied","Data":"d10c65e9a302b146436d57dbf3774d189a911cd65d94ad50b7dadc94bab50ed9"} Jan 27 09:12:54 crc kubenswrapper[4985]: I0127 09:12:54.171620 4985 scope.go:117] "RemoveContainer" containerID="b92d24c2703963989ae56d2139218e84e3344d1f849633a21fc8daced0c07d77" Jan 27 09:12:54 crc kubenswrapper[4985]: I0127 09:12:54.171723 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f6d6ddd89-ln65x" Jan 27 09:12:54 crc kubenswrapper[4985]: I0127 09:12:54.190398 4985 generic.go:334] "Generic (PLEG): container finished" podID="61437724-d73d-4fe5-afbc-b4994d1eda63" containerID="c7b2e4dfca319edfbad0e43a2d1f5ed3503e24af6027d1a04b9e68d51119af35" exitCode=0 Jan 27 09:12:54 crc kubenswrapper[4985]: I0127 09:12:54.190648 4985 generic.go:334] "Generic (PLEG): container finished" podID="61437724-d73d-4fe5-afbc-b4994d1eda63" containerID="a3f073b553dbb350ad0bb13a1a8e9a64bd892b0e20d96245bb21f9941233a8bf" exitCode=2 Jan 27 09:12:54 crc kubenswrapper[4985]: I0127 09:12:54.190718 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"61437724-d73d-4fe5-afbc-b4994d1eda63","Type":"ContainerDied","Data":"c7b2e4dfca319edfbad0e43a2d1f5ed3503e24af6027d1a04b9e68d51119af35"} Jan 27 09:12:54 crc kubenswrapper[4985]: I0127 09:12:54.190788 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"61437724-d73d-4fe5-afbc-b4994d1eda63","Type":"ContainerDied","Data":"a3f073b553dbb350ad0bb13a1a8e9a64bd892b0e20d96245bb21f9941233a8bf"} Jan 27 09:12:54 crc kubenswrapper[4985]: I0127 09:12:54.349874 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f6d6ddd89-ln65x"] Jan 27 09:12:54 crc kubenswrapper[4985]: I0127 09:12:54.368883 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6f6d6ddd89-ln65x"] Jan 27 09:12:54 crc kubenswrapper[4985]: I0127 09:12:54.470813 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87d01e17-30f4-4147-979f-39e60ed95e9a" path="/var/lib/kubelet/pods/87d01e17-30f4-4147-979f-39e60ed95e9a/volumes" Jan 27 09:12:55 crc kubenswrapper[4985]: I0127 09:12:55.203766 4985 scope.go:117] "RemoveContainer" containerID="b92d24c2703963989ae56d2139218e84e3344d1f849633a21fc8daced0c07d77" Jan 27 09:12:55 crc kubenswrapper[4985]: E0127 09:12:55.236719 4985 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b92d24c2703963989ae56d2139218e84e3344d1f849633a21fc8daced0c07d77\": container with ID starting with b92d24c2703963989ae56d2139218e84e3344d1f849633a21fc8daced0c07d77 not found: ID does not exist" containerID="b92d24c2703963989ae56d2139218e84e3344d1f849633a21fc8daced0c07d77" Jan 27 09:12:55 crc kubenswrapper[4985]: I0127 09:12:55.236778 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b92d24c2703963989ae56d2139218e84e3344d1f849633a21fc8daced0c07d77"} err="failed to get container status \"b92d24c2703963989ae56d2139218e84e3344d1f849633a21fc8daced0c07d77\": rpc error: code = NotFound desc = could not find container \"b92d24c2703963989ae56d2139218e84e3344d1f849633a21fc8daced0c07d77\": container with ID starting with b92d24c2703963989ae56d2139218e84e3344d1f849633a21fc8daced0c07d77 not found: ID does not exist" Jan 27 09:12:55 crc kubenswrapper[4985]: I0127 09:12:55.260052 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143","Type":"ContainerStarted","Data":"db9643d6852ffad7ec02f5ef9c00d9439d1ef26977ddddfe280f5a11050f84ae"} Jan 27 09:12:55 crc kubenswrapper[4985]: I0127 09:12:55.260794 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143" containerName="cinder-api-log" containerID="cri-o://5ceefe37ab151df25a5444acedc002b6955d7c6aec51b2bcb96ed2bf89041d47" gracePeriod=30 Jan 27 09:12:55 crc kubenswrapper[4985]: I0127 09:12:55.260973 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 27 09:12:55 crc kubenswrapper[4985]: I0127 09:12:55.261644 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" 
podUID="c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143" containerName="cinder-api" containerID="cri-o://db9643d6852ffad7ec02f5ef9c00d9439d1ef26977ddddfe280f5a11050f84ae" gracePeriod=30 Jan 27 09:12:55 crc kubenswrapper[4985]: I0127 09:12:55.278185 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"29c473e3-7062-4725-a515-928807284b8d","Type":"ContainerStarted","Data":"653f78ccd9185d7dbc7fea059a2bb640b42724493b0cfbb8599796f081a12cf2"} Jan 27 09:12:55 crc kubenswrapper[4985]: I0127 09:12:55.294980 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.294955984 podStartE2EDuration="5.294955984s" podCreationTimestamp="2026-01-27 09:12:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:12:55.284331883 +0000 UTC m=+1159.575426724" watchObservedRunningTime="2026-01-27 09:12:55.294955984 +0000 UTC m=+1159.586050825" Jan 27 09:12:55 crc kubenswrapper[4985]: I0127 09:12:55.307087 4985 generic.go:334] "Generic (PLEG): container finished" podID="61437724-d73d-4fe5-afbc-b4994d1eda63" containerID="23a65461941d887d16d18359c56276c760d24c9b64db33e840172ef73ae0062f" exitCode=0 Jan 27 09:12:55 crc kubenswrapper[4985]: I0127 09:12:55.307158 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"61437724-d73d-4fe5-afbc-b4994d1eda63","Type":"ContainerDied","Data":"23a65461941d887d16d18359c56276c760d24c9b64db33e840172ef73ae0062f"} Jan 27 09:12:56 crc kubenswrapper[4985]: I0127 09:12:56.319626 4985 generic.go:334] "Generic (PLEG): container finished" podID="c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143" containerID="db9643d6852ffad7ec02f5ef9c00d9439d1ef26977ddddfe280f5a11050f84ae" exitCode=0 Jan 27 09:12:56 crc kubenswrapper[4985]: I0127 09:12:56.320002 4985 generic.go:334] "Generic (PLEG): container finished" 
podID="c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143" containerID="5ceefe37ab151df25a5444acedc002b6955d7c6aec51b2bcb96ed2bf89041d47" exitCode=143 Jan 27 09:12:56 crc kubenswrapper[4985]: I0127 09:12:56.319703 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143","Type":"ContainerDied","Data":"db9643d6852ffad7ec02f5ef9c00d9439d1ef26977ddddfe280f5a11050f84ae"} Jan 27 09:12:56 crc kubenswrapper[4985]: I0127 09:12:56.320042 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143","Type":"ContainerDied","Data":"5ceefe37ab151df25a5444acedc002b6955d7c6aec51b2bcb96ed2bf89041d47"} Jan 27 09:12:56 crc kubenswrapper[4985]: I0127 09:12:56.422238 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-85bdc684db-7q85p" Jan 27 09:12:56 crc kubenswrapper[4985]: I0127 09:12:56.650226 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-d4789b966-88v9q"] Jan 27 09:12:56 crc kubenswrapper[4985]: E0127 09:12:56.650647 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87d01e17-30f4-4147-979f-39e60ed95e9a" containerName="init" Jan 27 09:12:56 crc kubenswrapper[4985]: I0127 09:12:56.650664 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="87d01e17-30f4-4147-979f-39e60ed95e9a" containerName="init" Jan 27 09:12:56 crc kubenswrapper[4985]: I0127 09:12:56.650829 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="87d01e17-30f4-4147-979f-39e60ed95e9a" containerName="init" Jan 27 09:12:56 crc kubenswrapper[4985]: I0127 09:12:56.651715 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-d4789b966-88v9q" Jan 27 09:12:56 crc kubenswrapper[4985]: I0127 09:12:56.658556 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 27 09:12:56 crc kubenswrapper[4985]: I0127 09:12:56.659063 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 27 09:12:56 crc kubenswrapper[4985]: I0127 09:12:56.684219 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-d4789b966-88v9q"] Jan 27 09:12:56 crc kubenswrapper[4985]: I0127 09:12:56.740638 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cce884fa-873f-4a46-9caa-b8f88720db78-config-data-custom\") pod \"barbican-api-d4789b966-88v9q\" (UID: \"cce884fa-873f-4a46-9caa-b8f88720db78\") " pod="openstack/barbican-api-d4789b966-88v9q" Jan 27 09:12:56 crc kubenswrapper[4985]: I0127 09:12:56.740716 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgv9v\" (UniqueName: \"kubernetes.io/projected/cce884fa-873f-4a46-9caa-b8f88720db78-kube-api-access-hgv9v\") pod \"barbican-api-d4789b966-88v9q\" (UID: \"cce884fa-873f-4a46-9caa-b8f88720db78\") " pod="openstack/barbican-api-d4789b966-88v9q" Jan 27 09:12:56 crc kubenswrapper[4985]: I0127 09:12:56.740785 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cce884fa-873f-4a46-9caa-b8f88720db78-logs\") pod \"barbican-api-d4789b966-88v9q\" (UID: \"cce884fa-873f-4a46-9caa-b8f88720db78\") " pod="openstack/barbican-api-d4789b966-88v9q" Jan 27 09:12:56 crc kubenswrapper[4985]: I0127 09:12:56.740807 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/cce884fa-873f-4a46-9caa-b8f88720db78-internal-tls-certs\") pod \"barbican-api-d4789b966-88v9q\" (UID: \"cce884fa-873f-4a46-9caa-b8f88720db78\") " pod="openstack/barbican-api-d4789b966-88v9q" Jan 27 09:12:56 crc kubenswrapper[4985]: I0127 09:12:56.740823 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cce884fa-873f-4a46-9caa-b8f88720db78-combined-ca-bundle\") pod \"barbican-api-d4789b966-88v9q\" (UID: \"cce884fa-873f-4a46-9caa-b8f88720db78\") " pod="openstack/barbican-api-d4789b966-88v9q" Jan 27 09:12:56 crc kubenswrapper[4985]: I0127 09:12:56.740840 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cce884fa-873f-4a46-9caa-b8f88720db78-config-data\") pod \"barbican-api-d4789b966-88v9q\" (UID: \"cce884fa-873f-4a46-9caa-b8f88720db78\") " pod="openstack/barbican-api-d4789b966-88v9q" Jan 27 09:12:56 crc kubenswrapper[4985]: I0127 09:12:56.740894 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cce884fa-873f-4a46-9caa-b8f88720db78-public-tls-certs\") pod \"barbican-api-d4789b966-88v9q\" (UID: \"cce884fa-873f-4a46-9caa-b8f88720db78\") " pod="openstack/barbican-api-d4789b966-88v9q" Jan 27 09:12:56 crc kubenswrapper[4985]: I0127 09:12:56.817916 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-594666745c-h8zcv"] Jan 27 09:12:56 crc kubenswrapper[4985]: I0127 09:12:56.818153 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-594666745c-h8zcv" podUID="fbe6009e-a66b-4082-b535-ec263c9e3d1a" containerName="neutron-api" containerID="cri-o://cb462cfeaf054d29c4bb94b99be2d82b0753232e9c71dcb487a4b09f42c87209" gracePeriod=30 Jan 27 09:12:56 crc kubenswrapper[4985]: I0127 
09:12:56.818268 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-594666745c-h8zcv" podUID="fbe6009e-a66b-4082-b535-ec263c9e3d1a" containerName="neutron-httpd" containerID="cri-o://1f9abe6c7d00c64798e599037d08ec3a3c7b58e57504d0941ea25190bdda50ec" gracePeriod=30 Jan 27 09:12:56 crc kubenswrapper[4985]: I0127 09:12:56.847904 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cce884fa-873f-4a46-9caa-b8f88720db78-logs\") pod \"barbican-api-d4789b966-88v9q\" (UID: \"cce884fa-873f-4a46-9caa-b8f88720db78\") " pod="openstack/barbican-api-d4789b966-88v9q" Jan 27 09:12:56 crc kubenswrapper[4985]: I0127 09:12:56.847952 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cce884fa-873f-4a46-9caa-b8f88720db78-internal-tls-certs\") pod \"barbican-api-d4789b966-88v9q\" (UID: \"cce884fa-873f-4a46-9caa-b8f88720db78\") " pod="openstack/barbican-api-d4789b966-88v9q" Jan 27 09:12:56 crc kubenswrapper[4985]: I0127 09:12:56.847974 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cce884fa-873f-4a46-9caa-b8f88720db78-combined-ca-bundle\") pod \"barbican-api-d4789b966-88v9q\" (UID: \"cce884fa-873f-4a46-9caa-b8f88720db78\") " pod="openstack/barbican-api-d4789b966-88v9q" Jan 27 09:12:56 crc kubenswrapper[4985]: I0127 09:12:56.847991 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cce884fa-873f-4a46-9caa-b8f88720db78-config-data\") pod \"barbican-api-d4789b966-88v9q\" (UID: \"cce884fa-873f-4a46-9caa-b8f88720db78\") " pod="openstack/barbican-api-d4789b966-88v9q" Jan 27 09:12:56 crc kubenswrapper[4985]: I0127 09:12:56.848064 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cce884fa-873f-4a46-9caa-b8f88720db78-public-tls-certs\") pod \"barbican-api-d4789b966-88v9q\" (UID: \"cce884fa-873f-4a46-9caa-b8f88720db78\") " pod="openstack/barbican-api-d4789b966-88v9q" Jan 27 09:12:56 crc kubenswrapper[4985]: I0127 09:12:56.848094 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cce884fa-873f-4a46-9caa-b8f88720db78-config-data-custom\") pod \"barbican-api-d4789b966-88v9q\" (UID: \"cce884fa-873f-4a46-9caa-b8f88720db78\") " pod="openstack/barbican-api-d4789b966-88v9q" Jan 27 09:12:56 crc kubenswrapper[4985]: I0127 09:12:56.848142 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgv9v\" (UniqueName: \"kubernetes.io/projected/cce884fa-873f-4a46-9caa-b8f88720db78-kube-api-access-hgv9v\") pod \"barbican-api-d4789b966-88v9q\" (UID: \"cce884fa-873f-4a46-9caa-b8f88720db78\") " pod="openstack/barbican-api-d4789b966-88v9q" Jan 27 09:12:56 crc kubenswrapper[4985]: I0127 09:12:56.851696 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cce884fa-873f-4a46-9caa-b8f88720db78-logs\") pod \"barbican-api-d4789b966-88v9q\" (UID: \"cce884fa-873f-4a46-9caa-b8f88720db78\") " pod="openstack/barbican-api-d4789b966-88v9q" Jan 27 09:12:56 crc kubenswrapper[4985]: I0127 09:12:56.869327 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7645cd55cc-6b9mt"] Jan 27 09:12:56 crc kubenswrapper[4985]: I0127 09:12:56.873138 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cce884fa-873f-4a46-9caa-b8f88720db78-internal-tls-certs\") pod \"barbican-api-d4789b966-88v9q\" (UID: \"cce884fa-873f-4a46-9caa-b8f88720db78\") " pod="openstack/barbican-api-d4789b966-88v9q" Jan 27 09:12:56 crc kubenswrapper[4985]: I0127 
09:12:56.879917 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cce884fa-873f-4a46-9caa-b8f88720db78-config-data-custom\") pod \"barbican-api-d4789b966-88v9q\" (UID: \"cce884fa-873f-4a46-9caa-b8f88720db78\") " pod="openstack/barbican-api-d4789b966-88v9q" Jan 27 09:12:56 crc kubenswrapper[4985]: I0127 09:12:56.879960 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7645cd55cc-6b9mt" Jan 27 09:12:56 crc kubenswrapper[4985]: I0127 09:12:56.893659 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cce884fa-873f-4a46-9caa-b8f88720db78-config-data\") pod \"barbican-api-d4789b966-88v9q\" (UID: \"cce884fa-873f-4a46-9caa-b8f88720db78\") " pod="openstack/barbican-api-d4789b966-88v9q" Jan 27 09:12:56 crc kubenswrapper[4985]: I0127 09:12:56.894122 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cce884fa-873f-4a46-9caa-b8f88720db78-public-tls-certs\") pod \"barbican-api-d4789b966-88v9q\" (UID: \"cce884fa-873f-4a46-9caa-b8f88720db78\") " pod="openstack/barbican-api-d4789b966-88v9q" Jan 27 09:12:56 crc kubenswrapper[4985]: I0127 09:12:56.920584 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgv9v\" (UniqueName: \"kubernetes.io/projected/cce884fa-873f-4a46-9caa-b8f88720db78-kube-api-access-hgv9v\") pod \"barbican-api-d4789b966-88v9q\" (UID: \"cce884fa-873f-4a46-9caa-b8f88720db78\") " pod="openstack/barbican-api-d4789b966-88v9q" Jan 27 09:12:56 crc kubenswrapper[4985]: I0127 09:12:56.923085 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cce884fa-873f-4a46-9caa-b8f88720db78-combined-ca-bundle\") pod \"barbican-api-d4789b966-88v9q\" (UID: 
\"cce884fa-873f-4a46-9caa-b8f88720db78\") " pod="openstack/barbican-api-d4789b966-88v9q" Jan 27 09:12:56 crc kubenswrapper[4985]: I0127 09:12:56.966287 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7645cd55cc-6b9mt"] Jan 27 09:12:56 crc kubenswrapper[4985]: I0127 09:12:56.996744 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-d4789b966-88v9q" Jan 27 09:12:57 crc kubenswrapper[4985]: I0127 09:12:57.055638 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0982e77-fbf8-4db6-a5b4-359ec47691b4-ovndb-tls-certs\") pod \"neutron-7645cd55cc-6b9mt\" (UID: \"b0982e77-fbf8-4db6-a5b4-359ec47691b4\") " pod="openstack/neutron-7645cd55cc-6b9mt" Jan 27 09:12:57 crc kubenswrapper[4985]: I0127 09:12:57.055695 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0982e77-fbf8-4db6-a5b4-359ec47691b4-public-tls-certs\") pod \"neutron-7645cd55cc-6b9mt\" (UID: \"b0982e77-fbf8-4db6-a5b4-359ec47691b4\") " pod="openstack/neutron-7645cd55cc-6b9mt" Jan 27 09:12:57 crc kubenswrapper[4985]: I0127 09:12:57.055729 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b0982e77-fbf8-4db6-a5b4-359ec47691b4-config\") pod \"neutron-7645cd55cc-6b9mt\" (UID: \"b0982e77-fbf8-4db6-a5b4-359ec47691b4\") " pod="openstack/neutron-7645cd55cc-6b9mt" Jan 27 09:12:57 crc kubenswrapper[4985]: I0127 09:12:57.055764 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0982e77-fbf8-4db6-a5b4-359ec47691b4-combined-ca-bundle\") pod \"neutron-7645cd55cc-6b9mt\" (UID: \"b0982e77-fbf8-4db6-a5b4-359ec47691b4\") " 
pod="openstack/neutron-7645cd55cc-6b9mt" Jan 27 09:12:57 crc kubenswrapper[4985]: I0127 09:12:57.055799 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b0982e77-fbf8-4db6-a5b4-359ec47691b4-httpd-config\") pod \"neutron-7645cd55cc-6b9mt\" (UID: \"b0982e77-fbf8-4db6-a5b4-359ec47691b4\") " pod="openstack/neutron-7645cd55cc-6b9mt" Jan 27 09:12:57 crc kubenswrapper[4985]: I0127 09:12:57.055821 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9x95\" (UniqueName: \"kubernetes.io/projected/b0982e77-fbf8-4db6-a5b4-359ec47691b4-kube-api-access-g9x95\") pod \"neutron-7645cd55cc-6b9mt\" (UID: \"b0982e77-fbf8-4db6-a5b4-359ec47691b4\") " pod="openstack/neutron-7645cd55cc-6b9mt" Jan 27 09:12:57 crc kubenswrapper[4985]: I0127 09:12:57.055857 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0982e77-fbf8-4db6-a5b4-359ec47691b4-internal-tls-certs\") pod \"neutron-7645cd55cc-6b9mt\" (UID: \"b0982e77-fbf8-4db6-a5b4-359ec47691b4\") " pod="openstack/neutron-7645cd55cc-6b9mt" Jan 27 09:12:57 crc kubenswrapper[4985]: I0127 09:12:57.158188 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0982e77-fbf8-4db6-a5b4-359ec47691b4-ovndb-tls-certs\") pod \"neutron-7645cd55cc-6b9mt\" (UID: \"b0982e77-fbf8-4db6-a5b4-359ec47691b4\") " pod="openstack/neutron-7645cd55cc-6b9mt" Jan 27 09:12:57 crc kubenswrapper[4985]: I0127 09:12:57.158237 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0982e77-fbf8-4db6-a5b4-359ec47691b4-public-tls-certs\") pod \"neutron-7645cd55cc-6b9mt\" (UID: \"b0982e77-fbf8-4db6-a5b4-359ec47691b4\") " 
pod="openstack/neutron-7645cd55cc-6b9mt" Jan 27 09:12:57 crc kubenswrapper[4985]: I0127 09:12:57.158272 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b0982e77-fbf8-4db6-a5b4-359ec47691b4-config\") pod \"neutron-7645cd55cc-6b9mt\" (UID: \"b0982e77-fbf8-4db6-a5b4-359ec47691b4\") " pod="openstack/neutron-7645cd55cc-6b9mt" Jan 27 09:12:57 crc kubenswrapper[4985]: I0127 09:12:57.158309 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0982e77-fbf8-4db6-a5b4-359ec47691b4-combined-ca-bundle\") pod \"neutron-7645cd55cc-6b9mt\" (UID: \"b0982e77-fbf8-4db6-a5b4-359ec47691b4\") " pod="openstack/neutron-7645cd55cc-6b9mt" Jan 27 09:12:57 crc kubenswrapper[4985]: I0127 09:12:57.158347 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b0982e77-fbf8-4db6-a5b4-359ec47691b4-httpd-config\") pod \"neutron-7645cd55cc-6b9mt\" (UID: \"b0982e77-fbf8-4db6-a5b4-359ec47691b4\") " pod="openstack/neutron-7645cd55cc-6b9mt" Jan 27 09:12:57 crc kubenswrapper[4985]: I0127 09:12:57.158370 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9x95\" (UniqueName: \"kubernetes.io/projected/b0982e77-fbf8-4db6-a5b4-359ec47691b4-kube-api-access-g9x95\") pod \"neutron-7645cd55cc-6b9mt\" (UID: \"b0982e77-fbf8-4db6-a5b4-359ec47691b4\") " pod="openstack/neutron-7645cd55cc-6b9mt" Jan 27 09:12:57 crc kubenswrapper[4985]: I0127 09:12:57.158406 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0982e77-fbf8-4db6-a5b4-359ec47691b4-internal-tls-certs\") pod \"neutron-7645cd55cc-6b9mt\" (UID: \"b0982e77-fbf8-4db6-a5b4-359ec47691b4\") " pod="openstack/neutron-7645cd55cc-6b9mt" Jan 27 09:12:57 crc kubenswrapper[4985]: I0127 
09:12:57.165445 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/b0982e77-fbf8-4db6-a5b4-359ec47691b4-config\") pod \"neutron-7645cd55cc-6b9mt\" (UID: \"b0982e77-fbf8-4db6-a5b4-359ec47691b4\") " pod="openstack/neutron-7645cd55cc-6b9mt" Jan 27 09:12:57 crc kubenswrapper[4985]: I0127 09:12:57.166928 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0982e77-fbf8-4db6-a5b4-359ec47691b4-internal-tls-certs\") pod \"neutron-7645cd55cc-6b9mt\" (UID: \"b0982e77-fbf8-4db6-a5b4-359ec47691b4\") " pod="openstack/neutron-7645cd55cc-6b9mt" Jan 27 09:12:57 crc kubenswrapper[4985]: I0127 09:12:57.169904 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0982e77-fbf8-4db6-a5b4-359ec47691b4-combined-ca-bundle\") pod \"neutron-7645cd55cc-6b9mt\" (UID: \"b0982e77-fbf8-4db6-a5b4-359ec47691b4\") " pod="openstack/neutron-7645cd55cc-6b9mt" Jan 27 09:12:57 crc kubenswrapper[4985]: I0127 09:12:57.170641 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b0982e77-fbf8-4db6-a5b4-359ec47691b4-httpd-config\") pod \"neutron-7645cd55cc-6b9mt\" (UID: \"b0982e77-fbf8-4db6-a5b4-359ec47691b4\") " pod="openstack/neutron-7645cd55cc-6b9mt" Jan 27 09:12:57 crc kubenswrapper[4985]: I0127 09:12:57.171047 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0982e77-fbf8-4db6-a5b4-359ec47691b4-ovndb-tls-certs\") pod \"neutron-7645cd55cc-6b9mt\" (UID: \"b0982e77-fbf8-4db6-a5b4-359ec47691b4\") " pod="openstack/neutron-7645cd55cc-6b9mt" Jan 27 09:12:57 crc kubenswrapper[4985]: I0127 09:12:57.177431 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9x95\" (UniqueName: 
\"kubernetes.io/projected/b0982e77-fbf8-4db6-a5b4-359ec47691b4-kube-api-access-g9x95\") pod \"neutron-7645cd55cc-6b9mt\" (UID: \"b0982e77-fbf8-4db6-a5b4-359ec47691b4\") " pod="openstack/neutron-7645cd55cc-6b9mt" Jan 27 09:12:57 crc kubenswrapper[4985]: I0127 09:12:57.181012 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0982e77-fbf8-4db6-a5b4-359ec47691b4-public-tls-certs\") pod \"neutron-7645cd55cc-6b9mt\" (UID: \"b0982e77-fbf8-4db6-a5b4-359ec47691b4\") " pod="openstack/neutron-7645cd55cc-6b9mt" Jan 27 09:12:57 crc kubenswrapper[4985]: I0127 09:12:57.231588 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-594666745c-h8zcv" podUID="fbe6009e-a66b-4082-b535-ec263c9e3d1a" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.156:9696/\": read tcp 10.217.0.2:34588->10.217.0.156:9696: read: connection reset by peer" Jan 27 09:12:57 crc kubenswrapper[4985]: I0127 09:12:57.284626 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7645cd55cc-6b9mt" Jan 27 09:12:58 crc kubenswrapper[4985]: I0127 09:12:58.235766 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 09:12:58 crc kubenswrapper[4985]: W0127 09:12:58.347764 4985 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod87d01e17_30f4_4147_979f_39e60ed95e9a.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod87d01e17_30f4_4147_979f_39e60ed95e9a.slice: no such file or directory Jan 27 09:12:58 crc kubenswrapper[4985]: W0127 09:12:58.373723 4985 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod61437724_d73d_4fe5_afbc_b4994d1eda63.slice/crio-conmon-c7b2e4dfca319edfbad0e43a2d1f5ed3503e24af6027d1a04b9e68d51119af35.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod61437724_d73d_4fe5_afbc_b4994d1eda63.slice/crio-conmon-c7b2e4dfca319edfbad0e43a2d1f5ed3503e24af6027d1a04b9e68d51119af35.scope: no such file or directory Jan 27 09:12:58 crc kubenswrapper[4985]: W0127 09:12:58.373770 4985 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod61437724_d73d_4fe5_afbc_b4994d1eda63.slice/crio-c7b2e4dfca319edfbad0e43a2d1f5ed3503e24af6027d1a04b9e68d51119af35.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod61437724_d73d_4fe5_afbc_b4994d1eda63.slice/crio-c7b2e4dfca319edfbad0e43a2d1f5ed3503e24af6027d1a04b9e68d51119af35.scope: no such file or directory Jan 27 09:12:58 crc kubenswrapper[4985]: I0127 09:12:58.389041 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/61437724-d73d-4fe5-afbc-b4994d1eda63-log-httpd\") pod \"61437724-d73d-4fe5-afbc-b4994d1eda63\" (UID: 
\"61437724-d73d-4fe5-afbc-b4994d1eda63\") " Jan 27 09:12:58 crc kubenswrapper[4985]: I0127 09:12:58.389085 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/61437724-d73d-4fe5-afbc-b4994d1eda63-sg-core-conf-yaml\") pod \"61437724-d73d-4fe5-afbc-b4994d1eda63\" (UID: \"61437724-d73d-4fe5-afbc-b4994d1eda63\") " Jan 27 09:12:58 crc kubenswrapper[4985]: I0127 09:12:58.389209 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61437724-d73d-4fe5-afbc-b4994d1eda63-combined-ca-bundle\") pod \"61437724-d73d-4fe5-afbc-b4994d1eda63\" (UID: \"61437724-d73d-4fe5-afbc-b4994d1eda63\") " Jan 27 09:12:58 crc kubenswrapper[4985]: I0127 09:12:58.389236 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61437724-d73d-4fe5-afbc-b4994d1eda63-config-data\") pod \"61437724-d73d-4fe5-afbc-b4994d1eda63\" (UID: \"61437724-d73d-4fe5-afbc-b4994d1eda63\") " Jan 27 09:12:58 crc kubenswrapper[4985]: I0127 09:12:58.389308 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61437724-d73d-4fe5-afbc-b4994d1eda63-scripts\") pod \"61437724-d73d-4fe5-afbc-b4994d1eda63\" (UID: \"61437724-d73d-4fe5-afbc-b4994d1eda63\") " Jan 27 09:12:58 crc kubenswrapper[4985]: I0127 09:12:58.389326 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zv7fh\" (UniqueName: \"kubernetes.io/projected/61437724-d73d-4fe5-afbc-b4994d1eda63-kube-api-access-zv7fh\") pod \"61437724-d73d-4fe5-afbc-b4994d1eda63\" (UID: \"61437724-d73d-4fe5-afbc-b4994d1eda63\") " Jan 27 09:12:58 crc kubenswrapper[4985]: I0127 09:12:58.389431 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/61437724-d73d-4fe5-afbc-b4994d1eda63-run-httpd\") pod \"61437724-d73d-4fe5-afbc-b4994d1eda63\" (UID: \"61437724-d73d-4fe5-afbc-b4994d1eda63\") " Jan 27 09:12:58 crc kubenswrapper[4985]: I0127 09:12:58.392139 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61437724-d73d-4fe5-afbc-b4994d1eda63-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "61437724-d73d-4fe5-afbc-b4994d1eda63" (UID: "61437724-d73d-4fe5-afbc-b4994d1eda63"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:12:58 crc kubenswrapper[4985]: I0127 09:12:58.399002 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61437724-d73d-4fe5-afbc-b4994d1eda63-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "61437724-d73d-4fe5-afbc-b4994d1eda63" (UID: "61437724-d73d-4fe5-afbc-b4994d1eda63"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:12:58 crc kubenswrapper[4985]: I0127 09:12:58.401366 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61437724-d73d-4fe5-afbc-b4994d1eda63-scripts" (OuterVolumeSpecName: "scripts") pod "61437724-d73d-4fe5-afbc-b4994d1eda63" (UID: "61437724-d73d-4fe5-afbc-b4994d1eda63"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:12:58 crc kubenswrapper[4985]: I0127 09:12:58.403736 4985 generic.go:334] "Generic (PLEG): container finished" podID="c1a55e00-a92c-468e-b440-72254c05314e" containerID="d2f74e4c94aa628260eb17e7557a103b096c26699f38d1831b18054c547023c9" exitCode=137 Jan 27 09:12:58 crc kubenswrapper[4985]: I0127 09:12:58.403818 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8bc58698f-rrrdv" event={"ID":"c1a55e00-a92c-468e-b440-72254c05314e","Type":"ContainerDied","Data":"d2f74e4c94aa628260eb17e7557a103b096c26699f38d1831b18054c547023c9"} Jan 27 09:12:58 crc kubenswrapper[4985]: I0127 09:12:58.416819 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61437724-d73d-4fe5-afbc-b4994d1eda63-kube-api-access-zv7fh" (OuterVolumeSpecName: "kube-api-access-zv7fh") pod "61437724-d73d-4fe5-afbc-b4994d1eda63" (UID: "61437724-d73d-4fe5-afbc-b4994d1eda63"). InnerVolumeSpecName "kube-api-access-zv7fh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:12:58 crc kubenswrapper[4985]: I0127 09:12:58.419295 4985 generic.go:334] "Generic (PLEG): container finished" podID="78dc6815-3202-4aea-99b0-905363e0ef1e" containerID="a4b07c371f68b029087b28689de1f3bbde3f3ec765bf26e132cbf5e38e140b3e" exitCode=137 Jan 27 09:12:58 crc kubenswrapper[4985]: I0127 09:12:58.419363 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-77f8b4b57c-5gfx6" event={"ID":"78dc6815-3202-4aea-99b0-905363e0ef1e","Type":"ContainerDied","Data":"a4b07c371f68b029087b28689de1f3bbde3f3ec765bf26e132cbf5e38e140b3e"} Jan 27 09:12:58 crc kubenswrapper[4985]: I0127 09:12:58.431047 4985 generic.go:334] "Generic (PLEG): container finished" podID="fbe6009e-a66b-4082-b535-ec263c9e3d1a" containerID="1f9abe6c7d00c64798e599037d08ec3a3c7b58e57504d0941ea25190bdda50ec" exitCode=0 Jan 27 09:12:58 crc kubenswrapper[4985]: I0127 09:12:58.431140 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-594666745c-h8zcv" event={"ID":"fbe6009e-a66b-4082-b535-ec263c9e3d1a","Type":"ContainerDied","Data":"1f9abe6c7d00c64798e599037d08ec3a3c7b58e57504d0941ea25190bdda50ec"} Jan 27 09:12:58 crc kubenswrapper[4985]: I0127 09:12:58.460382 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 09:12:58 crc kubenswrapper[4985]: I0127 09:12:58.464458 4985 generic.go:334] "Generic (PLEG): container finished" podID="14f1ccc0-f3da-4c1b-b0cd-b0c3f1b4f363" containerID="49643251f88ce768aa087d6019abea8b00565a7cd22db67f9b8bcaae97610be0" exitCode=137 Jan 27 09:12:58 crc kubenswrapper[4985]: I0127 09:12:58.464485 4985 generic.go:334] "Generic (PLEG): container finished" podID="14f1ccc0-f3da-4c1b-b0cd-b0c3f1b4f363" containerID="910b9810ea9d46335a74b8ae95a287a497a675c8dfa7c31dbc56cd0fe6a8cca9" exitCode=137 Jan 27 09:12:58 crc kubenswrapper[4985]: I0127 09:12:58.492195 4985 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61437724-d73d-4fe5-afbc-b4994d1eda63-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 09:12:58 crc kubenswrapper[4985]: I0127 09:12:58.492243 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zv7fh\" (UniqueName: \"kubernetes.io/projected/61437724-d73d-4fe5-afbc-b4994d1eda63-kube-api-access-zv7fh\") on node \"crc\" DevicePath \"\"" Jan 27 09:12:58 crc kubenswrapper[4985]: I0127 09:12:58.492256 4985 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/61437724-d73d-4fe5-afbc-b4994d1eda63-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 09:12:58 crc kubenswrapper[4985]: I0127 09:12:58.492267 4985 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/61437724-d73d-4fe5-afbc-b4994d1eda63-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 09:12:58 crc kubenswrapper[4985]: I0127 09:12:58.673855 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61437724-d73d-4fe5-afbc-b4994d1eda63-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "61437724-d73d-4fe5-afbc-b4994d1eda63" (UID: "61437724-d73d-4fe5-afbc-b4994d1eda63"). 
InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:12:58 crc kubenswrapper[4985]: I0127 09:12:58.696492 4985 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/61437724-d73d-4fe5-afbc-b4994d1eda63-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 09:12:58 crc kubenswrapper[4985]: I0127 09:12:58.713780 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61437724-d73d-4fe5-afbc-b4994d1eda63-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "61437724-d73d-4fe5-afbc-b4994d1eda63" (UID: "61437724-d73d-4fe5-afbc-b4994d1eda63"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:12:58 crc kubenswrapper[4985]: I0127 09:12:58.731719 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61437724-d73d-4fe5-afbc-b4994d1eda63-config-data" (OuterVolumeSpecName: "config-data") pod "61437724-d73d-4fe5-afbc-b4994d1eda63" (UID: "61437724-d73d-4fe5-afbc-b4994d1eda63"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:12:58 crc kubenswrapper[4985]: I0127 09:12:58.802003 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61437724-d73d-4fe5-afbc-b4994d1eda63-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:12:58 crc kubenswrapper[4985]: I0127 09:12:58.802035 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61437724-d73d-4fe5-afbc-b4994d1eda63-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 09:12:58 crc kubenswrapper[4985]: I0127 09:12:58.903801 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"61437724-d73d-4fe5-afbc-b4994d1eda63","Type":"ContainerDied","Data":"41812e8b7a47e2f63883954c7a10bd1bf6e1122144e0c15bbf3a4d435bfaba88"} Jan 27 09:12:58 crc kubenswrapper[4985]: I0127 09:12:58.903860 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-779ccf4965-4dzg4" event={"ID":"14f1ccc0-f3da-4c1b-b0cd-b0c3f1b4f363","Type":"ContainerDied","Data":"49643251f88ce768aa087d6019abea8b00565a7cd22db67f9b8bcaae97610be0"} Jan 27 09:12:58 crc kubenswrapper[4985]: I0127 09:12:58.903877 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-779ccf4965-4dzg4" event={"ID":"14f1ccc0-f3da-4c1b-b0cd-b0c3f1b4f363","Type":"ContainerDied","Data":"910b9810ea9d46335a74b8ae95a287a497a675c8dfa7c31dbc56cd0fe6a8cca9"} Jan 27 09:12:58 crc kubenswrapper[4985]: I0127 09:12:58.904966 4985 scope.go:117] "RemoveContainer" containerID="c7b2e4dfca319edfbad0e43a2d1f5ed3503e24af6027d1a04b9e68d51119af35" Jan 27 09:12:59 crc kubenswrapper[4985]: I0127 09:12:59.051285 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-594666745c-h8zcv" podUID="fbe6009e-a66b-4082-b535-ec263c9e3d1a" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.156:9696/\": dial tcp 
10.217.0.156:9696: connect: connection refused" Jan 27 09:12:59 crc kubenswrapper[4985]: I0127 09:12:59.332244 4985 scope.go:117] "RemoveContainer" containerID="a3f073b553dbb350ad0bb13a1a8e9a64bd892b0e20d96245bb21f9941233a8bf" Jan 27 09:12:59 crc kubenswrapper[4985]: I0127 09:12:59.356743 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-d4789b966-88v9q"] Jan 27 09:12:59 crc kubenswrapper[4985]: E0127 09:12:59.384450 4985 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78dc6815_3202_4aea_99b0_905363e0ef1e.slice/crio-conmon-a4b07c371f68b029087b28689de1f3bbde3f3ec765bf26e132cbf5e38e140b3e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14f1ccc0_f3da_4c1b_b0cd_b0c3f1b4f363.slice/crio-conmon-49643251f88ce768aa087d6019abea8b00565a7cd22db67f9b8bcaae97610be0.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1a55e00_a92c_468e_b440_72254c05314e.slice/crio-conmon-d2f74e4c94aa628260eb17e7557a103b096c26699f38d1831b18054c547023c9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14f1ccc0_f3da_4c1b_b0cd_b0c3f1b4f363.slice/crio-49643251f88ce768aa087d6019abea8b00565a7cd22db67f9b8bcaae97610be0.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1a55e00_a92c_468e_b440_72254c05314e.slice/crio-caee8ea061cd5befe6d69010922f3541488f1c53d09190a33bb801be6d813d5c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod61437724_d73d_4fe5_afbc_b4994d1eda63.slice/crio-conmon-23a65461941d887d16d18359c56276c760d24c9b64db33e840172ef73ae0062f.scope\": 
RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod61437724_d73d_4fe5_afbc_b4994d1eda63.slice/crio-23a65461941d887d16d18359c56276c760d24c9b64db33e840172ef73ae0062f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14f1ccc0_f3da_4c1b_b0cd_b0c3f1b4f363.slice/crio-conmon-910b9810ea9d46335a74b8ae95a287a497a675c8dfa7c31dbc56cd0fe6a8cca9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78dc6815_3202_4aea_99b0_905363e0ef1e.slice/crio-a4b07c371f68b029087b28689de1f3bbde3f3ec765bf26e132cbf5e38e140b3e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod61437724_d73d_4fe5_afbc_b4994d1eda63.slice/crio-conmon-a3f073b553dbb350ad0bb13a1a8e9a64bd892b0e20d96245bb21f9941233a8bf.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod61437724_d73d_4fe5_afbc_b4994d1eda63.slice/crio-a3f073b553dbb350ad0bb13a1a8e9a64bd892b0e20d96245bb21f9941233a8bf.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78dc6815_3202_4aea_99b0_905363e0ef1e.slice/crio-edbcf5937465700859dc816db60e1e0552e996110cb8e072fffb5f8e7c5f91fd.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1a55e00_a92c_468e_b440_72254c05314e.slice/crio-conmon-caee8ea061cd5befe6d69010922f3541488f1c53d09190a33bb801be6d813d5c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14f1ccc0_f3da_4c1b_b0cd_b0c3f1b4f363.slice/crio-910b9810ea9d46335a74b8ae95a287a497a675c8dfa7c31dbc56cd0fe6a8cca9.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfbe6009e_a66b_4082_b535_ec263c9e3d1a.slice/crio-conmon-1f9abe6c7d00c64798e599037d08ec3a3c7b58e57504d0941ea25190bdda50ec.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78dc6815_3202_4aea_99b0_905363e0ef1e.slice/crio-conmon-edbcf5937465700859dc816db60e1e0552e996110cb8e072fffb5f8e7c5f91fd.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfbe6009e_a66b_4082_b535_ec263c9e3d1a.slice/crio-conmon-cb462cfeaf054d29c4bb94b99be2d82b0753232e9c71dcb487a4b09f42c87209.scope\": RecentStats: unable to find data in memory cache]" Jan 27 09:12:59 crc kubenswrapper[4985]: I0127 09:12:59.396950 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 27 09:12:59 crc kubenswrapper[4985]: I0127 09:12:59.518864 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-779ccf4965-4dzg4" event={"ID":"14f1ccc0-f3da-4c1b-b0cd-b0c3f1b4f363","Type":"ContainerDied","Data":"b3e1976e14cbb2f826cafadd1ceafb3a0e01448c55b2fa88a4ed46415c88cb55"} Jan 27 09:12:59 crc kubenswrapper[4985]: I0127 09:12:59.518932 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3e1976e14cbb2f826cafadd1ceafb3a0e01448c55b2fa88a4ed46415c88cb55" Jan 27 09:12:59 crc kubenswrapper[4985]: I0127 09:12:59.519643 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-779ccf4965-4dzg4" Jan 27 09:12:59 crc kubenswrapper[4985]: I0127 09:12:59.530817 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8bc58698f-rrrdv" event={"ID":"c1a55e00-a92c-468e-b440-72254c05314e","Type":"ContainerDied","Data":"caee8ea061cd5befe6d69010922f3541488f1c53d09190a33bb801be6d813d5c"} Jan 27 09:12:59 crc kubenswrapper[4985]: I0127 09:12:59.530756 4985 generic.go:334] "Generic (PLEG): container finished" podID="c1a55e00-a92c-468e-b440-72254c05314e" containerID="caee8ea061cd5befe6d69010922f3541488f1c53d09190a33bb801be6d813d5c" exitCode=137 Jan 27 09:12:59 crc kubenswrapper[4985]: I0127 09:12:59.531191 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8bc58698f-rrrdv" event={"ID":"c1a55e00-a92c-468e-b440-72254c05314e","Type":"ContainerDied","Data":"c41798b5561c82966c1a38d0576c61eef7bc32a921247e3af44181a7b53e0653"} Jan 27 09:12:59 crc kubenswrapper[4985]: I0127 09:12:59.531232 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c41798b5561c82966c1a38d0576c61eef7bc32a921247e3af44181a7b53e0653" Jan 27 09:12:59 crc kubenswrapper[4985]: I0127 09:12:59.538573 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7645cd55cc-6b9mt"] Jan 27 09:12:59 crc kubenswrapper[4985]: I0127 09:12:59.554660 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-8bc58698f-rrrdv" Jan 27 09:12:59 crc kubenswrapper[4985]: I0127 09:12:59.554829 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143-logs\") pod \"c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143\" (UID: \"c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143\") " Jan 27 09:12:59 crc kubenswrapper[4985]: I0127 09:12:59.554894 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143-combined-ca-bundle\") pod \"c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143\" (UID: \"c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143\") " Jan 27 09:12:59 crc kubenswrapper[4985]: I0127 09:12:59.555061 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143-etc-machine-id\") pod \"c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143\" (UID: \"c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143\") " Jan 27 09:12:59 crc kubenswrapper[4985]: I0127 09:12:59.555177 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143-config-data-custom\") pod \"c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143\" (UID: \"c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143\") " Jan 27 09:12:59 crc kubenswrapper[4985]: I0127 09:12:59.555251 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrtjq\" (UniqueName: \"kubernetes.io/projected/c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143-kube-api-access-vrtjq\") pod \"c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143\" (UID: \"c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143\") " Jan 27 09:12:59 crc kubenswrapper[4985]: I0127 09:12:59.555344 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143-scripts\") pod \"c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143\" (UID: \"c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143\") " Jan 27 09:12:59 crc kubenswrapper[4985]: I0127 09:12:59.555369 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143-config-data\") pod \"c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143\" (UID: \"c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143\") " Jan 27 09:12:59 crc kubenswrapper[4985]: I0127 09:12:59.560283 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143" (UID: "c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 09:12:59 crc kubenswrapper[4985]: I0127 09:12:59.571838 4985 generic.go:334] "Generic (PLEG): container finished" podID="78dc6815-3202-4aea-99b0-905363e0ef1e" containerID="edbcf5937465700859dc816db60e1e0552e996110cb8e072fffb5f8e7c5f91fd" exitCode=137 Jan 27 09:12:59 crc kubenswrapper[4985]: I0127 09:12:59.571924 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-77f8b4b57c-5gfx6" event={"ID":"78dc6815-3202-4aea-99b0-905363e0ef1e","Type":"ContainerDied","Data":"edbcf5937465700859dc816db60e1e0552e996110cb8e072fffb5f8e7c5f91fd"} Jan 27 09:12:59 crc kubenswrapper[4985]: I0127 09:12:59.571950 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-77f8b4b57c-5gfx6" event={"ID":"78dc6815-3202-4aea-99b0-905363e0ef1e","Type":"ContainerDied","Data":"1bbe3941aeed534fc6431db008d41f0384b3e672a4250415d13b174ae0445c78"} Jan 27 09:12:59 crc kubenswrapper[4985]: I0127 09:12:59.571961 4985 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="1bbe3941aeed534fc6431db008d41f0384b3e672a4250415d13b174ae0445c78" Jan 27 09:12:59 crc kubenswrapper[4985]: I0127 09:12:59.573506 4985 scope.go:117] "RemoveContainer" containerID="23a65461941d887d16d18359c56276c760d24c9b64db33e840172ef73ae0062f" Jan 27 09:12:59 crc kubenswrapper[4985]: I0127 09:12:59.573589 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143-logs" (OuterVolumeSpecName: "logs") pod "c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143" (UID: "c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:12:59 crc kubenswrapper[4985]: I0127 09:12:59.584078 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143","Type":"ContainerDied","Data":"5c7e3d85a835d0eb6ab730e4a173861d74c25e199c5d84677297ece0089733a2"} Jan 27 09:12:59 crc kubenswrapper[4985]: I0127 09:12:59.584209 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 27 09:12:59 crc kubenswrapper[4985]: I0127 09:12:59.591247 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143" (UID: "c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:12:59 crc kubenswrapper[4985]: I0127 09:12:59.599569 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-77f8b4b57c-5gfx6" Jan 27 09:12:59 crc kubenswrapper[4985]: I0127 09:12:59.599788 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143-kube-api-access-vrtjq" (OuterVolumeSpecName: "kube-api-access-vrtjq") pod "c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143" (UID: "c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143"). InnerVolumeSpecName "kube-api-access-vrtjq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:12:59 crc kubenswrapper[4985]: I0127 09:12:59.599893 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143-scripts" (OuterVolumeSpecName: "scripts") pod "c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143" (UID: "c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:12:59 crc kubenswrapper[4985]: I0127 09:12:59.601294 4985 generic.go:334] "Generic (PLEG): container finished" podID="fbe6009e-a66b-4082-b535-ec263c9e3d1a" containerID="cb462cfeaf054d29c4bb94b99be2d82b0753232e9c71dcb487a4b09f42c87209" exitCode=0 Jan 27 09:12:59 crc kubenswrapper[4985]: I0127 09:12:59.601366 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-594666745c-h8zcv" event={"ID":"fbe6009e-a66b-4082-b535-ec263c9e3d1a","Type":"ContainerDied","Data":"cb462cfeaf054d29c4bb94b99be2d82b0753232e9c71dcb487a4b09f42c87209"} Jan 27 09:12:59 crc kubenswrapper[4985]: I0127 09:12:59.608729 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-d4789b966-88v9q" event={"ID":"cce884fa-873f-4a46-9caa-b8f88720db78","Type":"ContainerStarted","Data":"6d1d1e0016dacb548c92a954f746ccc28b374a4a3ed34e3fa3ce1e58c3230d52"} Jan 27 09:12:59 crc kubenswrapper[4985]: W0127 09:12:59.644989 4985 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb0982e77_fbf8_4db6_a5b4_359ec47691b4.slice/crio-881926c884e4b59a8470adc6f282fe3b29f997f0e06358519d2b0f597fa15f12 WatchSource:0}: Error finding container 881926c884e4b59a8470adc6f282fe3b29f997f0e06358519d2b0f597fa15f12: Status 404 returned error can't find the container with id 881926c884e4b59a8470adc6f282fe3b29f997f0e06358519d2b0f597fa15f12 Jan 27 09:12:59 crc kubenswrapper[4985]: I0127 09:12:59.655977 4985 scope.go:117] "RemoveContainer" containerID="db9643d6852ffad7ec02f5ef9c00d9439d1ef26977ddddfe280f5a11050f84ae" Jan 27 09:12:59 crc kubenswrapper[4985]: I0127 09:12:59.656561 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6sctm\" (UniqueName: \"kubernetes.io/projected/c1a55e00-a92c-468e-b440-72254c05314e-kube-api-access-6sctm\") pod \"c1a55e00-a92c-468e-b440-72254c05314e\" (UID: \"c1a55e00-a92c-468e-b440-72254c05314e\") " Jan 27 09:12:59 crc kubenswrapper[4985]: I0127 09:12:59.656664 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/14f1ccc0-f3da-4c1b-b0cd-b0c3f1b4f363-scripts\") pod \"14f1ccc0-f3da-4c1b-b0cd-b0c3f1b4f363\" (UID: \"14f1ccc0-f3da-4c1b-b0cd-b0c3f1b4f363\") " Jan 27 09:12:59 crc kubenswrapper[4985]: I0127 09:12:59.656699 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c1a55e00-a92c-468e-b440-72254c05314e-config-data\") pod \"c1a55e00-a92c-468e-b440-72254c05314e\" (UID: \"c1a55e00-a92c-468e-b440-72254c05314e\") " Jan 27 09:12:59 crc kubenswrapper[4985]: I0127 09:12:59.656722 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c1a55e00-a92c-468e-b440-72254c05314e-scripts\") pod \"c1a55e00-a92c-468e-b440-72254c05314e\" (UID: \"c1a55e00-a92c-468e-b440-72254c05314e\") " Jan 27 
09:12:59 crc kubenswrapper[4985]: I0127 09:12:59.656867 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/14f1ccc0-f3da-4c1b-b0cd-b0c3f1b4f363-config-data\") pod \"14f1ccc0-f3da-4c1b-b0cd-b0c3f1b4f363\" (UID: \"14f1ccc0-f3da-4c1b-b0cd-b0c3f1b4f363\") " Jan 27 09:12:59 crc kubenswrapper[4985]: I0127 09:12:59.656899 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c1a55e00-a92c-468e-b440-72254c05314e-horizon-secret-key\") pod \"c1a55e00-a92c-468e-b440-72254c05314e\" (UID: \"c1a55e00-a92c-468e-b440-72254c05314e\") " Jan 27 09:12:59 crc kubenswrapper[4985]: I0127 09:12:59.656936 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1a55e00-a92c-468e-b440-72254c05314e-logs\") pod \"c1a55e00-a92c-468e-b440-72254c05314e\" (UID: \"c1a55e00-a92c-468e-b440-72254c05314e\") " Jan 27 09:12:59 crc kubenswrapper[4985]: I0127 09:12:59.656968 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14f1ccc0-f3da-4c1b-b0cd-b0c3f1b4f363-logs\") pod \"14f1ccc0-f3da-4c1b-b0cd-b0c3f1b4f363\" (UID: \"14f1ccc0-f3da-4c1b-b0cd-b0c3f1b4f363\") " Jan 27 09:12:59 crc kubenswrapper[4985]: I0127 09:12:59.656997 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/14f1ccc0-f3da-4c1b-b0cd-b0c3f1b4f363-horizon-secret-key\") pod \"14f1ccc0-f3da-4c1b-b0cd-b0c3f1b4f363\" (UID: \"14f1ccc0-f3da-4c1b-b0cd-b0c3f1b4f363\") " Jan 27 09:12:59 crc kubenswrapper[4985]: I0127 09:12:59.657014 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sdb78\" (UniqueName: \"kubernetes.io/projected/14f1ccc0-f3da-4c1b-b0cd-b0c3f1b4f363-kube-api-access-sdb78\") 
pod \"14f1ccc0-f3da-4c1b-b0cd-b0c3f1b4f363\" (UID: \"14f1ccc0-f3da-4c1b-b0cd-b0c3f1b4f363\") " Jan 27 09:12:59 crc kubenswrapper[4985]: I0127 09:12:59.657409 4985 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 09:12:59 crc kubenswrapper[4985]: I0127 09:12:59.657427 4985 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143-logs\") on node \"crc\" DevicePath \"\"" Jan 27 09:12:59 crc kubenswrapper[4985]: I0127 09:12:59.657435 4985 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 27 09:12:59 crc kubenswrapper[4985]: I0127 09:12:59.657446 4985 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 09:12:59 crc kubenswrapper[4985]: I0127 09:12:59.657455 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vrtjq\" (UniqueName: \"kubernetes.io/projected/c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143-kube-api-access-vrtjq\") on node \"crc\" DevicePath \"\"" Jan 27 09:12:59 crc kubenswrapper[4985]: I0127 09:12:59.658589 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14f1ccc0-f3da-4c1b-b0cd-b0c3f1b4f363-logs" (OuterVolumeSpecName: "logs") pod "14f1ccc0-f3da-4c1b-b0cd-b0c3f1b4f363" (UID: "14f1ccc0-f3da-4c1b-b0cd-b0c3f1b4f363"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:12:59 crc kubenswrapper[4985]: I0127 09:12:59.658647 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1a55e00-a92c-468e-b440-72254c05314e-logs" (OuterVolumeSpecName: "logs") pod "c1a55e00-a92c-468e-b440-72254c05314e" (UID: "c1a55e00-a92c-468e-b440-72254c05314e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:12:59 crc kubenswrapper[4985]: I0127 09:12:59.703622 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1a55e00-a92c-468e-b440-72254c05314e-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "c1a55e00-a92c-468e-b440-72254c05314e" (UID: "c1a55e00-a92c-468e-b440-72254c05314e"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:12:59 crc kubenswrapper[4985]: I0127 09:12:59.703855 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14f1ccc0-f3da-4c1b-b0cd-b0c3f1b4f363-kube-api-access-sdb78" (OuterVolumeSpecName: "kube-api-access-sdb78") pod "14f1ccc0-f3da-4c1b-b0cd-b0c3f1b4f363" (UID: "14f1ccc0-f3da-4c1b-b0cd-b0c3f1b4f363"). InnerVolumeSpecName "kube-api-access-sdb78". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:12:59 crc kubenswrapper[4985]: I0127 09:12:59.716886 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1a55e00-a92c-468e-b440-72254c05314e-kube-api-access-6sctm" (OuterVolumeSpecName: "kube-api-access-6sctm") pod "c1a55e00-a92c-468e-b440-72254c05314e" (UID: "c1a55e00-a92c-468e-b440-72254c05314e"). InnerVolumeSpecName "kube-api-access-6sctm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:12:59 crc kubenswrapper[4985]: I0127 09:12:59.717154 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14f1ccc0-f3da-4c1b-b0cd-b0c3f1b4f363-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "14f1ccc0-f3da-4c1b-b0cd-b0c3f1b4f363" (UID: "14f1ccc0-f3da-4c1b-b0cd-b0c3f1b4f363"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:12:59 crc kubenswrapper[4985]: I0127 09:12:59.758262 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qz6tf\" (UniqueName: \"kubernetes.io/projected/78dc6815-3202-4aea-99b0-905363e0ef1e-kube-api-access-qz6tf\") pod \"78dc6815-3202-4aea-99b0-905363e0ef1e\" (UID: \"78dc6815-3202-4aea-99b0-905363e0ef1e\") " Jan 27 09:12:59 crc kubenswrapper[4985]: I0127 09:12:59.758315 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78dc6815-3202-4aea-99b0-905363e0ef1e-logs\") pod \"78dc6815-3202-4aea-99b0-905363e0ef1e\" (UID: \"78dc6815-3202-4aea-99b0-905363e0ef1e\") " Jan 27 09:12:59 crc kubenswrapper[4985]: I0127 09:12:59.758500 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/78dc6815-3202-4aea-99b0-905363e0ef1e-scripts\") pod \"78dc6815-3202-4aea-99b0-905363e0ef1e\" (UID: \"78dc6815-3202-4aea-99b0-905363e0ef1e\") " Jan 27 09:12:59 crc kubenswrapper[4985]: I0127 09:12:59.758588 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/78dc6815-3202-4aea-99b0-905363e0ef1e-config-data\") pod \"78dc6815-3202-4aea-99b0-905363e0ef1e\" (UID: \"78dc6815-3202-4aea-99b0-905363e0ef1e\") " Jan 27 09:12:59 crc kubenswrapper[4985]: I0127 09:12:59.758621 4985 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/78dc6815-3202-4aea-99b0-905363e0ef1e-horizon-secret-key\") pod \"78dc6815-3202-4aea-99b0-905363e0ef1e\" (UID: \"78dc6815-3202-4aea-99b0-905363e0ef1e\") " Jan 27 09:12:59 crc kubenswrapper[4985]: I0127 09:12:59.759097 4985 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1a55e00-a92c-468e-b440-72254c05314e-logs\") on node \"crc\" DevicePath \"\"" Jan 27 09:12:59 crc kubenswrapper[4985]: I0127 09:12:59.759112 4985 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14f1ccc0-f3da-4c1b-b0cd-b0c3f1b4f363-logs\") on node \"crc\" DevicePath \"\"" Jan 27 09:12:59 crc kubenswrapper[4985]: I0127 09:12:59.759122 4985 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/14f1ccc0-f3da-4c1b-b0cd-b0c3f1b4f363-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 27 09:12:59 crc kubenswrapper[4985]: I0127 09:12:59.759132 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sdb78\" (UniqueName: \"kubernetes.io/projected/14f1ccc0-f3da-4c1b-b0cd-b0c3f1b4f363-kube-api-access-sdb78\") on node \"crc\" DevicePath \"\"" Jan 27 09:12:59 crc kubenswrapper[4985]: I0127 09:12:59.759141 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6sctm\" (UniqueName: \"kubernetes.io/projected/c1a55e00-a92c-468e-b440-72254c05314e-kube-api-access-6sctm\") on node \"crc\" DevicePath \"\"" Jan 27 09:12:59 crc kubenswrapper[4985]: I0127 09:12:59.759150 4985 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c1a55e00-a92c-468e-b440-72254c05314e-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 27 09:12:59 crc kubenswrapper[4985]: I0127 09:12:59.759470 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/empty-dir/78dc6815-3202-4aea-99b0-905363e0ef1e-logs" (OuterVolumeSpecName: "logs") pod "78dc6815-3202-4aea-99b0-905363e0ef1e" (UID: "78dc6815-3202-4aea-99b0-905363e0ef1e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:12:59 crc kubenswrapper[4985]: I0127 09:12:59.775587 4985 scope.go:117] "RemoveContainer" containerID="5ceefe37ab151df25a5444acedc002b6955d7c6aec51b2bcb96ed2bf89041d47" Jan 27 09:12:59 crc kubenswrapper[4985]: I0127 09:12:59.790697 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78dc6815-3202-4aea-99b0-905363e0ef1e-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "78dc6815-3202-4aea-99b0-905363e0ef1e" (UID: "78dc6815-3202-4aea-99b0-905363e0ef1e"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:12:59 crc kubenswrapper[4985]: I0127 09:12:59.790905 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78dc6815-3202-4aea-99b0-905363e0ef1e-kube-api-access-qz6tf" (OuterVolumeSpecName: "kube-api-access-qz6tf") pod "78dc6815-3202-4aea-99b0-905363e0ef1e" (UID: "78dc6815-3202-4aea-99b0-905363e0ef1e"). InnerVolumeSpecName "kube-api-access-qz6tf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:12:59 crc kubenswrapper[4985]: I0127 09:12:59.855966 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-594666745c-h8zcv" Jan 27 09:12:59 crc kubenswrapper[4985]: I0127 09:12:59.860993 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qz6tf\" (UniqueName: \"kubernetes.io/projected/78dc6815-3202-4aea-99b0-905363e0ef1e-kube-api-access-qz6tf\") on node \"crc\" DevicePath \"\"" Jan 27 09:12:59 crc kubenswrapper[4985]: I0127 09:12:59.861033 4985 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78dc6815-3202-4aea-99b0-905363e0ef1e-logs\") on node \"crc\" DevicePath \"\"" Jan 27 09:12:59 crc kubenswrapper[4985]: I0127 09:12:59.861044 4985 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/78dc6815-3202-4aea-99b0-905363e0ef1e-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 27 09:12:59 crc kubenswrapper[4985]: I0127 09:12:59.963163 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbe6009e-a66b-4082-b535-ec263c9e3d1a-combined-ca-bundle\") pod \"fbe6009e-a66b-4082-b535-ec263c9e3d1a\" (UID: \"fbe6009e-a66b-4082-b535-ec263c9e3d1a\") " Jan 27 09:12:59 crc kubenswrapper[4985]: I0127 09:12:59.963364 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ds2h\" (UniqueName: \"kubernetes.io/projected/fbe6009e-a66b-4082-b535-ec263c9e3d1a-kube-api-access-4ds2h\") pod \"fbe6009e-a66b-4082-b535-ec263c9e3d1a\" (UID: \"fbe6009e-a66b-4082-b535-ec263c9e3d1a\") " Jan 27 09:12:59 crc kubenswrapper[4985]: I0127 09:12:59.963502 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fbe6009e-a66b-4082-b535-ec263c9e3d1a-config\") pod \"fbe6009e-a66b-4082-b535-ec263c9e3d1a\" (UID: \"fbe6009e-a66b-4082-b535-ec263c9e3d1a\") " Jan 27 09:12:59 crc kubenswrapper[4985]: I0127 09:12:59.963608 4985 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fbe6009e-a66b-4082-b535-ec263c9e3d1a-public-tls-certs\") pod \"fbe6009e-a66b-4082-b535-ec263c9e3d1a\" (UID: \"fbe6009e-a66b-4082-b535-ec263c9e3d1a\") " Jan 27 09:12:59 crc kubenswrapper[4985]: I0127 09:12:59.963648 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fbe6009e-a66b-4082-b535-ec263c9e3d1a-internal-tls-certs\") pod \"fbe6009e-a66b-4082-b535-ec263c9e3d1a\" (UID: \"fbe6009e-a66b-4082-b535-ec263c9e3d1a\") " Jan 27 09:12:59 crc kubenswrapper[4985]: I0127 09:12:59.963707 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fbe6009e-a66b-4082-b535-ec263c9e3d1a-ovndb-tls-certs\") pod \"fbe6009e-a66b-4082-b535-ec263c9e3d1a\" (UID: \"fbe6009e-a66b-4082-b535-ec263c9e3d1a\") " Jan 27 09:12:59 crc kubenswrapper[4985]: I0127 09:12:59.963752 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/fbe6009e-a66b-4082-b535-ec263c9e3d1a-httpd-config\") pod \"fbe6009e-a66b-4082-b535-ec263c9e3d1a\" (UID: \"fbe6009e-a66b-4082-b535-ec263c9e3d1a\") " Jan 27 09:12:59 crc kubenswrapper[4985]: I0127 09:12:59.990913 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbe6009e-a66b-4082-b535-ec263c9e3d1a-kube-api-access-4ds2h" (OuterVolumeSpecName: "kube-api-access-4ds2h") pod "fbe6009e-a66b-4082-b535-ec263c9e3d1a" (UID: "fbe6009e-a66b-4082-b535-ec263c9e3d1a"). InnerVolumeSpecName "kube-api-access-4ds2h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:12:59 crc kubenswrapper[4985]: I0127 09:12:59.991051 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbe6009e-a66b-4082-b535-ec263c9e3d1a-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "fbe6009e-a66b-4082-b535-ec263c9e3d1a" (UID: "fbe6009e-a66b-4082-b535-ec263c9e3d1a"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.066559 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4ds2h\" (UniqueName: \"kubernetes.io/projected/fbe6009e-a66b-4082-b535-ec263c9e3d1a-kube-api-access-4ds2h\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.066591 4985 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/fbe6009e-a66b-4082-b535-ec263c9e3d1a-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.070179 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14f1ccc0-f3da-4c1b-b0cd-b0c3f1b4f363-config-data" (OuterVolumeSpecName: "config-data") pod "14f1ccc0-f3da-4c1b-b0cd-b0c3f1b4f363" (UID: "14f1ccc0-f3da-4c1b-b0cd-b0c3f1b4f363"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.089216 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143" (UID: "c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.091771 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1a55e00-a92c-468e-b440-72254c05314e-scripts" (OuterVolumeSpecName: "scripts") pod "c1a55e00-a92c-468e-b440-72254c05314e" (UID: "c1a55e00-a92c-468e-b440-72254c05314e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.095616 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78dc6815-3202-4aea-99b0-905363e0ef1e-scripts" (OuterVolumeSpecName: "scripts") pod "78dc6815-3202-4aea-99b0-905363e0ef1e" (UID: "78dc6815-3202-4aea-99b0-905363e0ef1e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.097148 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14f1ccc0-f3da-4c1b-b0cd-b0c3f1b4f363-scripts" (OuterVolumeSpecName: "scripts") pod "14f1ccc0-f3da-4c1b-b0cd-b0c3f1b4f363" (UID: "14f1ccc0-f3da-4c1b-b0cd-b0c3f1b4f363"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.102386 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1a55e00-a92c-468e-b440-72254c05314e-config-data" (OuterVolumeSpecName: "config-data") pod "c1a55e00-a92c-468e-b440-72254c05314e" (UID: "c1a55e00-a92c-468e-b440-72254c05314e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.108147 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78dc6815-3202-4aea-99b0-905363e0ef1e-config-data" (OuterVolumeSpecName: "config-data") pod "78dc6815-3202-4aea-99b0-905363e0ef1e" (UID: "78dc6815-3202-4aea-99b0-905363e0ef1e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.132701 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbe6009e-a66b-4082-b535-ec263c9e3d1a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fbe6009e-a66b-4082-b535-ec263c9e3d1a" (UID: "fbe6009e-a66b-4082-b535-ec263c9e3d1a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.137687 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbe6009e-a66b-4082-b535-ec263c9e3d1a-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "fbe6009e-a66b-4082-b535-ec263c9e3d1a" (UID: "fbe6009e-a66b-4082-b535-ec263c9e3d1a"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.153805 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143-config-data" (OuterVolumeSpecName: "config-data") pod "c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143" (UID: "c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.167733 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbe6009e-a66b-4082-b535-ec263c9e3d1a-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "fbe6009e-a66b-4082-b535-ec263c9e3d1a" (UID: "fbe6009e-a66b-4082-b535-ec263c9e3d1a"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.169745 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbe6009e-a66b-4082-b535-ec263c9e3d1a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.170666 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/14f1ccc0-f3da-4c1b-b0cd-b0c3f1b4f363-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.170681 4985 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/78dc6815-3202-4aea-99b0-905363e0ef1e-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.170693 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.170707 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/78dc6815-3202-4aea-99b0-905363e0ef1e-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.170717 4985 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/14f1ccc0-f3da-4c1b-b0cd-b0c3f1b4f363-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.170726 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c1a55e00-a92c-468e-b440-72254c05314e-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.170736 4985 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c1a55e00-a92c-468e-b440-72254c05314e-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.170746 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.170757 4985 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fbe6009e-a66b-4082-b535-ec263c9e3d1a-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.181192 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbe6009e-a66b-4082-b535-ec263c9e3d1a-config" (OuterVolumeSpecName: "config") pod "fbe6009e-a66b-4082-b535-ec263c9e3d1a" (UID: "fbe6009e-a66b-4082-b535-ec263c9e3d1a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.197294 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbe6009e-a66b-4082-b535-ec263c9e3d1a-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "fbe6009e-a66b-4082-b535-ec263c9e3d1a" (UID: "fbe6009e-a66b-4082-b535-ec263c9e3d1a"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.272350 4985 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fbe6009e-a66b-4082-b535-ec263c9e3d1a-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.272388 4985 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fbe6009e-a66b-4082-b535-ec263c9e3d1a-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.272398 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/fbe6009e-a66b-4082-b535-ec263c9e3d1a-config\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.355567 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.365172 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.392956 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 27 09:13:00 crc kubenswrapper[4985]: E0127 09:13:00.393305 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78dc6815-3202-4aea-99b0-905363e0ef1e" containerName="horizon" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.393321 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="78dc6815-3202-4aea-99b0-905363e0ef1e" containerName="horizon" Jan 27 09:13:00 crc kubenswrapper[4985]: E0127 09:13:00.393334 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143" containerName="cinder-api-log" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.393342 4985 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143" containerName="cinder-api-log" Jan 27 09:13:00 crc kubenswrapper[4985]: E0127 09:13:00.393353 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61437724-d73d-4fe5-afbc-b4994d1eda63" containerName="sg-core" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.393361 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="61437724-d73d-4fe5-afbc-b4994d1eda63" containerName="sg-core" Jan 27 09:13:00 crc kubenswrapper[4985]: E0127 09:13:00.393371 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1a55e00-a92c-468e-b440-72254c05314e" containerName="horizon" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.393377 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1a55e00-a92c-468e-b440-72254c05314e" containerName="horizon" Jan 27 09:13:00 crc kubenswrapper[4985]: E0127 09:13:00.393390 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14f1ccc0-f3da-4c1b-b0cd-b0c3f1b4f363" containerName="horizon" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.393397 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="14f1ccc0-f3da-4c1b-b0cd-b0c3f1b4f363" containerName="horizon" Jan 27 09:13:00 crc kubenswrapper[4985]: E0127 09:13:00.393406 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61437724-d73d-4fe5-afbc-b4994d1eda63" containerName="ceilometer-notification-agent" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.393411 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="61437724-d73d-4fe5-afbc-b4994d1eda63" containerName="ceilometer-notification-agent" Jan 27 09:13:00 crc kubenswrapper[4985]: E0127 09:13:00.393420 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61437724-d73d-4fe5-afbc-b4994d1eda63" containerName="proxy-httpd" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.393425 4985 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="61437724-d73d-4fe5-afbc-b4994d1eda63" containerName="proxy-httpd" Jan 27 09:13:00 crc kubenswrapper[4985]: E0127 09:13:00.393437 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbe6009e-a66b-4082-b535-ec263c9e3d1a" containerName="neutron-httpd" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.393442 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbe6009e-a66b-4082-b535-ec263c9e3d1a" containerName="neutron-httpd" Jan 27 09:13:00 crc kubenswrapper[4985]: E0127 09:13:00.393452 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143" containerName="cinder-api" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.393460 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143" containerName="cinder-api" Jan 27 09:13:00 crc kubenswrapper[4985]: E0127 09:13:00.393467 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbe6009e-a66b-4082-b535-ec263c9e3d1a" containerName="neutron-api" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.393472 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbe6009e-a66b-4082-b535-ec263c9e3d1a" containerName="neutron-api" Jan 27 09:13:00 crc kubenswrapper[4985]: E0127 09:13:00.393486 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1a55e00-a92c-468e-b440-72254c05314e" containerName="horizon-log" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.393491 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1a55e00-a92c-468e-b440-72254c05314e" containerName="horizon-log" Jan 27 09:13:00 crc kubenswrapper[4985]: E0127 09:13:00.393501 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14f1ccc0-f3da-4c1b-b0cd-b0c3f1b4f363" containerName="horizon-log" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.393523 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="14f1ccc0-f3da-4c1b-b0cd-b0c3f1b4f363" 
containerName="horizon-log" Jan 27 09:13:00 crc kubenswrapper[4985]: E0127 09:13:00.393537 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78dc6815-3202-4aea-99b0-905363e0ef1e" containerName="horizon-log" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.393543 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="78dc6815-3202-4aea-99b0-905363e0ef1e" containerName="horizon-log" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.393707 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbe6009e-a66b-4082-b535-ec263c9e3d1a" containerName="neutron-api" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.393721 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbe6009e-a66b-4082-b535-ec263c9e3d1a" containerName="neutron-httpd" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.393731 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1a55e00-a92c-468e-b440-72254c05314e" containerName="horizon-log" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.393762 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143" containerName="cinder-api-log" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.393770 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143" containerName="cinder-api" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.393778 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="61437724-d73d-4fe5-afbc-b4994d1eda63" containerName="ceilometer-notification-agent" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.393790 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="61437724-d73d-4fe5-afbc-b4994d1eda63" containerName="sg-core" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.393799 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="14f1ccc0-f3da-4c1b-b0cd-b0c3f1b4f363" 
containerName="horizon" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.393811 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1a55e00-a92c-468e-b440-72254c05314e" containerName="horizon" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.393819 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="61437724-d73d-4fe5-afbc-b4994d1eda63" containerName="proxy-httpd" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.393826 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="78dc6815-3202-4aea-99b0-905363e0ef1e" containerName="horizon-log" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.393836 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="78dc6815-3202-4aea-99b0-905363e0ef1e" containerName="horizon" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.393846 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="14f1ccc0-f3da-4c1b-b0cd-b0c3f1b4f363" containerName="horizon-log" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.404882 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.406767 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.414068 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.414352 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.414525 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.468415 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143" path="/var/lib/kubelet/pods/c5c1c7fe-5c2e-4ee6-a5e9-f38e2c05d143/volumes" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.576718 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e4b0e68-ecc6-41aa-975a-14094de6ae67-public-tls-certs\") pod \"cinder-api-0\" (UID: \"6e4b0e68-ecc6-41aa-975a-14094de6ae67\") " pod="openstack/cinder-api-0" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.576776 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e4b0e68-ecc6-41aa-975a-14094de6ae67-scripts\") pod \"cinder-api-0\" (UID: \"6e4b0e68-ecc6-41aa-975a-14094de6ae67\") " pod="openstack/cinder-api-0" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.576833 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e4b0e68-ecc6-41aa-975a-14094de6ae67-combined-ca-bundle\") pod \"cinder-api-0\" (UID: 
\"6e4b0e68-ecc6-41aa-975a-14094de6ae67\") " pod="openstack/cinder-api-0" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.577001 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e4b0e68-ecc6-41aa-975a-14094de6ae67-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"6e4b0e68-ecc6-41aa-975a-14094de6ae67\") " pod="openstack/cinder-api-0" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.577080 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6e4b0e68-ecc6-41aa-975a-14094de6ae67-config-data-custom\") pod \"cinder-api-0\" (UID: \"6e4b0e68-ecc6-41aa-975a-14094de6ae67\") " pod="openstack/cinder-api-0" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.577114 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6e4b0e68-ecc6-41aa-975a-14094de6ae67-etc-machine-id\") pod \"cinder-api-0\" (UID: \"6e4b0e68-ecc6-41aa-975a-14094de6ae67\") " pod="openstack/cinder-api-0" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.577151 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sntkb\" (UniqueName: \"kubernetes.io/projected/6e4b0e68-ecc6-41aa-975a-14094de6ae67-kube-api-access-sntkb\") pod \"cinder-api-0\" (UID: \"6e4b0e68-ecc6-41aa-975a-14094de6ae67\") " pod="openstack/cinder-api-0" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.577269 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e4b0e68-ecc6-41aa-975a-14094de6ae67-config-data\") pod \"cinder-api-0\" (UID: \"6e4b0e68-ecc6-41aa-975a-14094de6ae67\") " pod="openstack/cinder-api-0" Jan 27 09:13:00 crc 
kubenswrapper[4985]: I0127 09:13:00.577306 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e4b0e68-ecc6-41aa-975a-14094de6ae67-logs\") pod \"cinder-api-0\" (UID: \"6e4b0e68-ecc6-41aa-975a-14094de6ae67\") " pod="openstack/cinder-api-0" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.642178 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75dbb546bf-9qv57" event={"ID":"02c25b4e-dee3-4466-9d56-f74c18a36ba5","Type":"ContainerStarted","Data":"196e80eb434f8e0cbf2de893f3de47726322d8c1e695e85a7ed4f23e64065a81"} Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.642998 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-75dbb546bf-9qv57" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.647247 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"29c473e3-7062-4725-a515-928807284b8d","Type":"ContainerStarted","Data":"fd6b6f8407d47a0b98ed3d3cc2e4c203b18d933a588c26242b43f27e27202fdc"} Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.649461 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-d4789b966-88v9q" event={"ID":"cce884fa-873f-4a46-9caa-b8f88720db78","Type":"ContainerStarted","Data":"856ba4aeedb6e76a2132db42cdbb6a15e54145f06dd72ef137282857e68f28d3"} Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.649610 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-d4789b966-88v9q" event={"ID":"cce884fa-873f-4a46-9caa-b8f88720db78","Type":"ContainerStarted","Data":"98b4020986d2865733874c979c1de64770b25c9e9ac453f22ec06ab8b8c43c9e"} Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.650309 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-d4789b966-88v9q" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 
09:13:00.650549 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-d4789b966-88v9q" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.653096 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7b565c4-5dhmn" event={"ID":"7c55baf3-752e-40a7-acdd-d26df561bf9c","Type":"ContainerStarted","Data":"6ae396bbef9791a483b51f2954792792029ca9aac9d07fc90781cf0d3b35165e"} Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.657001 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7b565c4-5dhmn" event={"ID":"7c55baf3-752e-40a7-acdd-d26df561bf9c","Type":"ContainerStarted","Data":"9061df571495be79876a29751e086f7d0288618c89245c3a4e8941cfa84b1253"} Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.661382 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7645cd55cc-6b9mt" event={"ID":"b0982e77-fbf8-4db6-a5b4-359ec47691b4","Type":"ContainerStarted","Data":"a316fbc33ca53f7064ef55db8e11b28a8f8c1807656d0b022ceca73db35dd0cd"} Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.661428 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7645cd55cc-6b9mt" event={"ID":"b0982e77-fbf8-4db6-a5b4-359ec47691b4","Type":"ContainerStarted","Data":"881926c884e4b59a8470adc6f282fe3b29f997f0e06358519d2b0f597fa15f12"} Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.670104 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-75dbb546bf-9qv57" podStartSLOduration=9.670083081 podStartE2EDuration="9.670083081s" podCreationTimestamp="2026-01-27 09:12:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:13:00.659666695 +0000 UTC m=+1164.950761536" watchObservedRunningTime="2026-01-27 09:13:00.670083081 +0000 UTC m=+1164.961177922" Jan 27 09:13:00 crc 
kubenswrapper[4985]: I0127 09:13:00.671983 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5bf9c57989-7kxf6" event={"ID":"fd7d78ce-005f-4c67-9204-5030a19420e2","Type":"ContainerStarted","Data":"856684a913bcd437de4dc925fa7f23219e2a55a1e0d5b8265d43c52ba0a66e9b"} Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.672043 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5bf9c57989-7kxf6" event={"ID":"fd7d78ce-005f-4c67-9204-5030a19420e2","Type":"ContainerStarted","Data":"ad45e248b765e63b3b93c4a7ad2b343fc62733757eb3b4156c46e63d7d01eb12"} Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.680308 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e4b0e68-ecc6-41aa-975a-14094de6ae67-config-data\") pod \"cinder-api-0\" (UID: \"6e4b0e68-ecc6-41aa-975a-14094de6ae67\") " pod="openstack/cinder-api-0" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.680354 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e4b0e68-ecc6-41aa-975a-14094de6ae67-logs\") pod \"cinder-api-0\" (UID: \"6e4b0e68-ecc6-41aa-975a-14094de6ae67\") " pod="openstack/cinder-api-0" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.680372 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e4b0e68-ecc6-41aa-975a-14094de6ae67-public-tls-certs\") pod \"cinder-api-0\" (UID: \"6e4b0e68-ecc6-41aa-975a-14094de6ae67\") " pod="openstack/cinder-api-0" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.680385 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e4b0e68-ecc6-41aa-975a-14094de6ae67-scripts\") pod \"cinder-api-0\" (UID: \"6e4b0e68-ecc6-41aa-975a-14094de6ae67\") " 
pod="openstack/cinder-api-0" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.680407 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e4b0e68-ecc6-41aa-975a-14094de6ae67-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"6e4b0e68-ecc6-41aa-975a-14094de6ae67\") " pod="openstack/cinder-api-0" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.680454 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e4b0e68-ecc6-41aa-975a-14094de6ae67-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"6e4b0e68-ecc6-41aa-975a-14094de6ae67\") " pod="openstack/cinder-api-0" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.680497 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6e4b0e68-ecc6-41aa-975a-14094de6ae67-config-data-custom\") pod \"cinder-api-0\" (UID: \"6e4b0e68-ecc6-41aa-975a-14094de6ae67\") " pod="openstack/cinder-api-0" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.681422 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e4b0e68-ecc6-41aa-975a-14094de6ae67-logs\") pod \"cinder-api-0\" (UID: \"6e4b0e68-ecc6-41aa-975a-14094de6ae67\") " pod="openstack/cinder-api-0" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.681608 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6e4b0e68-ecc6-41aa-975a-14094de6ae67-etc-machine-id\") pod \"cinder-api-0\" (UID: \"6e4b0e68-ecc6-41aa-975a-14094de6ae67\") " pod="openstack/cinder-api-0" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.681643 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/6e4b0e68-ecc6-41aa-975a-14094de6ae67-etc-machine-id\") pod \"cinder-api-0\" (UID: \"6e4b0e68-ecc6-41aa-975a-14094de6ae67\") " pod="openstack/cinder-api-0" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.682473 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-77f8b4b57c-5gfx6" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.683066 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-594666745c-h8zcv" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.683436 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-594666745c-h8zcv" event={"ID":"fbe6009e-a66b-4082-b535-ec263c9e3d1a","Type":"ContainerDied","Data":"394ae4bea110de26a400aae6cff692aebdf9a580a6d0be99b158d72b3a809e63"} Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.683476 4985 scope.go:117] "RemoveContainer" containerID="1f9abe6c7d00c64798e599037d08ec3a3c7b58e57504d0941ea25190bdda50ec" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.683585 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-779ccf4965-4dzg4" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.684340 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e4b0e68-ecc6-41aa-975a-14094de6ae67-public-tls-certs\") pod \"cinder-api-0\" (UID: \"6e4b0e68-ecc6-41aa-975a-14094de6ae67\") " pod="openstack/cinder-api-0" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.685343 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-8bc58698f-rrrdv" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.685471 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e4b0e68-ecc6-41aa-975a-14094de6ae67-config-data\") pod \"cinder-api-0\" (UID: \"6e4b0e68-ecc6-41aa-975a-14094de6ae67\") " pod="openstack/cinder-api-0" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.686269 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sntkb\" (UniqueName: \"kubernetes.io/projected/6e4b0e68-ecc6-41aa-975a-14094de6ae67-kube-api-access-sntkb\") pod \"cinder-api-0\" (UID: \"6e4b0e68-ecc6-41aa-975a-14094de6ae67\") " pod="openstack/cinder-api-0" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.699238 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e4b0e68-ecc6-41aa-975a-14094de6ae67-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"6e4b0e68-ecc6-41aa-975a-14094de6ae67\") " pod="openstack/cinder-api-0" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.700115 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e4b0e68-ecc6-41aa-975a-14094de6ae67-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"6e4b0e68-ecc6-41aa-975a-14094de6ae67\") " pod="openstack/cinder-api-0" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.702182 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6e4b0e68-ecc6-41aa-975a-14094de6ae67-config-data-custom\") pod \"cinder-api-0\" (UID: \"6e4b0e68-ecc6-41aa-975a-14094de6ae67\") " pod="openstack/cinder-api-0" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.704674 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/6e4b0e68-ecc6-41aa-975a-14094de6ae67-scripts\") pod \"cinder-api-0\" (UID: \"6e4b0e68-ecc6-41aa-975a-14094de6ae67\") " pod="openstack/cinder-api-0" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.705746 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sntkb\" (UniqueName: \"kubernetes.io/projected/6e4b0e68-ecc6-41aa-975a-14094de6ae67-kube-api-access-sntkb\") pod \"cinder-api-0\" (UID: \"6e4b0e68-ecc6-41aa-975a-14094de6ae67\") " pod="openstack/cinder-api-0" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.729801 4985 scope.go:117] "RemoveContainer" containerID="cb462cfeaf054d29c4bb94b99be2d82b0753232e9c71dcb487a4b09f42c87209" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.737558 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-d4789b966-88v9q" podStartSLOduration=4.737535631 podStartE2EDuration="4.737535631s" podCreationTimestamp="2026-01-27 09:12:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:13:00.699627961 +0000 UTC m=+1164.990722802" watchObservedRunningTime="2026-01-27 09:13:00.737535631 +0000 UTC m=+1165.028630472" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.737878 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.751048 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.752792 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="29c473e3-7062-4725-a515-928807284b8d" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.0.160:8080/\": dial tcp 10.217.0.160:8080: connect: connection refused" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.754967 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-7b565c4-5dhmn" podStartSLOduration=4.245618381 podStartE2EDuration="9.752643525s" podCreationTimestamp="2026-01-27 09:12:51 +0000 UTC" firstStartedPulling="2026-01-27 09:12:52.858623996 +0000 UTC m=+1157.149718837" lastFinishedPulling="2026-01-27 09:12:58.36564914 +0000 UTC m=+1162.656743981" observedRunningTime="2026-01-27 09:13:00.71419022 +0000 UTC m=+1165.005285061" watchObservedRunningTime="2026-01-27 09:13:00.752643525 +0000 UTC m=+1165.043738366" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.766829 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=9.631360863 podStartE2EDuration="10.766810413s" podCreationTimestamp="2026-01-27 09:12:50 +0000 UTC" firstStartedPulling="2026-01-27 09:12:52.272774209 +0000 UTC m=+1156.563869050" lastFinishedPulling="2026-01-27 09:12:53.408223759 +0000 UTC m=+1157.699318600" observedRunningTime="2026-01-27 09:13:00.740704347 +0000 UTC m=+1165.031799188" watchObservedRunningTime="2026-01-27 09:13:00.766810413 +0000 UTC m=+1165.057905254" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.786215 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-77f8b4b57c-5gfx6"] Jan 
27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.795872 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-77f8b4b57c-5gfx6"] Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.808152 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-594666745c-h8zcv"] Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.829774 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-594666745c-h8zcv"] Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.830009 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-5bf9c57989-7kxf6" podStartSLOduration=4.166194932 podStartE2EDuration="9.829986966s" podCreationTimestamp="2026-01-27 09:12:51 +0000 UTC" firstStartedPulling="2026-01-27 09:12:52.701069735 +0000 UTC m=+1156.992164576" lastFinishedPulling="2026-01-27 09:12:58.364861769 +0000 UTC m=+1162.655956610" observedRunningTime="2026-01-27 09:13:00.801420822 +0000 UTC m=+1165.092515663" watchObservedRunningTime="2026-01-27 09:13:00.829986966 +0000 UTC m=+1165.121081807" Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.877868 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-8bc58698f-rrrdv"] Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.893987 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-8bc58698f-rrrdv"] Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.903371 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-779ccf4965-4dzg4"] Jan 27 09:13:00 crc kubenswrapper[4985]: I0127 09:13:00.919653 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-779ccf4965-4dzg4"] Jan 27 09:13:01 crc kubenswrapper[4985]: I0127 09:13:01.399787 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 27 09:13:01 crc kubenswrapper[4985]: I0127 09:13:01.717248 4985 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/cinder-api-0" event={"ID":"6e4b0e68-ecc6-41aa-975a-14094de6ae67","Type":"ContainerStarted","Data":"5f5d3b8ec8f5cd5145026a0fdaa1863f7a611d61e5d88eeb6e65faf97d495136"} Jan 27 09:13:01 crc kubenswrapper[4985]: I0127 09:13:01.720126 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7645cd55cc-6b9mt" event={"ID":"b0982e77-fbf8-4db6-a5b4-359ec47691b4","Type":"ContainerStarted","Data":"8c3dcd1d0d5d358abcefbba501099238fc14669e39da1c4c65fba92cc89f8fbb"} Jan 27 09:13:01 crc kubenswrapper[4985]: I0127 09:13:01.720598 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7645cd55cc-6b9mt" Jan 27 09:13:02 crc kubenswrapper[4985]: I0127 09:13:02.472964 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14f1ccc0-f3da-4c1b-b0cd-b0c3f1b4f363" path="/var/lib/kubelet/pods/14f1ccc0-f3da-4c1b-b0cd-b0c3f1b4f363/volumes" Jan 27 09:13:02 crc kubenswrapper[4985]: I0127 09:13:02.474245 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78dc6815-3202-4aea-99b0-905363e0ef1e" path="/var/lib/kubelet/pods/78dc6815-3202-4aea-99b0-905363e0ef1e/volumes" Jan 27 09:13:02 crc kubenswrapper[4985]: I0127 09:13:02.474930 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1a55e00-a92c-468e-b440-72254c05314e" path="/var/lib/kubelet/pods/c1a55e00-a92c-468e-b440-72254c05314e/volumes" Jan 27 09:13:02 crc kubenswrapper[4985]: I0127 09:13:02.476170 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fbe6009e-a66b-4082-b535-ec263c9e3d1a" path="/var/lib/kubelet/pods/fbe6009e-a66b-4082-b535-ec263c9e3d1a/volumes" Jan 27 09:13:02 crc kubenswrapper[4985]: I0127 09:13:02.771936 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6e4b0e68-ecc6-41aa-975a-14094de6ae67","Type":"ContainerStarted","Data":"3785fa4c19f792d84bdf68f8b449303cf0a06c585e4df49c6c93d0c1426c9a2f"} Jan 27 09:13:03 crc 
kubenswrapper[4985]: I0127 09:13:03.780113 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6e4b0e68-ecc6-41aa-975a-14094de6ae67","Type":"ContainerStarted","Data":"f25036f1c12cfbbb71d0c5c09929bd145d1768bc8cb141d760d5d6611b88d36e"} Jan 27 09:13:03 crc kubenswrapper[4985]: I0127 09:13:03.780547 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 27 09:13:03 crc kubenswrapper[4985]: I0127 09:13:03.799182 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.799163038 podStartE2EDuration="3.799163038s" podCreationTimestamp="2026-01-27 09:13:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:13:03.796723021 +0000 UTC m=+1168.087817862" watchObservedRunningTime="2026-01-27 09:13:03.799163038 +0000 UTC m=+1168.090257879" Jan 27 09:13:03 crc kubenswrapper[4985]: I0127 09:13:03.807867 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7645cd55cc-6b9mt" podStartSLOduration=7.807845416 podStartE2EDuration="7.807845416s" podCreationTimestamp="2026-01-27 09:12:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:13:01.750319787 +0000 UTC m=+1166.041414628" watchObservedRunningTime="2026-01-27 09:13:03.807845416 +0000 UTC m=+1168.098940257" Jan 27 09:13:04 crc kubenswrapper[4985]: I0127 09:13:04.067307 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-774ff5bf6d-xl8xr" Jan 27 09:13:04 crc kubenswrapper[4985]: I0127 09:13:04.117779 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-774ff5bf6d-xl8xr" Jan 27 09:13:04 crc kubenswrapper[4985]: I0127 09:13:04.927176 4985 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-69b99cb974-fzls4" Jan 27 09:13:04 crc kubenswrapper[4985]: I0127 09:13:04.985166 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-5c57bbbf74-nrsd9" Jan 27 09:13:05 crc kubenswrapper[4985]: I0127 09:13:05.968703 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 27 09:13:06 crc kubenswrapper[4985]: I0127 09:13:06.037912 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 09:13:06 crc kubenswrapper[4985]: I0127 09:13:06.657720 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-69b99cb974-fzls4" Jan 27 09:13:06 crc kubenswrapper[4985]: I0127 09:13:06.721870 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5c57bbbf74-nrsd9"] Jan 27 09:13:06 crc kubenswrapper[4985]: I0127 09:13:06.722805 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5c57bbbf74-nrsd9" podUID="5fbbc8b9-e978-4565-9d19-bd139f2c4df7" containerName="horizon-log" containerID="cri-o://e16ecc5391723ec866b22379c3eff871778d1029f7535362b6bf0ab919a57d0c" gracePeriod=30 Jan 27 09:13:06 crc kubenswrapper[4985]: I0127 09:13:06.722919 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5c57bbbf74-nrsd9" podUID="5fbbc8b9-e978-4565-9d19-bd139f2c4df7" containerName="horizon" containerID="cri-o://f685f5d57bf90797e6960a0da540e2156808e3702d029e5231792e91efc492ec" gracePeriod=30 Jan 27 09:13:06 crc kubenswrapper[4985]: I0127 09:13:06.737742 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5c57bbbf74-nrsd9" podUID="5fbbc8b9-e978-4565-9d19-bd139f2c4df7" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": EOF" 
Jan 27 09:13:06 crc kubenswrapper[4985]: I0127 09:13:06.810883 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="29c473e3-7062-4725-a515-928807284b8d" containerName="cinder-scheduler" containerID="cri-o://653f78ccd9185d7dbc7fea059a2bb640b42724493b0cfbb8599796f081a12cf2" gracePeriod=30 Jan 27 09:13:06 crc kubenswrapper[4985]: I0127 09:13:06.811454 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="29c473e3-7062-4725-a515-928807284b8d" containerName="probe" containerID="cri-o://fd6b6f8407d47a0b98ed3d3cc2e4c203b18d933a588c26242b43f27e27202fdc" gracePeriod=30 Jan 27 09:13:07 crc kubenswrapper[4985]: I0127 09:13:07.156867 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-75dbb546bf-9qv57" Jan 27 09:13:07 crc kubenswrapper[4985]: I0127 09:13:07.225089 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-685444497c-q8nbf"] Jan 27 09:13:07 crc kubenswrapper[4985]: I0127 09:13:07.225502 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-685444497c-q8nbf" podUID="635ffd32-9e1e-48a9-8560-36e92db872ee" containerName="dnsmasq-dns" containerID="cri-o://abb1d2c7a620858ee2962d43c7b3c3f76d60afdf1e8c844a1749ce7974e81420" gracePeriod=10 Jan 27 09:13:07 crc kubenswrapper[4985]: I0127 09:13:07.836719 4985 generic.go:334] "Generic (PLEG): container finished" podID="635ffd32-9e1e-48a9-8560-36e92db872ee" containerID="abb1d2c7a620858ee2962d43c7b3c3f76d60afdf1e8c844a1749ce7974e81420" exitCode=0 Jan 27 09:13:07 crc kubenswrapper[4985]: I0127 09:13:07.837050 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-685444497c-q8nbf" event={"ID":"635ffd32-9e1e-48a9-8560-36e92db872ee","Type":"ContainerDied","Data":"abb1d2c7a620858ee2962d43c7b3c3f76d60afdf1e8c844a1749ce7974e81420"} Jan 27 09:13:07 crc 
kubenswrapper[4985]: I0127 09:13:07.837078 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-685444497c-q8nbf" event={"ID":"635ffd32-9e1e-48a9-8560-36e92db872ee","Type":"ContainerDied","Data":"114492e3d73eb17545a9aed34cb3ad394dcee17203618bb4bb1dcf2bf39f57d9"} Jan 27 09:13:07 crc kubenswrapper[4985]: I0127 09:13:07.837089 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="114492e3d73eb17545a9aed34cb3ad394dcee17203618bb4bb1dcf2bf39f57d9" Jan 27 09:13:07 crc kubenswrapper[4985]: I0127 09:13:07.840175 4985 generic.go:334] "Generic (PLEG): container finished" podID="29c473e3-7062-4725-a515-928807284b8d" containerID="fd6b6f8407d47a0b98ed3d3cc2e4c203b18d933a588c26242b43f27e27202fdc" exitCode=0 Jan 27 09:13:07 crc kubenswrapper[4985]: I0127 09:13:07.840207 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"29c473e3-7062-4725-a515-928807284b8d","Type":"ContainerDied","Data":"fd6b6f8407d47a0b98ed3d3cc2e4c203b18d933a588c26242b43f27e27202fdc"} Jan 27 09:13:07 crc kubenswrapper[4985]: I0127 09:13:07.857810 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-685444497c-q8nbf" Jan 27 09:13:07 crc kubenswrapper[4985]: I0127 09:13:07.955502 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/635ffd32-9e1e-48a9-8560-36e92db872ee-ovsdbserver-sb\") pod \"635ffd32-9e1e-48a9-8560-36e92db872ee\" (UID: \"635ffd32-9e1e-48a9-8560-36e92db872ee\") " Jan 27 09:13:07 crc kubenswrapper[4985]: I0127 09:13:07.955608 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fp6t6\" (UniqueName: \"kubernetes.io/projected/635ffd32-9e1e-48a9-8560-36e92db872ee-kube-api-access-fp6t6\") pod \"635ffd32-9e1e-48a9-8560-36e92db872ee\" (UID: \"635ffd32-9e1e-48a9-8560-36e92db872ee\") " Jan 27 09:13:07 crc kubenswrapper[4985]: I0127 09:13:07.955687 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/635ffd32-9e1e-48a9-8560-36e92db872ee-config\") pod \"635ffd32-9e1e-48a9-8560-36e92db872ee\" (UID: \"635ffd32-9e1e-48a9-8560-36e92db872ee\") " Jan 27 09:13:07 crc kubenswrapper[4985]: I0127 09:13:07.955741 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/635ffd32-9e1e-48a9-8560-36e92db872ee-dns-swift-storage-0\") pod \"635ffd32-9e1e-48a9-8560-36e92db872ee\" (UID: \"635ffd32-9e1e-48a9-8560-36e92db872ee\") " Jan 27 09:13:07 crc kubenswrapper[4985]: I0127 09:13:07.955820 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/635ffd32-9e1e-48a9-8560-36e92db872ee-ovsdbserver-nb\") pod \"635ffd32-9e1e-48a9-8560-36e92db872ee\" (UID: \"635ffd32-9e1e-48a9-8560-36e92db872ee\") " Jan 27 09:13:07 crc kubenswrapper[4985]: I0127 09:13:07.955872 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/635ffd32-9e1e-48a9-8560-36e92db872ee-dns-svc\") pod \"635ffd32-9e1e-48a9-8560-36e92db872ee\" (UID: \"635ffd32-9e1e-48a9-8560-36e92db872ee\") " Jan 27 09:13:07 crc kubenswrapper[4985]: I0127 09:13:07.984243 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/635ffd32-9e1e-48a9-8560-36e92db872ee-kube-api-access-fp6t6" (OuterVolumeSpecName: "kube-api-access-fp6t6") pod "635ffd32-9e1e-48a9-8560-36e92db872ee" (UID: "635ffd32-9e1e-48a9-8560-36e92db872ee"). InnerVolumeSpecName "kube-api-access-fp6t6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:13:08 crc kubenswrapper[4985]: I0127 09:13:08.032268 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/635ffd32-9e1e-48a9-8560-36e92db872ee-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "635ffd32-9e1e-48a9-8560-36e92db872ee" (UID: "635ffd32-9e1e-48a9-8560-36e92db872ee"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:13:08 crc kubenswrapper[4985]: I0127 09:13:08.043711 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/635ffd32-9e1e-48a9-8560-36e92db872ee-config" (OuterVolumeSpecName: "config") pod "635ffd32-9e1e-48a9-8560-36e92db872ee" (UID: "635ffd32-9e1e-48a9-8560-36e92db872ee"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:13:08 crc kubenswrapper[4985]: I0127 09:13:08.052620 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/635ffd32-9e1e-48a9-8560-36e92db872ee-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "635ffd32-9e1e-48a9-8560-36e92db872ee" (UID: "635ffd32-9e1e-48a9-8560-36e92db872ee"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:13:08 crc kubenswrapper[4985]: I0127 09:13:08.054227 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/635ffd32-9e1e-48a9-8560-36e92db872ee-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "635ffd32-9e1e-48a9-8560-36e92db872ee" (UID: "635ffd32-9e1e-48a9-8560-36e92db872ee"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:13:08 crc kubenswrapper[4985]: I0127 09:13:08.058762 4985 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/635ffd32-9e1e-48a9-8560-36e92db872ee-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:08 crc kubenswrapper[4985]: I0127 09:13:08.058810 4985 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/635ffd32-9e1e-48a9-8560-36e92db872ee-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:08 crc kubenswrapper[4985]: I0127 09:13:08.058823 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fp6t6\" (UniqueName: \"kubernetes.io/projected/635ffd32-9e1e-48a9-8560-36e92db872ee-kube-api-access-fp6t6\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:08 crc kubenswrapper[4985]: I0127 09:13:08.058835 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/635ffd32-9e1e-48a9-8560-36e92db872ee-config\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:08 crc kubenswrapper[4985]: I0127 09:13:08.058847 4985 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/635ffd32-9e1e-48a9-8560-36e92db872ee-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:08 crc kubenswrapper[4985]: I0127 09:13:08.061843 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/635ffd32-9e1e-48a9-8560-36e92db872ee-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "635ffd32-9e1e-48a9-8560-36e92db872ee" (UID: "635ffd32-9e1e-48a9-8560-36e92db872ee"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:13:08 crc kubenswrapper[4985]: I0127 09:13:08.161118 4985 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/635ffd32-9e1e-48a9-8560-36e92db872ee-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:08 crc kubenswrapper[4985]: I0127 09:13:08.787020 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-d4789b966-88v9q" Jan 27 09:13:08 crc kubenswrapper[4985]: I0127 09:13:08.847888 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-685444497c-q8nbf" Jan 27 09:13:08 crc kubenswrapper[4985]: I0127 09:13:08.868817 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-685444497c-q8nbf"] Jan 27 09:13:08 crc kubenswrapper[4985]: I0127 09:13:08.878995 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-685444497c-q8nbf"] Jan 27 09:13:09 crc kubenswrapper[4985]: I0127 09:13:09.121021 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-d4789b966-88v9q" Jan 27 09:13:09 crc kubenswrapper[4985]: I0127 09:13:09.181767 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-774ff5bf6d-xl8xr"] Jan 27 09:13:09 crc kubenswrapper[4985]: I0127 09:13:09.185626 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-774ff5bf6d-xl8xr" podUID="4836a522-8ff0-48c0-837d-c1785dee8378" containerName="barbican-api-log" containerID="cri-o://d2ecc272d43218e713ed4260e7f22b752e76b206f1cadb252aef3eb9db39fb7c" gracePeriod=30 Jan 27 09:13:09 
crc kubenswrapper[4985]: I0127 09:13:09.186234 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-774ff5bf6d-xl8xr" podUID="4836a522-8ff0-48c0-837d-c1785dee8378" containerName="barbican-api" containerID="cri-o://a5910ea74453ddb6b86f9dc8836fd2ef32725d4420c4401408eb9094e8854e83" gracePeriod=30 Jan 27 09:13:09 crc kubenswrapper[4985]: I0127 09:13:09.859536 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-774ff5bf6d-xl8xr" event={"ID":"4836a522-8ff0-48c0-837d-c1785dee8378","Type":"ContainerDied","Data":"d2ecc272d43218e713ed4260e7f22b752e76b206f1cadb252aef3eb9db39fb7c"} Jan 27 09:13:09 crc kubenswrapper[4985]: I0127 09:13:09.859873 4985 generic.go:334] "Generic (PLEG): container finished" podID="4836a522-8ff0-48c0-837d-c1785dee8378" containerID="d2ecc272d43218e713ed4260e7f22b752e76b206f1cadb252aef3eb9db39fb7c" exitCode=143 Jan 27 09:13:09 crc kubenswrapper[4985]: I0127 09:13:09.864776 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5c57bbbf74-nrsd9" podUID="5fbbc8b9-e978-4565-9d19-bd139f2c4df7" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:33588->10.217.0.149:8443: read: connection reset by peer" Jan 27 09:13:10 crc kubenswrapper[4985]: I0127 09:13:10.461712 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="635ffd32-9e1e-48a9-8560-36e92db872ee" path="/var/lib/kubelet/pods/635ffd32-9e1e-48a9-8560-36e92db872ee/volumes" Jan 27 09:13:10 crc kubenswrapper[4985]: I0127 09:13:10.871439 4985 generic.go:334] "Generic (PLEG): container finished" podID="5fbbc8b9-e978-4565-9d19-bd139f2c4df7" containerID="f685f5d57bf90797e6960a0da540e2156808e3702d029e5231792e91efc492ec" exitCode=0 Jan 27 09:13:10 crc kubenswrapper[4985]: I0127 09:13:10.871501 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c57bbbf74-nrsd9" 
event={"ID":"5fbbc8b9-e978-4565-9d19-bd139f2c4df7","Type":"ContainerDied","Data":"f685f5d57bf90797e6960a0da540e2156808e3702d029e5231792e91efc492ec"} Jan 27 09:13:11 crc kubenswrapper[4985]: I0127 09:13:11.657797 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 09:13:11 crc kubenswrapper[4985]: I0127 09:13:11.749140 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29c473e3-7062-4725-a515-928807284b8d-config-data\") pod \"29c473e3-7062-4725-a515-928807284b8d\" (UID: \"29c473e3-7062-4725-a515-928807284b8d\") " Jan 27 09:13:11 crc kubenswrapper[4985]: I0127 09:13:11.749295 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrcrr\" (UniqueName: \"kubernetes.io/projected/29c473e3-7062-4725-a515-928807284b8d-kube-api-access-vrcrr\") pod \"29c473e3-7062-4725-a515-928807284b8d\" (UID: \"29c473e3-7062-4725-a515-928807284b8d\") " Jan 27 09:13:11 crc kubenswrapper[4985]: I0127 09:13:11.749348 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29c473e3-7062-4725-a515-928807284b8d-combined-ca-bundle\") pod \"29c473e3-7062-4725-a515-928807284b8d\" (UID: \"29c473e3-7062-4725-a515-928807284b8d\") " Jan 27 09:13:11 crc kubenswrapper[4985]: I0127 09:13:11.749374 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/29c473e3-7062-4725-a515-928807284b8d-config-data-custom\") pod \"29c473e3-7062-4725-a515-928807284b8d\" (UID: \"29c473e3-7062-4725-a515-928807284b8d\") " Jan 27 09:13:11 crc kubenswrapper[4985]: I0127 09:13:11.749459 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/29c473e3-7062-4725-a515-928807284b8d-scripts\") pod \"29c473e3-7062-4725-a515-928807284b8d\" (UID: \"29c473e3-7062-4725-a515-928807284b8d\") " Jan 27 09:13:11 crc kubenswrapper[4985]: I0127 09:13:11.749601 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/29c473e3-7062-4725-a515-928807284b8d-etc-machine-id\") pod \"29c473e3-7062-4725-a515-928807284b8d\" (UID: \"29c473e3-7062-4725-a515-928807284b8d\") " Jan 27 09:13:11 crc kubenswrapper[4985]: I0127 09:13:11.750738 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29c473e3-7062-4725-a515-928807284b8d-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "29c473e3-7062-4725-a515-928807284b8d" (UID: "29c473e3-7062-4725-a515-928807284b8d"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 09:13:11 crc kubenswrapper[4985]: I0127 09:13:11.754380 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29c473e3-7062-4725-a515-928807284b8d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "29c473e3-7062-4725-a515-928807284b8d" (UID: "29c473e3-7062-4725-a515-928807284b8d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:13:11 crc kubenswrapper[4985]: I0127 09:13:11.755742 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29c473e3-7062-4725-a515-928807284b8d-scripts" (OuterVolumeSpecName: "scripts") pod "29c473e3-7062-4725-a515-928807284b8d" (UID: "29c473e3-7062-4725-a515-928807284b8d"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:13:11 crc kubenswrapper[4985]: I0127 09:13:11.757280 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29c473e3-7062-4725-a515-928807284b8d-kube-api-access-vrcrr" (OuterVolumeSpecName: "kube-api-access-vrcrr") pod "29c473e3-7062-4725-a515-928807284b8d" (UID: "29c473e3-7062-4725-a515-928807284b8d"). InnerVolumeSpecName "kube-api-access-vrcrr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:13:11 crc kubenswrapper[4985]: I0127 09:13:11.811360 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29c473e3-7062-4725-a515-928807284b8d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "29c473e3-7062-4725-a515-928807284b8d" (UID: "29c473e3-7062-4725-a515-928807284b8d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:13:11 crc kubenswrapper[4985]: I0127 09:13:11.851986 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vrcrr\" (UniqueName: \"kubernetes.io/projected/29c473e3-7062-4725-a515-928807284b8d-kube-api-access-vrcrr\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:11 crc kubenswrapper[4985]: I0127 09:13:11.852014 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29c473e3-7062-4725-a515-928807284b8d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:11 crc kubenswrapper[4985]: I0127 09:13:11.852025 4985 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/29c473e3-7062-4725-a515-928807284b8d-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:11 crc kubenswrapper[4985]: I0127 09:13:11.852033 4985 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/29c473e3-7062-4725-a515-928807284b8d-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:11 crc kubenswrapper[4985]: I0127 09:13:11.852041 4985 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/29c473e3-7062-4725-a515-928807284b8d-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:11 crc kubenswrapper[4985]: I0127 09:13:11.859561 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29c473e3-7062-4725-a515-928807284b8d-config-data" (OuterVolumeSpecName: "config-data") pod "29c473e3-7062-4725-a515-928807284b8d" (UID: "29c473e3-7062-4725-a515-928807284b8d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:13:11 crc kubenswrapper[4985]: I0127 09:13:11.882805 4985 generic.go:334] "Generic (PLEG): container finished" podID="29c473e3-7062-4725-a515-928807284b8d" containerID="653f78ccd9185d7dbc7fea059a2bb640b42724493b0cfbb8599796f081a12cf2" exitCode=0 Jan 27 09:13:11 crc kubenswrapper[4985]: I0127 09:13:11.882869 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"29c473e3-7062-4725-a515-928807284b8d","Type":"ContainerDied","Data":"653f78ccd9185d7dbc7fea059a2bb640b42724493b0cfbb8599796f081a12cf2"} Jan 27 09:13:11 crc kubenswrapper[4985]: I0127 09:13:11.882919 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"29c473e3-7062-4725-a515-928807284b8d","Type":"ContainerDied","Data":"9c265a71b67177e5b775cd47ba1cfd9458374b2595ed3a62e93b7eaa104091a4"} Jan 27 09:13:11 crc kubenswrapper[4985]: I0127 09:13:11.882942 4985 scope.go:117] "RemoveContainer" containerID="fd6b6f8407d47a0b98ed3d3cc2e4c203b18d933a588c26242b43f27e27202fdc" Jan 27 09:13:11 crc kubenswrapper[4985]: I0127 09:13:11.883127 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 09:13:11 crc kubenswrapper[4985]: I0127 09:13:11.925677 4985 scope.go:117] "RemoveContainer" containerID="653f78ccd9185d7dbc7fea059a2bb640b42724493b0cfbb8599796f081a12cf2" Jan 27 09:13:11 crc kubenswrapper[4985]: I0127 09:13:11.933652 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 09:13:11 crc kubenswrapper[4985]: I0127 09:13:11.956320 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 09:13:11 crc kubenswrapper[4985]: I0127 09:13:11.962327 4985 scope.go:117] "RemoveContainer" containerID="fd6b6f8407d47a0b98ed3d3cc2e4c203b18d933a588c26242b43f27e27202fdc" Jan 27 09:13:11 crc kubenswrapper[4985]: E0127 09:13:11.963338 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd6b6f8407d47a0b98ed3d3cc2e4c203b18d933a588c26242b43f27e27202fdc\": container with ID starting with fd6b6f8407d47a0b98ed3d3cc2e4c203b18d933a588c26242b43f27e27202fdc not found: ID does not exist" containerID="fd6b6f8407d47a0b98ed3d3cc2e4c203b18d933a588c26242b43f27e27202fdc" Jan 27 09:13:11 crc kubenswrapper[4985]: I0127 09:13:11.963371 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd6b6f8407d47a0b98ed3d3cc2e4c203b18d933a588c26242b43f27e27202fdc"} err="failed to get container status \"fd6b6f8407d47a0b98ed3d3cc2e4c203b18d933a588c26242b43f27e27202fdc\": rpc error: code = NotFound desc = could not find container \"fd6b6f8407d47a0b98ed3d3cc2e4c203b18d933a588c26242b43f27e27202fdc\": container with ID starting with fd6b6f8407d47a0b98ed3d3cc2e4c203b18d933a588c26242b43f27e27202fdc not found: ID does not exist" Jan 27 09:13:11 crc kubenswrapper[4985]: I0127 09:13:11.963392 4985 scope.go:117] "RemoveContainer" containerID="653f78ccd9185d7dbc7fea059a2bb640b42724493b0cfbb8599796f081a12cf2" Jan 27 09:13:11 crc 
kubenswrapper[4985]: I0127 09:13:11.965090 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29c473e3-7062-4725-a515-928807284b8d-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:11 crc kubenswrapper[4985]: I0127 09:13:11.965140 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 09:13:11 crc kubenswrapper[4985]: E0127 09:13:11.965566 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="635ffd32-9e1e-48a9-8560-36e92db872ee" containerName="dnsmasq-dns" Jan 27 09:13:11 crc kubenswrapper[4985]: I0127 09:13:11.965583 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="635ffd32-9e1e-48a9-8560-36e92db872ee" containerName="dnsmasq-dns" Jan 27 09:13:11 crc kubenswrapper[4985]: E0127 09:13:11.965611 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="635ffd32-9e1e-48a9-8560-36e92db872ee" containerName="init" Jan 27 09:13:11 crc kubenswrapper[4985]: I0127 09:13:11.965618 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="635ffd32-9e1e-48a9-8560-36e92db872ee" containerName="init" Jan 27 09:13:11 crc kubenswrapper[4985]: E0127 09:13:11.965644 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29c473e3-7062-4725-a515-928807284b8d" containerName="cinder-scheduler" Jan 27 09:13:11 crc kubenswrapper[4985]: I0127 09:13:11.965653 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="29c473e3-7062-4725-a515-928807284b8d" containerName="cinder-scheduler" Jan 27 09:13:11 crc kubenswrapper[4985]: E0127 09:13:11.965661 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29c473e3-7062-4725-a515-928807284b8d" containerName="probe" Jan 27 09:13:11 crc kubenswrapper[4985]: I0127 09:13:11.965667 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="29c473e3-7062-4725-a515-928807284b8d" containerName="probe" Jan 27 09:13:11 crc kubenswrapper[4985]: E0127 09:13:11.966222 4985 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"653f78ccd9185d7dbc7fea059a2bb640b42724493b0cfbb8599796f081a12cf2\": container with ID starting with 653f78ccd9185d7dbc7fea059a2bb640b42724493b0cfbb8599796f081a12cf2 not found: ID does not exist" containerID="653f78ccd9185d7dbc7fea059a2bb640b42724493b0cfbb8599796f081a12cf2" Jan 27 09:13:11 crc kubenswrapper[4985]: I0127 09:13:11.966288 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"653f78ccd9185d7dbc7fea059a2bb640b42724493b0cfbb8599796f081a12cf2"} err="failed to get container status \"653f78ccd9185d7dbc7fea059a2bb640b42724493b0cfbb8599796f081a12cf2\": rpc error: code = NotFound desc = could not find container \"653f78ccd9185d7dbc7fea059a2bb640b42724493b0cfbb8599796f081a12cf2\": container with ID starting with 653f78ccd9185d7dbc7fea059a2bb640b42724493b0cfbb8599796f081a12cf2 not found: ID does not exist" Jan 27 09:13:11 crc kubenswrapper[4985]: I0127 09:13:11.967927 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="29c473e3-7062-4725-a515-928807284b8d" containerName="probe" Jan 27 09:13:11 crc kubenswrapper[4985]: I0127 09:13:11.967961 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="635ffd32-9e1e-48a9-8560-36e92db872ee" containerName="dnsmasq-dns" Jan 27 09:13:11 crc kubenswrapper[4985]: I0127 09:13:11.967975 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="29c473e3-7062-4725-a515-928807284b8d" containerName="cinder-scheduler" Jan 27 09:13:11 crc kubenswrapper[4985]: I0127 09:13:11.969393 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 09:13:11 crc kubenswrapper[4985]: I0127 09:13:11.973177 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 27 09:13:11 crc kubenswrapper[4985]: I0127 09:13:11.982702 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 09:13:12 crc kubenswrapper[4985]: I0127 09:13:12.066293 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91ba4f05-fb65-42a0-a26f-c369615b0de3-config-data\") pod \"cinder-scheduler-0\" (UID: \"91ba4f05-fb65-42a0-a26f-c369615b0de3\") " pod="openstack/cinder-scheduler-0" Jan 27 09:13:12 crc kubenswrapper[4985]: I0127 09:13:12.066348 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91ba4f05-fb65-42a0-a26f-c369615b0de3-scripts\") pod \"cinder-scheduler-0\" (UID: \"91ba4f05-fb65-42a0-a26f-c369615b0de3\") " pod="openstack/cinder-scheduler-0" Jan 27 09:13:12 crc kubenswrapper[4985]: I0127 09:13:12.066411 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/91ba4f05-fb65-42a0-a26f-c369615b0de3-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"91ba4f05-fb65-42a0-a26f-c369615b0de3\") " pod="openstack/cinder-scheduler-0" Jan 27 09:13:12 crc kubenswrapper[4985]: I0127 09:13:12.066716 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dh49f\" (UniqueName: \"kubernetes.io/projected/91ba4f05-fb65-42a0-a26f-c369615b0de3-kube-api-access-dh49f\") pod \"cinder-scheduler-0\" (UID: \"91ba4f05-fb65-42a0-a26f-c369615b0de3\") " pod="openstack/cinder-scheduler-0" Jan 27 09:13:12 crc kubenswrapper[4985]: I0127 09:13:12.066779 4985 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91ba4f05-fb65-42a0-a26f-c369615b0de3-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"91ba4f05-fb65-42a0-a26f-c369615b0de3\") " pod="openstack/cinder-scheduler-0" Jan 27 09:13:12 crc kubenswrapper[4985]: I0127 09:13:12.066814 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/91ba4f05-fb65-42a0-a26f-c369615b0de3-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"91ba4f05-fb65-42a0-a26f-c369615b0de3\") " pod="openstack/cinder-scheduler-0" Jan 27 09:13:12 crc kubenswrapper[4985]: I0127 09:13:12.168485 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dh49f\" (UniqueName: \"kubernetes.io/projected/91ba4f05-fb65-42a0-a26f-c369615b0de3-kube-api-access-dh49f\") pod \"cinder-scheduler-0\" (UID: \"91ba4f05-fb65-42a0-a26f-c369615b0de3\") " pod="openstack/cinder-scheduler-0" Jan 27 09:13:12 crc kubenswrapper[4985]: I0127 09:13:12.168844 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91ba4f05-fb65-42a0-a26f-c369615b0de3-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"91ba4f05-fb65-42a0-a26f-c369615b0de3\") " pod="openstack/cinder-scheduler-0" Jan 27 09:13:12 crc kubenswrapper[4985]: I0127 09:13:12.168872 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/91ba4f05-fb65-42a0-a26f-c369615b0de3-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"91ba4f05-fb65-42a0-a26f-c369615b0de3\") " pod="openstack/cinder-scheduler-0" Jan 27 09:13:12 crc kubenswrapper[4985]: I0127 09:13:12.168942 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/91ba4f05-fb65-42a0-a26f-c369615b0de3-config-data\") pod \"cinder-scheduler-0\" (UID: \"91ba4f05-fb65-42a0-a26f-c369615b0de3\") " pod="openstack/cinder-scheduler-0" Jan 27 09:13:12 crc kubenswrapper[4985]: I0127 09:13:12.168972 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91ba4f05-fb65-42a0-a26f-c369615b0de3-scripts\") pod \"cinder-scheduler-0\" (UID: \"91ba4f05-fb65-42a0-a26f-c369615b0de3\") " pod="openstack/cinder-scheduler-0" Jan 27 09:13:12 crc kubenswrapper[4985]: I0127 09:13:12.168996 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/91ba4f05-fb65-42a0-a26f-c369615b0de3-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"91ba4f05-fb65-42a0-a26f-c369615b0de3\") " pod="openstack/cinder-scheduler-0" Jan 27 09:13:12 crc kubenswrapper[4985]: I0127 09:13:12.169113 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/91ba4f05-fb65-42a0-a26f-c369615b0de3-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"91ba4f05-fb65-42a0-a26f-c369615b0de3\") " pod="openstack/cinder-scheduler-0" Jan 27 09:13:12 crc kubenswrapper[4985]: I0127 09:13:12.175268 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/91ba4f05-fb65-42a0-a26f-c369615b0de3-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"91ba4f05-fb65-42a0-a26f-c369615b0de3\") " pod="openstack/cinder-scheduler-0" Jan 27 09:13:12 crc kubenswrapper[4985]: I0127 09:13:12.175461 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91ba4f05-fb65-42a0-a26f-c369615b0de3-config-data\") pod \"cinder-scheduler-0\" (UID: \"91ba4f05-fb65-42a0-a26f-c369615b0de3\") " 
pod="openstack/cinder-scheduler-0" Jan 27 09:13:12 crc kubenswrapper[4985]: I0127 09:13:12.176652 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91ba4f05-fb65-42a0-a26f-c369615b0de3-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"91ba4f05-fb65-42a0-a26f-c369615b0de3\") " pod="openstack/cinder-scheduler-0" Jan 27 09:13:12 crc kubenswrapper[4985]: I0127 09:13:12.183260 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91ba4f05-fb65-42a0-a26f-c369615b0de3-scripts\") pod \"cinder-scheduler-0\" (UID: \"91ba4f05-fb65-42a0-a26f-c369615b0de3\") " pod="openstack/cinder-scheduler-0" Jan 27 09:13:12 crc kubenswrapper[4985]: I0127 09:13:12.187995 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dh49f\" (UniqueName: \"kubernetes.io/projected/91ba4f05-fb65-42a0-a26f-c369615b0de3-kube-api-access-dh49f\") pod \"cinder-scheduler-0\" (UID: \"91ba4f05-fb65-42a0-a26f-c369615b0de3\") " pod="openstack/cinder-scheduler-0" Jan 27 09:13:12 crc kubenswrapper[4985]: I0127 09:13:12.309797 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 09:13:12 crc kubenswrapper[4985]: I0127 09:13:12.418650 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-774ff5bf6d-xl8xr" podUID="4836a522-8ff0-48c0-837d-c1785dee8378" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.165:9311/healthcheck\": read tcp 10.217.0.2:49316->10.217.0.165:9311: read: connection reset by peer" Jan 27 09:13:12 crc kubenswrapper[4985]: I0127 09:13:12.418691 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-774ff5bf6d-xl8xr" podUID="4836a522-8ff0-48c0-837d-c1785dee8378" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.165:9311/healthcheck\": read tcp 10.217.0.2:49320->10.217.0.165:9311: read: connection reset by peer" Jan 27 09:13:12 crc kubenswrapper[4985]: I0127 09:13:12.557676 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29c473e3-7062-4725-a515-928807284b8d" path="/var/lib/kubelet/pods/29c473e3-7062-4725-a515-928807284b8d/volumes" Jan 27 09:13:12 crc kubenswrapper[4985]: I0127 09:13:12.659258 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5c57bbbf74-nrsd9" podUID="5fbbc8b9-e978-4565-9d19-bd139f2c4df7" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Jan 27 09:13:12 crc kubenswrapper[4985]: I0127 09:13:12.912576 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 09:13:12 crc kubenswrapper[4985]: I0127 09:13:12.914329 4985 generic.go:334] "Generic (PLEG): container finished" podID="4836a522-8ff0-48c0-837d-c1785dee8378" containerID="a5910ea74453ddb6b86f9dc8836fd2ef32725d4420c4401408eb9094e8854e83" exitCode=0 Jan 27 09:13:12 crc kubenswrapper[4985]: I0127 09:13:12.914541 4985 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-774ff5bf6d-xl8xr" event={"ID":"4836a522-8ff0-48c0-837d-c1785dee8378","Type":"ContainerDied","Data":"a5910ea74453ddb6b86f9dc8836fd2ef32725d4420c4401408eb9094e8854e83"} Jan 27 09:13:13 crc kubenswrapper[4985]: I0127 09:13:13.045917 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-774ff5bf6d-xl8xr" Jan 27 09:13:13 crc kubenswrapper[4985]: I0127 09:13:13.098570 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 27 09:13:13 crc kubenswrapper[4985]: I0127 09:13:13.151818 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4836a522-8ff0-48c0-837d-c1785dee8378-logs\") pod \"4836a522-8ff0-48c0-837d-c1785dee8378\" (UID: \"4836a522-8ff0-48c0-837d-c1785dee8378\") " Jan 27 09:13:13 crc kubenswrapper[4985]: I0127 09:13:13.151884 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf84f\" (UniqueName: \"kubernetes.io/projected/4836a522-8ff0-48c0-837d-c1785dee8378-kube-api-access-gf84f\") pod \"4836a522-8ff0-48c0-837d-c1785dee8378\" (UID: \"4836a522-8ff0-48c0-837d-c1785dee8378\") " Jan 27 09:13:13 crc kubenswrapper[4985]: I0127 09:13:13.151951 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4836a522-8ff0-48c0-837d-c1785dee8378-config-data\") pod \"4836a522-8ff0-48c0-837d-c1785dee8378\" (UID: \"4836a522-8ff0-48c0-837d-c1785dee8378\") " Jan 27 09:13:13 crc kubenswrapper[4985]: I0127 09:13:13.152084 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4836a522-8ff0-48c0-837d-c1785dee8378-combined-ca-bundle\") pod \"4836a522-8ff0-48c0-837d-c1785dee8378\" (UID: \"4836a522-8ff0-48c0-837d-c1785dee8378\") " Jan 27 
09:13:13 crc kubenswrapper[4985]: I0127 09:13:13.152154 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4836a522-8ff0-48c0-837d-c1785dee8378-config-data-custom\") pod \"4836a522-8ff0-48c0-837d-c1785dee8378\" (UID: \"4836a522-8ff0-48c0-837d-c1785dee8378\") " Jan 27 09:13:13 crc kubenswrapper[4985]: I0127 09:13:13.165019 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4836a522-8ff0-48c0-837d-c1785dee8378-logs" (OuterVolumeSpecName: "logs") pod "4836a522-8ff0-48c0-837d-c1785dee8378" (UID: "4836a522-8ff0-48c0-837d-c1785dee8378"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:13:13 crc kubenswrapper[4985]: I0127 09:13:13.171296 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4836a522-8ff0-48c0-837d-c1785dee8378-kube-api-access-gf84f" (OuterVolumeSpecName: "kube-api-access-gf84f") pod "4836a522-8ff0-48c0-837d-c1785dee8378" (UID: "4836a522-8ff0-48c0-837d-c1785dee8378"). InnerVolumeSpecName "kube-api-access-gf84f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:13:13 crc kubenswrapper[4985]: I0127 09:13:13.182817 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4836a522-8ff0-48c0-837d-c1785dee8378-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "4836a522-8ff0-48c0-837d-c1785dee8378" (UID: "4836a522-8ff0-48c0-837d-c1785dee8378"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:13:13 crc kubenswrapper[4985]: I0127 09:13:13.254037 4985 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4836a522-8ff0-48c0-837d-c1785dee8378-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:13 crc kubenswrapper[4985]: I0127 09:13:13.254367 4985 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4836a522-8ff0-48c0-837d-c1785dee8378-logs\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:13 crc kubenswrapper[4985]: I0127 09:13:13.254433 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf84f\" (UniqueName: \"kubernetes.io/projected/4836a522-8ff0-48c0-837d-c1785dee8378-kube-api-access-gf84f\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:13 crc kubenswrapper[4985]: I0127 09:13:13.266704 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4836a522-8ff0-48c0-837d-c1785dee8378-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4836a522-8ff0-48c0-837d-c1785dee8378" (UID: "4836a522-8ff0-48c0-837d-c1785dee8378"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:13:13 crc kubenswrapper[4985]: I0127 09:13:13.272652 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4836a522-8ff0-48c0-837d-c1785dee8378-config-data" (OuterVolumeSpecName: "config-data") pod "4836a522-8ff0-48c0-837d-c1785dee8378" (UID: "4836a522-8ff0-48c0-837d-c1785dee8378"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:13:13 crc kubenswrapper[4985]: I0127 09:13:13.356847 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4836a522-8ff0-48c0-837d-c1785dee8378-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:13 crc kubenswrapper[4985]: I0127 09:13:13.357097 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4836a522-8ff0-48c0-837d-c1785dee8378-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:13 crc kubenswrapper[4985]: I0127 09:13:13.973276 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-774ff5bf6d-xl8xr" event={"ID":"4836a522-8ff0-48c0-837d-c1785dee8378","Type":"ContainerDied","Data":"b1923deb801a24310843e58fe728b4981cc636c5d4527f51d128866c2117f1a7"} Jan 27 09:13:13 crc kubenswrapper[4985]: I0127 09:13:13.973697 4985 scope.go:117] "RemoveContainer" containerID="a5910ea74453ddb6b86f9dc8836fd2ef32725d4420c4401408eb9094e8854e83" Jan 27 09:13:13 crc kubenswrapper[4985]: I0127 09:13:13.973761 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-774ff5bf6d-xl8xr" Jan 27 09:13:13 crc kubenswrapper[4985]: I0127 09:13:13.976723 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"91ba4f05-fb65-42a0-a26f-c369615b0de3","Type":"ContainerStarted","Data":"41a18f0fc6075f60284511c443f35c4a81884696dce53370d396d2ae0cdcef85"} Jan 27 09:13:13 crc kubenswrapper[4985]: I0127 09:13:13.976744 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"91ba4f05-fb65-42a0-a26f-c369615b0de3","Type":"ContainerStarted","Data":"871e10fa54f855dd749d6db056c3c54c856703ea7fd9e281693b786103479604"} Jan 27 09:13:14 crc kubenswrapper[4985]: I0127 09:13:14.055364 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-774ff5bf6d-xl8xr"] Jan 27 09:13:14 crc kubenswrapper[4985]: I0127 09:13:14.071913 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-774ff5bf6d-xl8xr"] Jan 27 09:13:14 crc kubenswrapper[4985]: I0127 09:13:14.083430 4985 scope.go:117] "RemoveContainer" containerID="d2ecc272d43218e713ed4260e7f22b752e76b206f1cadb252aef3eb9db39fb7c" Jan 27 09:13:14 crc kubenswrapper[4985]: I0127 09:13:14.107876 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-878b56798-5d5wm" Jan 27 09:13:14 crc kubenswrapper[4985]: I0127 09:13:14.108536 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-878b56798-5d5wm" Jan 27 09:13:14 crc kubenswrapper[4985]: I0127 09:13:14.210921 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-786bc44b8-jnlsn" Jan 27 09:13:14 crc kubenswrapper[4985]: I0127 09:13:14.495154 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4836a522-8ff0-48c0-837d-c1785dee8378" path="/var/lib/kubelet/pods/4836a522-8ff0-48c0-837d-c1785dee8378/volumes" Jan 27 09:13:14 crc 
kubenswrapper[4985]: I0127 09:13:14.989680 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"91ba4f05-fb65-42a0-a26f-c369615b0de3","Type":"ContainerStarted","Data":"f0fa1e8a413021f582016ef30ae3e2b617627ea24db658627809e945fa43e873"} Jan 27 09:13:15 crc kubenswrapper[4985]: I0127 09:13:15.009213 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.009193759 podStartE2EDuration="4.009193759s" podCreationTimestamp="2026-01-27 09:13:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:13:15.007112922 +0000 UTC m=+1179.298207763" watchObservedRunningTime="2026-01-27 09:13:15.009193759 +0000 UTC m=+1179.300288590" Jan 27 09:13:17 crc kubenswrapper[4985]: I0127 09:13:17.183308 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 27 09:13:17 crc kubenswrapper[4985]: E0127 09:13:17.185029 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4836a522-8ff0-48c0-837d-c1785dee8378" containerName="barbican-api" Jan 27 09:13:17 crc kubenswrapper[4985]: I0127 09:13:17.185090 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="4836a522-8ff0-48c0-837d-c1785dee8378" containerName="barbican-api" Jan 27 09:13:17 crc kubenswrapper[4985]: E0127 09:13:17.185123 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4836a522-8ff0-48c0-837d-c1785dee8378" containerName="barbican-api-log" Jan 27 09:13:17 crc kubenswrapper[4985]: I0127 09:13:17.185129 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="4836a522-8ff0-48c0-837d-c1785dee8378" containerName="barbican-api-log" Jan 27 09:13:17 crc kubenswrapper[4985]: I0127 09:13:17.185297 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="4836a522-8ff0-48c0-837d-c1785dee8378" containerName="barbican-api" Jan 27 09:13:17 
crc kubenswrapper[4985]: I0127 09:13:17.185318 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="4836a522-8ff0-48c0-837d-c1785dee8378" containerName="barbican-api-log" Jan 27 09:13:17 crc kubenswrapper[4985]: I0127 09:13:17.185990 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 27 09:13:17 crc kubenswrapper[4985]: I0127 09:13:17.188286 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-9g8qq" Jan 27 09:13:17 crc kubenswrapper[4985]: I0127 09:13:17.188439 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 27 09:13:17 crc kubenswrapper[4985]: I0127 09:13:17.189487 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 27 09:13:17 crc kubenswrapper[4985]: I0127 09:13:17.195992 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 27 09:13:17 crc kubenswrapper[4985]: I0127 09:13:17.255759 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a110a4f-4669-42cb-9a7a-acb80ad9c3e2-combined-ca-bundle\") pod \"openstackclient\" (UID: \"1a110a4f-4669-42cb-9a7a-acb80ad9c3e2\") " pod="openstack/openstackclient" Jan 27 09:13:17 crc kubenswrapper[4985]: I0127 09:13:17.255847 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1a110a4f-4669-42cb-9a7a-acb80ad9c3e2-openstack-config\") pod \"openstackclient\" (UID: \"1a110a4f-4669-42cb-9a7a-acb80ad9c3e2\") " pod="openstack/openstackclient" Jan 27 09:13:17 crc kubenswrapper[4985]: I0127 09:13:17.255880 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdkbl\" 
(UniqueName: \"kubernetes.io/projected/1a110a4f-4669-42cb-9a7a-acb80ad9c3e2-kube-api-access-wdkbl\") pod \"openstackclient\" (UID: \"1a110a4f-4669-42cb-9a7a-acb80ad9c3e2\") " pod="openstack/openstackclient" Jan 27 09:13:17 crc kubenswrapper[4985]: I0127 09:13:17.255926 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1a110a4f-4669-42cb-9a7a-acb80ad9c3e2-openstack-config-secret\") pod \"openstackclient\" (UID: \"1a110a4f-4669-42cb-9a7a-acb80ad9c3e2\") " pod="openstack/openstackclient" Jan 27 09:13:17 crc kubenswrapper[4985]: I0127 09:13:17.310124 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 27 09:13:17 crc kubenswrapper[4985]: I0127 09:13:17.358338 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a110a4f-4669-42cb-9a7a-acb80ad9c3e2-combined-ca-bundle\") pod \"openstackclient\" (UID: \"1a110a4f-4669-42cb-9a7a-acb80ad9c3e2\") " pod="openstack/openstackclient" Jan 27 09:13:17 crc kubenswrapper[4985]: I0127 09:13:17.358392 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1a110a4f-4669-42cb-9a7a-acb80ad9c3e2-openstack-config\") pod \"openstackclient\" (UID: \"1a110a4f-4669-42cb-9a7a-acb80ad9c3e2\") " pod="openstack/openstackclient" Jan 27 09:13:17 crc kubenswrapper[4985]: I0127 09:13:17.358421 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdkbl\" (UniqueName: \"kubernetes.io/projected/1a110a4f-4669-42cb-9a7a-acb80ad9c3e2-kube-api-access-wdkbl\") pod \"openstackclient\" (UID: \"1a110a4f-4669-42cb-9a7a-acb80ad9c3e2\") " pod="openstack/openstackclient" Jan 27 09:13:17 crc kubenswrapper[4985]: I0127 09:13:17.358467 4985 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1a110a4f-4669-42cb-9a7a-acb80ad9c3e2-openstack-config-secret\") pod \"openstackclient\" (UID: \"1a110a4f-4669-42cb-9a7a-acb80ad9c3e2\") " pod="openstack/openstackclient" Jan 27 09:13:17 crc kubenswrapper[4985]: I0127 09:13:17.359897 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1a110a4f-4669-42cb-9a7a-acb80ad9c3e2-openstack-config\") pod \"openstackclient\" (UID: \"1a110a4f-4669-42cb-9a7a-acb80ad9c3e2\") " pod="openstack/openstackclient" Jan 27 09:13:17 crc kubenswrapper[4985]: I0127 09:13:17.365117 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a110a4f-4669-42cb-9a7a-acb80ad9c3e2-combined-ca-bundle\") pod \"openstackclient\" (UID: \"1a110a4f-4669-42cb-9a7a-acb80ad9c3e2\") " pod="openstack/openstackclient" Jan 27 09:13:17 crc kubenswrapper[4985]: I0127 09:13:17.365687 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1a110a4f-4669-42cb-9a7a-acb80ad9c3e2-openstack-config-secret\") pod \"openstackclient\" (UID: \"1a110a4f-4669-42cb-9a7a-acb80ad9c3e2\") " pod="openstack/openstackclient" Jan 27 09:13:17 crc kubenswrapper[4985]: I0127 09:13:17.380419 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdkbl\" (UniqueName: \"kubernetes.io/projected/1a110a4f-4669-42cb-9a7a-acb80ad9c3e2-kube-api-access-wdkbl\") pod \"openstackclient\" (UID: \"1a110a4f-4669-42cb-9a7a-acb80ad9c3e2\") " pod="openstack/openstackclient" Jan 27 09:13:17 crc kubenswrapper[4985]: I0127 09:13:17.519779 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 27 09:13:18 crc kubenswrapper[4985]: I0127 09:13:18.079117 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 27 09:13:19 crc kubenswrapper[4985]: I0127 09:13:19.039641 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"1a110a4f-4669-42cb-9a7a-acb80ad9c3e2","Type":"ContainerStarted","Data":"835bb95cfb182c3efb501addd71699ded9109a98b7eeaca04bae1d3bd5370cac"} Jan 27 09:13:22 crc kubenswrapper[4985]: I0127 09:13:22.551047 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-689489568f-6ggjw"] Jan 27 09:13:22 crc kubenswrapper[4985]: I0127 09:13:22.553354 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-689489568f-6ggjw" Jan 27 09:13:22 crc kubenswrapper[4985]: I0127 09:13:22.556472 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 27 09:13:22 crc kubenswrapper[4985]: I0127 09:13:22.560038 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 27 09:13:22 crc kubenswrapper[4985]: I0127 09:13:22.561787 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 27 09:13:22 crc kubenswrapper[4985]: I0127 09:13:22.565689 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-689489568f-6ggjw"] Jan 27 09:13:22 crc kubenswrapper[4985]: I0127 09:13:22.636540 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 27 09:13:22 crc kubenswrapper[4985]: I0127 09:13:22.645745 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5c57bbbf74-nrsd9" podUID="5fbbc8b9-e978-4565-9d19-bd139f2c4df7" containerName="horizon" probeResult="failure" output="Get 
\"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Jan 27 09:13:22 crc kubenswrapper[4985]: I0127 09:13:22.696331 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3193865d-81a4-4cb6-baee-7f44246f4caa-run-httpd\") pod \"swift-proxy-689489568f-6ggjw\" (UID: \"3193865d-81a4-4cb6-baee-7f44246f4caa\") " pod="openstack/swift-proxy-689489568f-6ggjw" Jan 27 09:13:22 crc kubenswrapper[4985]: I0127 09:13:22.696384 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3193865d-81a4-4cb6-baee-7f44246f4caa-etc-swift\") pod \"swift-proxy-689489568f-6ggjw\" (UID: \"3193865d-81a4-4cb6-baee-7f44246f4caa\") " pod="openstack/swift-proxy-689489568f-6ggjw" Jan 27 09:13:22 crc kubenswrapper[4985]: I0127 09:13:22.696449 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3193865d-81a4-4cb6-baee-7f44246f4caa-combined-ca-bundle\") pod \"swift-proxy-689489568f-6ggjw\" (UID: \"3193865d-81a4-4cb6-baee-7f44246f4caa\") " pod="openstack/swift-proxy-689489568f-6ggjw" Jan 27 09:13:22 crc kubenswrapper[4985]: I0127 09:13:22.696491 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3193865d-81a4-4cb6-baee-7f44246f4caa-internal-tls-certs\") pod \"swift-proxy-689489568f-6ggjw\" (UID: \"3193865d-81a4-4cb6-baee-7f44246f4caa\") " pod="openstack/swift-proxy-689489568f-6ggjw" Jan 27 09:13:22 crc kubenswrapper[4985]: I0127 09:13:22.696654 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/3193865d-81a4-4cb6-baee-7f44246f4caa-config-data\") pod \"swift-proxy-689489568f-6ggjw\" (UID: \"3193865d-81a4-4cb6-baee-7f44246f4caa\") " pod="openstack/swift-proxy-689489568f-6ggjw" Jan 27 09:13:22 crc kubenswrapper[4985]: I0127 09:13:22.696750 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3193865d-81a4-4cb6-baee-7f44246f4caa-public-tls-certs\") pod \"swift-proxy-689489568f-6ggjw\" (UID: \"3193865d-81a4-4cb6-baee-7f44246f4caa\") " pod="openstack/swift-proxy-689489568f-6ggjw" Jan 27 09:13:22 crc kubenswrapper[4985]: I0127 09:13:22.696814 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3193865d-81a4-4cb6-baee-7f44246f4caa-log-httpd\") pod \"swift-proxy-689489568f-6ggjw\" (UID: \"3193865d-81a4-4cb6-baee-7f44246f4caa\") " pod="openstack/swift-proxy-689489568f-6ggjw" Jan 27 09:13:22 crc kubenswrapper[4985]: I0127 09:13:22.696844 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6czct\" (UniqueName: \"kubernetes.io/projected/3193865d-81a4-4cb6-baee-7f44246f4caa-kube-api-access-6czct\") pod \"swift-proxy-689489568f-6ggjw\" (UID: \"3193865d-81a4-4cb6-baee-7f44246f4caa\") " pod="openstack/swift-proxy-689489568f-6ggjw" Jan 27 09:13:22 crc kubenswrapper[4985]: I0127 09:13:22.802392 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3193865d-81a4-4cb6-baee-7f44246f4caa-public-tls-certs\") pod \"swift-proxy-689489568f-6ggjw\" (UID: \"3193865d-81a4-4cb6-baee-7f44246f4caa\") " pod="openstack/swift-proxy-689489568f-6ggjw" Jan 27 09:13:22 crc kubenswrapper[4985]: I0127 09:13:22.802847 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/3193865d-81a4-4cb6-baee-7f44246f4caa-log-httpd\") pod \"swift-proxy-689489568f-6ggjw\" (UID: \"3193865d-81a4-4cb6-baee-7f44246f4caa\") " pod="openstack/swift-proxy-689489568f-6ggjw" Jan 27 09:13:22 crc kubenswrapper[4985]: I0127 09:13:22.802887 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6czct\" (UniqueName: \"kubernetes.io/projected/3193865d-81a4-4cb6-baee-7f44246f4caa-kube-api-access-6czct\") pod \"swift-proxy-689489568f-6ggjw\" (UID: \"3193865d-81a4-4cb6-baee-7f44246f4caa\") " pod="openstack/swift-proxy-689489568f-6ggjw" Jan 27 09:13:22 crc kubenswrapper[4985]: I0127 09:13:22.803033 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3193865d-81a4-4cb6-baee-7f44246f4caa-run-httpd\") pod \"swift-proxy-689489568f-6ggjw\" (UID: \"3193865d-81a4-4cb6-baee-7f44246f4caa\") " pod="openstack/swift-proxy-689489568f-6ggjw" Jan 27 09:13:22 crc kubenswrapper[4985]: I0127 09:13:22.803067 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3193865d-81a4-4cb6-baee-7f44246f4caa-etc-swift\") pod \"swift-proxy-689489568f-6ggjw\" (UID: \"3193865d-81a4-4cb6-baee-7f44246f4caa\") " pod="openstack/swift-proxy-689489568f-6ggjw" Jan 27 09:13:22 crc kubenswrapper[4985]: I0127 09:13:22.803173 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3193865d-81a4-4cb6-baee-7f44246f4caa-combined-ca-bundle\") pod \"swift-proxy-689489568f-6ggjw\" (UID: \"3193865d-81a4-4cb6-baee-7f44246f4caa\") " pod="openstack/swift-proxy-689489568f-6ggjw" Jan 27 09:13:22 crc kubenswrapper[4985]: I0127 09:13:22.803258 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/3193865d-81a4-4cb6-baee-7f44246f4caa-internal-tls-certs\") pod \"swift-proxy-689489568f-6ggjw\" (UID: \"3193865d-81a4-4cb6-baee-7f44246f4caa\") " pod="openstack/swift-proxy-689489568f-6ggjw" Jan 27 09:13:22 crc kubenswrapper[4985]: I0127 09:13:22.803377 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3193865d-81a4-4cb6-baee-7f44246f4caa-config-data\") pod \"swift-proxy-689489568f-6ggjw\" (UID: \"3193865d-81a4-4cb6-baee-7f44246f4caa\") " pod="openstack/swift-proxy-689489568f-6ggjw" Jan 27 09:13:22 crc kubenswrapper[4985]: I0127 09:13:22.804304 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3193865d-81a4-4cb6-baee-7f44246f4caa-log-httpd\") pod \"swift-proxy-689489568f-6ggjw\" (UID: \"3193865d-81a4-4cb6-baee-7f44246f4caa\") " pod="openstack/swift-proxy-689489568f-6ggjw" Jan 27 09:13:22 crc kubenswrapper[4985]: I0127 09:13:22.805133 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3193865d-81a4-4cb6-baee-7f44246f4caa-run-httpd\") pod \"swift-proxy-689489568f-6ggjw\" (UID: \"3193865d-81a4-4cb6-baee-7f44246f4caa\") " pod="openstack/swift-proxy-689489568f-6ggjw" Jan 27 09:13:22 crc kubenswrapper[4985]: I0127 09:13:22.810089 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3193865d-81a4-4cb6-baee-7f44246f4caa-combined-ca-bundle\") pod \"swift-proxy-689489568f-6ggjw\" (UID: \"3193865d-81a4-4cb6-baee-7f44246f4caa\") " pod="openstack/swift-proxy-689489568f-6ggjw" Jan 27 09:13:22 crc kubenswrapper[4985]: I0127 09:13:22.810927 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3193865d-81a4-4cb6-baee-7f44246f4caa-internal-tls-certs\") pod 
\"swift-proxy-689489568f-6ggjw\" (UID: \"3193865d-81a4-4cb6-baee-7f44246f4caa\") " pod="openstack/swift-proxy-689489568f-6ggjw" Jan 27 09:13:22 crc kubenswrapper[4985]: I0127 09:13:22.811994 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3193865d-81a4-4cb6-baee-7f44246f4caa-etc-swift\") pod \"swift-proxy-689489568f-6ggjw\" (UID: \"3193865d-81a4-4cb6-baee-7f44246f4caa\") " pod="openstack/swift-proxy-689489568f-6ggjw" Jan 27 09:13:22 crc kubenswrapper[4985]: I0127 09:13:22.813294 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3193865d-81a4-4cb6-baee-7f44246f4caa-public-tls-certs\") pod \"swift-proxy-689489568f-6ggjw\" (UID: \"3193865d-81a4-4cb6-baee-7f44246f4caa\") " pod="openstack/swift-proxy-689489568f-6ggjw" Jan 27 09:13:22 crc kubenswrapper[4985]: I0127 09:13:22.823883 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3193865d-81a4-4cb6-baee-7f44246f4caa-config-data\") pod \"swift-proxy-689489568f-6ggjw\" (UID: \"3193865d-81a4-4cb6-baee-7f44246f4caa\") " pod="openstack/swift-proxy-689489568f-6ggjw" Jan 27 09:13:22 crc kubenswrapper[4985]: I0127 09:13:22.825729 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6czct\" (UniqueName: \"kubernetes.io/projected/3193865d-81a4-4cb6-baee-7f44246f4caa-kube-api-access-6czct\") pod \"swift-proxy-689489568f-6ggjw\" (UID: \"3193865d-81a4-4cb6-baee-7f44246f4caa\") " pod="openstack/swift-proxy-689489568f-6ggjw" Jan 27 09:13:22 crc kubenswrapper[4985]: I0127 09:13:22.888249 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-689489568f-6ggjw" Jan 27 09:13:24 crc kubenswrapper[4985]: I0127 09:13:24.636695 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 09:13:24 crc kubenswrapper[4985]: I0127 09:13:24.637302 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="91ba4f05-fb65-42a0-a26f-c369615b0de3" containerName="cinder-scheduler" containerID="cri-o://41a18f0fc6075f60284511c443f35c4a81884696dce53370d396d2ae0cdcef85" gracePeriod=30 Jan 27 09:13:24 crc kubenswrapper[4985]: I0127 09:13:24.637857 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="91ba4f05-fb65-42a0-a26f-c369615b0de3" containerName="probe" containerID="cri-o://f0fa1e8a413021f582016ef30ae3e2b617627ea24db658627809e945fa43e873" gracePeriod=30 Jan 27 09:13:24 crc kubenswrapper[4985]: I0127 09:13:24.669432 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 27 09:13:24 crc kubenswrapper[4985]: I0127 09:13:24.671011 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="6e4b0e68-ecc6-41aa-975a-14094de6ae67" containerName="cinder-api-log" containerID="cri-o://3785fa4c19f792d84bdf68f8b449303cf0a06c585e4df49c6c93d0c1426c9a2f" gracePeriod=30 Jan 27 09:13:24 crc kubenswrapper[4985]: I0127 09:13:24.671394 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="6e4b0e68-ecc6-41aa-975a-14094de6ae67" containerName="cinder-api" containerID="cri-o://f25036f1c12cfbbb71d0c5c09929bd145d1768bc8cb141d760d5d6611b88d36e" gracePeriod=30 Jan 27 09:13:25 crc kubenswrapper[4985]: I0127 09:13:25.114718 4985 generic.go:334] "Generic (PLEG): container finished" podID="6e4b0e68-ecc6-41aa-975a-14094de6ae67" 
containerID="3785fa4c19f792d84bdf68f8b449303cf0a06c585e4df49c6c93d0c1426c9a2f" exitCode=143 Jan 27 09:13:25 crc kubenswrapper[4985]: I0127 09:13:25.114786 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6e4b0e68-ecc6-41aa-975a-14094de6ae67","Type":"ContainerDied","Data":"3785fa4c19f792d84bdf68f8b449303cf0a06c585e4df49c6c93d0c1426c9a2f"} Jan 27 09:13:26 crc kubenswrapper[4985]: I0127 09:13:26.127146 4985 generic.go:334] "Generic (PLEG): container finished" podID="91ba4f05-fb65-42a0-a26f-c369615b0de3" containerID="f0fa1e8a413021f582016ef30ae3e2b617627ea24db658627809e945fa43e873" exitCode=0 Jan 27 09:13:26 crc kubenswrapper[4985]: I0127 09:13:26.127195 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"91ba4f05-fb65-42a0-a26f-c369615b0de3","Type":"ContainerDied","Data":"f0fa1e8a413021f582016ef30ae3e2b617627ea24db658627809e945fa43e873"} Jan 27 09:13:27 crc kubenswrapper[4985]: I0127 09:13:27.180462 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-xqrpv"] Jan 27 09:13:27 crc kubenswrapper[4985]: I0127 09:13:27.182985 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-xqrpv" Jan 27 09:13:27 crc kubenswrapper[4985]: I0127 09:13:27.198342 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-xqrpv"] Jan 27 09:13:27 crc kubenswrapper[4985]: I0127 09:13:27.314253 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngwfz\" (UniqueName: \"kubernetes.io/projected/1de7d9fd-8ea7-4a62-8325-627343d4c2b3-kube-api-access-ngwfz\") pod \"nova-api-db-create-xqrpv\" (UID: \"1de7d9fd-8ea7-4a62-8325-627343d4c2b3\") " pod="openstack/nova-api-db-create-xqrpv" Jan 27 09:13:27 crc kubenswrapper[4985]: I0127 09:13:27.314293 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1de7d9fd-8ea7-4a62-8325-627343d4c2b3-operator-scripts\") pod \"nova-api-db-create-xqrpv\" (UID: \"1de7d9fd-8ea7-4a62-8325-627343d4c2b3\") " pod="openstack/nova-api-db-create-xqrpv" Jan 27 09:13:27 crc kubenswrapper[4985]: I0127 09:13:27.319103 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7645cd55cc-6b9mt" Jan 27 09:13:27 crc kubenswrapper[4985]: I0127 09:13:27.326532 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-l8v4l"] Jan 27 09:13:27 crc kubenswrapper[4985]: I0127 09:13:27.328159 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-l8v4l" Jan 27 09:13:27 crc kubenswrapper[4985]: I0127 09:13:27.348437 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-9c44-account-create-update-hwhwd"] Jan 27 09:13:27 crc kubenswrapper[4985]: I0127 09:13:27.349976 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-9c44-account-create-update-hwhwd" Jan 27 09:13:27 crc kubenswrapper[4985]: I0127 09:13:27.356534 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 27 09:13:27 crc kubenswrapper[4985]: I0127 09:13:27.402605 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-l8v4l"] Jan 27 09:13:27 crc kubenswrapper[4985]: I0127 09:13:27.416114 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngwfz\" (UniqueName: \"kubernetes.io/projected/1de7d9fd-8ea7-4a62-8325-627343d4c2b3-kube-api-access-ngwfz\") pod \"nova-api-db-create-xqrpv\" (UID: \"1de7d9fd-8ea7-4a62-8325-627343d4c2b3\") " pod="openstack/nova-api-db-create-xqrpv" Jan 27 09:13:27 crc kubenswrapper[4985]: I0127 09:13:27.416178 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1de7d9fd-8ea7-4a62-8325-627343d4c2b3-operator-scripts\") pod \"nova-api-db-create-xqrpv\" (UID: \"1de7d9fd-8ea7-4a62-8325-627343d4c2b3\") " pod="openstack/nova-api-db-create-xqrpv" Jan 27 09:13:27 crc kubenswrapper[4985]: I0127 09:13:27.416718 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c906eea-e955-4881-8244-4f4cd7b84bf0-operator-scripts\") pod \"nova-cell0-db-create-l8v4l\" (UID: \"1c906eea-e955-4881-8244-4f4cd7b84bf0\") " pod="openstack/nova-cell0-db-create-l8v4l" Jan 27 09:13:27 crc kubenswrapper[4985]: I0127 09:13:27.416851 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fdc13ae9-0535-44cc-832d-4c22f662cfc7-operator-scripts\") pod \"nova-api-9c44-account-create-update-hwhwd\" (UID: \"fdc13ae9-0535-44cc-832d-4c22f662cfc7\") " 
pod="openstack/nova-api-9c44-account-create-update-hwhwd" Jan 27 09:13:27 crc kubenswrapper[4985]: I0127 09:13:27.417095 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lj6wz\" (UniqueName: \"kubernetes.io/projected/fdc13ae9-0535-44cc-832d-4c22f662cfc7-kube-api-access-lj6wz\") pod \"nova-api-9c44-account-create-update-hwhwd\" (UID: \"fdc13ae9-0535-44cc-832d-4c22f662cfc7\") " pod="openstack/nova-api-9c44-account-create-update-hwhwd" Jan 27 09:13:27 crc kubenswrapper[4985]: I0127 09:13:27.417198 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqrjg\" (UniqueName: \"kubernetes.io/projected/1c906eea-e955-4881-8244-4f4cd7b84bf0-kube-api-access-vqrjg\") pod \"nova-cell0-db-create-l8v4l\" (UID: \"1c906eea-e955-4881-8244-4f4cd7b84bf0\") " pod="openstack/nova-cell0-db-create-l8v4l" Jan 27 09:13:27 crc kubenswrapper[4985]: I0127 09:13:27.418140 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1de7d9fd-8ea7-4a62-8325-627343d4c2b3-operator-scripts\") pod \"nova-api-db-create-xqrpv\" (UID: \"1de7d9fd-8ea7-4a62-8325-627343d4c2b3\") " pod="openstack/nova-api-db-create-xqrpv" Jan 27 09:13:27 crc kubenswrapper[4985]: I0127 09:13:27.419676 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-9c44-account-create-update-hwhwd"] Jan 27 09:13:27 crc kubenswrapper[4985]: I0127 09:13:27.449068 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-85bdc684db-7q85p"] Jan 27 09:13:27 crc kubenswrapper[4985]: I0127 09:13:27.450101 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-85bdc684db-7q85p" podUID="6e5ea4de-6280-4b44-9dfc-e27da3483c4f" containerName="neutron-api" containerID="cri-o://d4c75075010687e0dcce9874d61f77c496648b9a79868e5303597d3792b9a8a4" gracePeriod=30 Jan 27 
09:13:27 crc kubenswrapper[4985]: I0127 09:13:27.450811 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-85bdc684db-7q85p" podUID="6e5ea4de-6280-4b44-9dfc-e27da3483c4f" containerName="neutron-httpd" containerID="cri-o://c8a7377bea9823b710e4fa053c3d56b96c5efbf32603a2894d0ecaaa3a7ea381" gracePeriod=30 Jan 27 09:13:27 crc kubenswrapper[4985]: I0127 09:13:27.460071 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngwfz\" (UniqueName: \"kubernetes.io/projected/1de7d9fd-8ea7-4a62-8325-627343d4c2b3-kube-api-access-ngwfz\") pod \"nova-api-db-create-xqrpv\" (UID: \"1de7d9fd-8ea7-4a62-8325-627343d4c2b3\") " pod="openstack/nova-api-db-create-xqrpv" Jan 27 09:13:27 crc kubenswrapper[4985]: I0127 09:13:27.515286 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-1835-account-create-update-m5p5l"] Jan 27 09:13:27 crc kubenswrapper[4985]: I0127 09:13:27.516837 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-1835-account-create-update-m5p5l" Jan 27 09:13:27 crc kubenswrapper[4985]: I0127 09:13:27.518642 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fdc13ae9-0535-44cc-832d-4c22f662cfc7-operator-scripts\") pod \"nova-api-9c44-account-create-update-hwhwd\" (UID: \"fdc13ae9-0535-44cc-832d-4c22f662cfc7\") " pod="openstack/nova-api-9c44-account-create-update-hwhwd" Jan 27 09:13:27 crc kubenswrapper[4985]: I0127 09:13:27.518738 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lj6wz\" (UniqueName: \"kubernetes.io/projected/fdc13ae9-0535-44cc-832d-4c22f662cfc7-kube-api-access-lj6wz\") pod \"nova-api-9c44-account-create-update-hwhwd\" (UID: \"fdc13ae9-0535-44cc-832d-4c22f662cfc7\") " pod="openstack/nova-api-9c44-account-create-update-hwhwd" Jan 27 09:13:27 crc kubenswrapper[4985]: I0127 09:13:27.518785 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqrjg\" (UniqueName: \"kubernetes.io/projected/1c906eea-e955-4881-8244-4f4cd7b84bf0-kube-api-access-vqrjg\") pod \"nova-cell0-db-create-l8v4l\" (UID: \"1c906eea-e955-4881-8244-4f4cd7b84bf0\") " pod="openstack/nova-cell0-db-create-l8v4l" Jan 27 09:13:27 crc kubenswrapper[4985]: I0127 09:13:27.518857 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c906eea-e955-4881-8244-4f4cd7b84bf0-operator-scripts\") pod \"nova-cell0-db-create-l8v4l\" (UID: \"1c906eea-e955-4881-8244-4f4cd7b84bf0\") " pod="openstack/nova-cell0-db-create-l8v4l" Jan 27 09:13:27 crc kubenswrapper[4985]: I0127 09:13:27.519672 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c906eea-e955-4881-8244-4f4cd7b84bf0-operator-scripts\") pod 
\"nova-cell0-db-create-l8v4l\" (UID: \"1c906eea-e955-4881-8244-4f4cd7b84bf0\") " pod="openstack/nova-cell0-db-create-l8v4l" Jan 27 09:13:27 crc kubenswrapper[4985]: I0127 09:13:27.520165 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fdc13ae9-0535-44cc-832d-4c22f662cfc7-operator-scripts\") pod \"nova-api-9c44-account-create-update-hwhwd\" (UID: \"fdc13ae9-0535-44cc-832d-4c22f662cfc7\") " pod="openstack/nova-api-9c44-account-create-update-hwhwd" Jan 27 09:13:27 crc kubenswrapper[4985]: I0127 09:13:27.523105 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 27 09:13:27 crc kubenswrapper[4985]: I0127 09:13:27.529979 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-7cxvr"] Jan 27 09:13:27 crc kubenswrapper[4985]: I0127 09:13:27.538270 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-7cxvr" Jan 27 09:13:27 crc kubenswrapper[4985]: I0127 09:13:27.547025 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-7cxvr"] Jan 27 09:13:27 crc kubenswrapper[4985]: I0127 09:13:27.550062 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lj6wz\" (UniqueName: \"kubernetes.io/projected/fdc13ae9-0535-44cc-832d-4c22f662cfc7-kube-api-access-lj6wz\") pod \"nova-api-9c44-account-create-update-hwhwd\" (UID: \"fdc13ae9-0535-44cc-832d-4c22f662cfc7\") " pod="openstack/nova-api-9c44-account-create-update-hwhwd" Jan 27 09:13:27 crc kubenswrapper[4985]: I0127 09:13:27.561912 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-1835-account-create-update-m5p5l"] Jan 27 09:13:27 crc kubenswrapper[4985]: I0127 09:13:27.564417 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqrjg\" (UniqueName: 
\"kubernetes.io/projected/1c906eea-e955-4881-8244-4f4cd7b84bf0-kube-api-access-vqrjg\") pod \"nova-cell0-db-create-l8v4l\" (UID: \"1c906eea-e955-4881-8244-4f4cd7b84bf0\") " pod="openstack/nova-cell0-db-create-l8v4l" Jan 27 09:13:27 crc kubenswrapper[4985]: I0127 09:13:27.570389 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-xqrpv" Jan 27 09:13:27 crc kubenswrapper[4985]: I0127 09:13:27.621283 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rs22q\" (UniqueName: \"kubernetes.io/projected/49e0a278-f0bb-4b42-8c22-39b1b25c85af-kube-api-access-rs22q\") pod \"nova-cell0-1835-account-create-update-m5p5l\" (UID: \"49e0a278-f0bb-4b42-8c22-39b1b25c85af\") " pod="openstack/nova-cell0-1835-account-create-update-m5p5l" Jan 27 09:13:27 crc kubenswrapper[4985]: I0127 09:13:27.621343 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49e0a278-f0bb-4b42-8c22-39b1b25c85af-operator-scripts\") pod \"nova-cell0-1835-account-create-update-m5p5l\" (UID: \"49e0a278-f0bb-4b42-8c22-39b1b25c85af\") " pod="openstack/nova-cell0-1835-account-create-update-m5p5l" Jan 27 09:13:27 crc kubenswrapper[4985]: I0127 09:13:27.621369 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rvt5\" (UniqueName: \"kubernetes.io/projected/4581f336-5495-48a5-b6a0-d35ea0818a50-kube-api-access-6rvt5\") pod \"nova-cell1-db-create-7cxvr\" (UID: \"4581f336-5495-48a5-b6a0-d35ea0818a50\") " pod="openstack/nova-cell1-db-create-7cxvr" Jan 27 09:13:27 crc kubenswrapper[4985]: I0127 09:13:27.621902 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4581f336-5495-48a5-b6a0-d35ea0818a50-operator-scripts\") pod 
\"nova-cell1-db-create-7cxvr\" (UID: \"4581f336-5495-48a5-b6a0-d35ea0818a50\") " pod="openstack/nova-cell1-db-create-7cxvr" Jan 27 09:13:27 crc kubenswrapper[4985]: I0127 09:13:27.653614 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-l8v4l" Jan 27 09:13:27 crc kubenswrapper[4985]: I0127 09:13:27.697390 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-9c44-account-create-update-hwhwd" Jan 27 09:13:27 crc kubenswrapper[4985]: I0127 09:13:27.706864 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-ce6a-account-create-update-qc72j"] Jan 27 09:13:27 crc kubenswrapper[4985]: I0127 09:13:27.708381 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-ce6a-account-create-update-qc72j" Jan 27 09:13:27 crc kubenswrapper[4985]: I0127 09:13:27.712374 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 27 09:13:27 crc kubenswrapper[4985]: I0127 09:13:27.720057 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-ce6a-account-create-update-qc72j"] Jan 27 09:13:27 crc kubenswrapper[4985]: I0127 09:13:27.723549 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4581f336-5495-48a5-b6a0-d35ea0818a50-operator-scripts\") pod \"nova-cell1-db-create-7cxvr\" (UID: \"4581f336-5495-48a5-b6a0-d35ea0818a50\") " pod="openstack/nova-cell1-db-create-7cxvr" Jan 27 09:13:27 crc kubenswrapper[4985]: I0127 09:13:27.723673 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rs22q\" (UniqueName: \"kubernetes.io/projected/49e0a278-f0bb-4b42-8c22-39b1b25c85af-kube-api-access-rs22q\") pod \"nova-cell0-1835-account-create-update-m5p5l\" (UID: \"49e0a278-f0bb-4b42-8c22-39b1b25c85af\") " 
pod="openstack/nova-cell0-1835-account-create-update-m5p5l" Jan 27 09:13:27 crc kubenswrapper[4985]: I0127 09:13:27.723729 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49e0a278-f0bb-4b42-8c22-39b1b25c85af-operator-scripts\") pod \"nova-cell0-1835-account-create-update-m5p5l\" (UID: \"49e0a278-f0bb-4b42-8c22-39b1b25c85af\") " pod="openstack/nova-cell0-1835-account-create-update-m5p5l" Jan 27 09:13:27 crc kubenswrapper[4985]: I0127 09:13:27.723757 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rvt5\" (UniqueName: \"kubernetes.io/projected/4581f336-5495-48a5-b6a0-d35ea0818a50-kube-api-access-6rvt5\") pod \"nova-cell1-db-create-7cxvr\" (UID: \"4581f336-5495-48a5-b6a0-d35ea0818a50\") " pod="openstack/nova-cell1-db-create-7cxvr" Jan 27 09:13:27 crc kubenswrapper[4985]: I0127 09:13:27.725021 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4581f336-5495-48a5-b6a0-d35ea0818a50-operator-scripts\") pod \"nova-cell1-db-create-7cxvr\" (UID: \"4581f336-5495-48a5-b6a0-d35ea0818a50\") " pod="openstack/nova-cell1-db-create-7cxvr" Jan 27 09:13:27 crc kubenswrapper[4985]: I0127 09:13:27.739929 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49e0a278-f0bb-4b42-8c22-39b1b25c85af-operator-scripts\") pod \"nova-cell0-1835-account-create-update-m5p5l\" (UID: \"49e0a278-f0bb-4b42-8c22-39b1b25c85af\") " pod="openstack/nova-cell0-1835-account-create-update-m5p5l" Jan 27 09:13:27 crc kubenswrapper[4985]: I0127 09:13:27.745609 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rvt5\" (UniqueName: \"kubernetes.io/projected/4581f336-5495-48a5-b6a0-d35ea0818a50-kube-api-access-6rvt5\") pod \"nova-cell1-db-create-7cxvr\" (UID: 
\"4581f336-5495-48a5-b6a0-d35ea0818a50\") " pod="openstack/nova-cell1-db-create-7cxvr" Jan 27 09:13:27 crc kubenswrapper[4985]: I0127 09:13:27.766884 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rs22q\" (UniqueName: \"kubernetes.io/projected/49e0a278-f0bb-4b42-8c22-39b1b25c85af-kube-api-access-rs22q\") pod \"nova-cell0-1835-account-create-update-m5p5l\" (UID: \"49e0a278-f0bb-4b42-8c22-39b1b25c85af\") " pod="openstack/nova-cell0-1835-account-create-update-m5p5l" Jan 27 09:13:27 crc kubenswrapper[4985]: I0127 09:13:27.825247 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a211599-7c44-4939-8141-69dda5389ca7-operator-scripts\") pod \"nova-cell1-ce6a-account-create-update-qc72j\" (UID: \"4a211599-7c44-4939-8141-69dda5389ca7\") " pod="openstack/nova-cell1-ce6a-account-create-update-qc72j" Jan 27 09:13:27 crc kubenswrapper[4985]: I0127 09:13:27.825299 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jw298\" (UniqueName: \"kubernetes.io/projected/4a211599-7c44-4939-8141-69dda5389ca7-kube-api-access-jw298\") pod \"nova-cell1-ce6a-account-create-update-qc72j\" (UID: \"4a211599-7c44-4939-8141-69dda5389ca7\") " pod="openstack/nova-cell1-ce6a-account-create-update-qc72j" Jan 27 09:13:27 crc kubenswrapper[4985]: I0127 09:13:27.859915 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="6e4b0e68-ecc6-41aa-975a-14094de6ae67" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.168:8776/healthcheck\": read tcp 10.217.0.2:37018->10.217.0.168:8776: read: connection reset by peer" Jan 27 09:13:27 crc kubenswrapper[4985]: I0127 09:13:27.928176 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/4a211599-7c44-4939-8141-69dda5389ca7-operator-scripts\") pod \"nova-cell1-ce6a-account-create-update-qc72j\" (UID: \"4a211599-7c44-4939-8141-69dda5389ca7\") " pod="openstack/nova-cell1-ce6a-account-create-update-qc72j" Jan 27 09:13:27 crc kubenswrapper[4985]: I0127 09:13:27.928251 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jw298\" (UniqueName: \"kubernetes.io/projected/4a211599-7c44-4939-8141-69dda5389ca7-kube-api-access-jw298\") pod \"nova-cell1-ce6a-account-create-update-qc72j\" (UID: \"4a211599-7c44-4939-8141-69dda5389ca7\") " pod="openstack/nova-cell1-ce6a-account-create-update-qc72j" Jan 27 09:13:27 crc kubenswrapper[4985]: I0127 09:13:27.929800 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a211599-7c44-4939-8141-69dda5389ca7-operator-scripts\") pod \"nova-cell1-ce6a-account-create-update-qc72j\" (UID: \"4a211599-7c44-4939-8141-69dda5389ca7\") " pod="openstack/nova-cell1-ce6a-account-create-update-qc72j" Jan 27 09:13:27 crc kubenswrapper[4985]: I0127 09:13:27.933829 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-1835-account-create-update-m5p5l" Jan 27 09:13:27 crc kubenswrapper[4985]: I0127 09:13:27.944334 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-7cxvr" Jan 27 09:13:27 crc kubenswrapper[4985]: I0127 09:13:27.952425 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jw298\" (UniqueName: \"kubernetes.io/projected/4a211599-7c44-4939-8141-69dda5389ca7-kube-api-access-jw298\") pod \"nova-cell1-ce6a-account-create-update-qc72j\" (UID: \"4a211599-7c44-4939-8141-69dda5389ca7\") " pod="openstack/nova-cell1-ce6a-account-create-update-qc72j" Jan 27 09:13:28 crc kubenswrapper[4985]: I0127 09:13:28.038362 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-ce6a-account-create-update-qc72j" Jan 27 09:13:28 crc kubenswrapper[4985]: I0127 09:13:28.155704 4985 generic.go:334] "Generic (PLEG): container finished" podID="6e4b0e68-ecc6-41aa-975a-14094de6ae67" containerID="f25036f1c12cfbbb71d0c5c09929bd145d1768bc8cb141d760d5d6611b88d36e" exitCode=0 Jan 27 09:13:28 crc kubenswrapper[4985]: I0127 09:13:28.155831 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6e4b0e68-ecc6-41aa-975a-14094de6ae67","Type":"ContainerDied","Data":"f25036f1c12cfbbb71d0c5c09929bd145d1768bc8cb141d760d5d6611b88d36e"} Jan 27 09:13:28 crc kubenswrapper[4985]: I0127 09:13:28.159078 4985 generic.go:334] "Generic (PLEG): container finished" podID="6e5ea4de-6280-4b44-9dfc-e27da3483c4f" containerID="c8a7377bea9823b710e4fa053c3d56b96c5efbf32603a2894d0ecaaa3a7ea381" exitCode=0 Jan 27 09:13:28 crc kubenswrapper[4985]: I0127 09:13:28.159137 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-85bdc684db-7q85p" event={"ID":"6e5ea4de-6280-4b44-9dfc-e27da3483c4f","Type":"ContainerDied","Data":"c8a7377bea9823b710e4fa053c3d56b96c5efbf32603a2894d0ecaaa3a7ea381"} Jan 27 09:13:29 crc kubenswrapper[4985]: I0127 09:13:29.271116 4985 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" 
cgroupName=["kubepods","besteffort","pod61437724-d73d-4fe5-afbc-b4994d1eda63"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod61437724-d73d-4fe5-afbc-b4994d1eda63] : Timed out while waiting for systemd to remove kubepods-besteffort-pod61437724_d73d_4fe5_afbc_b4994d1eda63.slice" Jan 27 09:13:29 crc kubenswrapper[4985]: E0127 09:13:29.272047 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort pod61437724-d73d-4fe5-afbc-b4994d1eda63] : unable to destroy cgroup paths for cgroup [kubepods besteffort pod61437724-d73d-4fe5-afbc-b4994d1eda63] : Timed out while waiting for systemd to remove kubepods-besteffort-pod61437724_d73d_4fe5_afbc_b4994d1eda63.slice" pod="openstack/ceilometer-0" podUID="61437724-d73d-4fe5-afbc-b4994d1eda63" Jan 27 09:13:29 crc kubenswrapper[4985]: I0127 09:13:29.331561 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 27 09:13:29 crc kubenswrapper[4985]: I0127 09:13:29.466721 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sntkb\" (UniqueName: \"kubernetes.io/projected/6e4b0e68-ecc6-41aa-975a-14094de6ae67-kube-api-access-sntkb\") pod \"6e4b0e68-ecc6-41aa-975a-14094de6ae67\" (UID: \"6e4b0e68-ecc6-41aa-975a-14094de6ae67\") " Jan 27 09:13:29 crc kubenswrapper[4985]: I0127 09:13:29.466779 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e4b0e68-ecc6-41aa-975a-14094de6ae67-logs\") pod \"6e4b0e68-ecc6-41aa-975a-14094de6ae67\" (UID: \"6e4b0e68-ecc6-41aa-975a-14094de6ae67\") " Jan 27 09:13:29 crc kubenswrapper[4985]: I0127 09:13:29.466829 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e4b0e68-ecc6-41aa-975a-14094de6ae67-internal-tls-certs\") pod 
\"6e4b0e68-ecc6-41aa-975a-14094de6ae67\" (UID: \"6e4b0e68-ecc6-41aa-975a-14094de6ae67\") " Jan 27 09:13:29 crc kubenswrapper[4985]: I0127 09:13:29.466904 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6e4b0e68-ecc6-41aa-975a-14094de6ae67-config-data-custom\") pod \"6e4b0e68-ecc6-41aa-975a-14094de6ae67\" (UID: \"6e4b0e68-ecc6-41aa-975a-14094de6ae67\") " Jan 27 09:13:29 crc kubenswrapper[4985]: I0127 09:13:29.466931 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e4b0e68-ecc6-41aa-975a-14094de6ae67-config-data\") pod \"6e4b0e68-ecc6-41aa-975a-14094de6ae67\" (UID: \"6e4b0e68-ecc6-41aa-975a-14094de6ae67\") " Jan 27 09:13:29 crc kubenswrapper[4985]: I0127 09:13:29.466952 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e4b0e68-ecc6-41aa-975a-14094de6ae67-combined-ca-bundle\") pod \"6e4b0e68-ecc6-41aa-975a-14094de6ae67\" (UID: \"6e4b0e68-ecc6-41aa-975a-14094de6ae67\") " Jan 27 09:13:29 crc kubenswrapper[4985]: I0127 09:13:29.467069 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6e4b0e68-ecc6-41aa-975a-14094de6ae67-etc-machine-id\") pod \"6e4b0e68-ecc6-41aa-975a-14094de6ae67\" (UID: \"6e4b0e68-ecc6-41aa-975a-14094de6ae67\") " Jan 27 09:13:29 crc kubenswrapper[4985]: I0127 09:13:29.467104 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e4b0e68-ecc6-41aa-975a-14094de6ae67-scripts\") pod \"6e4b0e68-ecc6-41aa-975a-14094de6ae67\" (UID: \"6e4b0e68-ecc6-41aa-975a-14094de6ae67\") " Jan 27 09:13:29 crc kubenswrapper[4985]: I0127 09:13:29.467146 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e4b0e68-ecc6-41aa-975a-14094de6ae67-public-tls-certs\") pod \"6e4b0e68-ecc6-41aa-975a-14094de6ae67\" (UID: \"6e4b0e68-ecc6-41aa-975a-14094de6ae67\") " Jan 27 09:13:29 crc kubenswrapper[4985]: I0127 09:13:29.468687 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e4b0e68-ecc6-41aa-975a-14094de6ae67-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "6e4b0e68-ecc6-41aa-975a-14094de6ae67" (UID: "6e4b0e68-ecc6-41aa-975a-14094de6ae67"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 09:13:29 crc kubenswrapper[4985]: I0127 09:13:29.469483 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e4b0e68-ecc6-41aa-975a-14094de6ae67-logs" (OuterVolumeSpecName: "logs") pod "6e4b0e68-ecc6-41aa-975a-14094de6ae67" (UID: "6e4b0e68-ecc6-41aa-975a-14094de6ae67"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:13:29 crc kubenswrapper[4985]: I0127 09:13:29.480957 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e4b0e68-ecc6-41aa-975a-14094de6ae67-scripts" (OuterVolumeSpecName: "scripts") pod "6e4b0e68-ecc6-41aa-975a-14094de6ae67" (UID: "6e4b0e68-ecc6-41aa-975a-14094de6ae67"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:13:29 crc kubenswrapper[4985]: I0127 09:13:29.480969 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e4b0e68-ecc6-41aa-975a-14094de6ae67-kube-api-access-sntkb" (OuterVolumeSpecName: "kube-api-access-sntkb") pod "6e4b0e68-ecc6-41aa-975a-14094de6ae67" (UID: "6e4b0e68-ecc6-41aa-975a-14094de6ae67"). InnerVolumeSpecName "kube-api-access-sntkb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:13:29 crc kubenswrapper[4985]: I0127 09:13:29.509728 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e4b0e68-ecc6-41aa-975a-14094de6ae67-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "6e4b0e68-ecc6-41aa-975a-14094de6ae67" (UID: "6e4b0e68-ecc6-41aa-975a-14094de6ae67"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:13:29 crc kubenswrapper[4985]: I0127 09:13:29.568140 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e4b0e68-ecc6-41aa-975a-14094de6ae67-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6e4b0e68-ecc6-41aa-975a-14094de6ae67" (UID: "6e4b0e68-ecc6-41aa-975a-14094de6ae67"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:13:29 crc kubenswrapper[4985]: I0127 09:13:29.570132 4985 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6e4b0e68-ecc6-41aa-975a-14094de6ae67-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:29 crc kubenswrapper[4985]: I0127 09:13:29.570162 4985 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e4b0e68-ecc6-41aa-975a-14094de6ae67-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:29 crc kubenswrapper[4985]: I0127 09:13:29.570177 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sntkb\" (UniqueName: \"kubernetes.io/projected/6e4b0e68-ecc6-41aa-975a-14094de6ae67-kube-api-access-sntkb\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:29 crc kubenswrapper[4985]: I0127 09:13:29.570189 4985 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e4b0e68-ecc6-41aa-975a-14094de6ae67-logs\") on node \"crc\" DevicePath 
\"\"" Jan 27 09:13:29 crc kubenswrapper[4985]: I0127 09:13:29.570200 4985 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6e4b0e68-ecc6-41aa-975a-14094de6ae67-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:29 crc kubenswrapper[4985]: I0127 09:13:29.570208 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e4b0e68-ecc6-41aa-975a-14094de6ae67-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:29 crc kubenswrapper[4985]: I0127 09:13:29.583823 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e4b0e68-ecc6-41aa-975a-14094de6ae67-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "6e4b0e68-ecc6-41aa-975a-14094de6ae67" (UID: "6e4b0e68-ecc6-41aa-975a-14094de6ae67"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:13:29 crc kubenswrapper[4985]: I0127 09:13:29.651764 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e4b0e68-ecc6-41aa-975a-14094de6ae67-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "6e4b0e68-ecc6-41aa-975a-14094de6ae67" (UID: "6e4b0e68-ecc6-41aa-975a-14094de6ae67"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:13:29 crc kubenswrapper[4985]: I0127 09:13:29.692767 4985 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e4b0e68-ecc6-41aa-975a-14094de6ae67-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:29 crc kubenswrapper[4985]: I0127 09:13:29.692800 4985 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e4b0e68-ecc6-41aa-975a-14094de6ae67-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:29 crc kubenswrapper[4985]: I0127 09:13:29.691838 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e4b0e68-ecc6-41aa-975a-14094de6ae67-config-data" (OuterVolumeSpecName: "config-data") pod "6e4b0e68-ecc6-41aa-975a-14094de6ae67" (UID: "6e4b0e68-ecc6-41aa-975a-14094de6ae67"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:13:29 crc kubenswrapper[4985]: I0127 09:13:29.795730 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e4b0e68-ecc6-41aa-975a-14094de6ae67-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:29 crc kubenswrapper[4985]: I0127 09:13:29.909163 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-xqrpv"] Jan 27 09:13:30 crc kubenswrapper[4985]: I0127 09:13:30.112379 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-7cxvr"] Jan 27 09:13:30 crc kubenswrapper[4985]: I0127 09:13:30.128021 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-ce6a-account-create-update-qc72j"] Jan 27 09:13:30 crc kubenswrapper[4985]: W0127 09:13:30.137497 4985 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4581f336_5495_48a5_b6a0_d35ea0818a50.slice/crio-5eedde42b9d58a18a8d3c9840735343240cc2170f7d0b065ad5bd7d53de9016f WatchSource:0}: Error finding container 5eedde42b9d58a18a8d3c9840735343240cc2170f7d0b065ad5bd7d53de9016f: Status 404 returned error can't find the container with id 5eedde42b9d58a18a8d3c9840735343240cc2170f7d0b065ad5bd7d53de9016f Jan 27 09:13:30 crc kubenswrapper[4985]: I0127 09:13:30.141345 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-9c44-account-create-update-hwhwd"] Jan 27 09:13:30 crc kubenswrapper[4985]: I0127 09:13:30.196707 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-xqrpv" event={"ID":"1de7d9fd-8ea7-4a62-8325-627343d4c2b3","Type":"ContainerStarted","Data":"4e61b3db466dca5100f51e9355a8b0bfb35944f50df01d9865f2dfd2b3833268"} Jan 27 09:13:30 crc kubenswrapper[4985]: I0127 09:13:30.203872 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"1a110a4f-4669-42cb-9a7a-acb80ad9c3e2","Type":"ContainerStarted","Data":"f54b42c73ada27e6e61fa8fde95974a6df7ec359a9b42d195331d304ebf0f51d"} Jan 27 09:13:30 crc kubenswrapper[4985]: I0127 09:13:30.216546 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-9c44-account-create-update-hwhwd" event={"ID":"fdc13ae9-0535-44cc-832d-4c22f662cfc7","Type":"ContainerStarted","Data":"e6c231f5620ab30501d9b71078d715837105df6bca888549e5729e5b27c66fff"} Jan 27 09:13:30 crc kubenswrapper[4985]: I0127 09:13:30.228928 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-7cxvr" event={"ID":"4581f336-5495-48a5-b6a0-d35ea0818a50","Type":"ContainerStarted","Data":"5eedde42b9d58a18a8d3c9840735343240cc2170f7d0b065ad5bd7d53de9016f"} Jan 27 09:13:30 crc kubenswrapper[4985]: I0127 09:13:30.240034 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-ce6a-account-create-update-qc72j" event={"ID":"4a211599-7c44-4939-8141-69dda5389ca7","Type":"ContainerStarted","Data":"7e1f0c455319443d0a796ba3e5118d8620a686cc58462bda773e5adcf7578dab"} Jan 27 09:13:30 crc kubenswrapper[4985]: I0127 09:13:30.272344 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 09:13:30 crc kubenswrapper[4985]: I0127 09:13:30.274020 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 27 09:13:30 crc kubenswrapper[4985]: I0127 09:13:30.274099 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6e4b0e68-ecc6-41aa-975a-14094de6ae67","Type":"ContainerDied","Data":"5f5d3b8ec8f5cd5145026a0fdaa1863f7a611d61e5d88eeb6e65faf97d495136"} Jan 27 09:13:30 crc kubenswrapper[4985]: I0127 09:13:30.274348 4985 scope.go:117] "RemoveContainer" containerID="f25036f1c12cfbbb71d0c5c09929bd145d1768bc8cb141d760d5d6611b88d36e" Jan 27 09:13:30 crc kubenswrapper[4985]: I0127 09:13:30.277747 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.264141333 podStartE2EDuration="13.277716958s" podCreationTimestamp="2026-01-27 09:13:17 +0000 UTC" firstStartedPulling="2026-01-27 09:13:18.10013856 +0000 UTC m=+1182.391233391" lastFinishedPulling="2026-01-27 09:13:29.113714185 +0000 UTC m=+1193.404809016" observedRunningTime="2026-01-27 09:13:30.225876236 +0000 UTC m=+1194.516971077" watchObservedRunningTime="2026-01-27 09:13:30.277716958 +0000 UTC m=+1194.568811799" Jan 27 09:13:30 crc kubenswrapper[4985]: I0127 09:13:30.281588 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-l8v4l"] Jan 27 09:13:30 crc kubenswrapper[4985]: I0127 09:13:30.306550 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-1835-account-create-update-m5p5l"] Jan 27 
09:13:30 crc kubenswrapper[4985]: I0127 09:13:30.364328 4985 scope.go:117] "RemoveContainer" containerID="3785fa4c19f792d84bdf68f8b449303cf0a06c585e4df49c6c93d0c1426c9a2f" Jan 27 09:13:30 crc kubenswrapper[4985]: I0127 09:13:30.422069 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 09:13:30 crc kubenswrapper[4985]: I0127 09:13:30.446122 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 09:13:30 crc kubenswrapper[4985]: I0127 09:13:30.562662 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61437724-d73d-4fe5-afbc-b4994d1eda63" path="/var/lib/kubelet/pods/61437724-d73d-4fe5-afbc-b4994d1eda63/volumes" Jan 27 09:13:30 crc kubenswrapper[4985]: I0127 09:13:30.563690 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 27 09:13:30 crc kubenswrapper[4985]: I0127 09:13:30.563730 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 27 09:13:30 crc kubenswrapper[4985]: I0127 09:13:30.563750 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 09:13:30 crc kubenswrapper[4985]: E0127 09:13:30.564137 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e4b0e68-ecc6-41aa-975a-14094de6ae67" containerName="cinder-api" Jan 27 09:13:30 crc kubenswrapper[4985]: I0127 09:13:30.564158 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e4b0e68-ecc6-41aa-975a-14094de6ae67" containerName="cinder-api" Jan 27 09:13:30 crc kubenswrapper[4985]: E0127 09:13:30.564170 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e4b0e68-ecc6-41aa-975a-14094de6ae67" containerName="cinder-api-log" Jan 27 09:13:30 crc kubenswrapper[4985]: I0127 09:13:30.564178 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e4b0e68-ecc6-41aa-975a-14094de6ae67" containerName="cinder-api-log" Jan 27 09:13:30 crc kubenswrapper[4985]: I0127 
09:13:30.564395 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e4b0e68-ecc6-41aa-975a-14094de6ae67" containerName="cinder-api-log" Jan 27 09:13:30 crc kubenswrapper[4985]: I0127 09:13:30.564417 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e4b0e68-ecc6-41aa-975a-14094de6ae67" containerName="cinder-api" Jan 27 09:13:30 crc kubenswrapper[4985]: I0127 09:13:30.566726 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 09:13:30 crc kubenswrapper[4985]: I0127 09:13:30.566753 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 27 09:13:30 crc kubenswrapper[4985]: I0127 09:13:30.567078 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 09:13:30 crc kubenswrapper[4985]: I0127 09:13:30.567945 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 27 09:13:30 crc kubenswrapper[4985]: I0127 09:13:30.567849 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 27 09:13:30 crc kubenswrapper[4985]: I0127 09:13:30.568818 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-689489568f-6ggjw"] Jan 27 09:13:30 crc kubenswrapper[4985]: I0127 09:13:30.575991 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 27 09:13:30 crc kubenswrapper[4985]: I0127 09:13:30.577677 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 09:13:30 crc kubenswrapper[4985]: I0127 09:13:30.577746 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 27 09:13:30 crc kubenswrapper[4985]: I0127 09:13:30.579209 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 09:13:30 crc 
kubenswrapper[4985]: I0127 09:13:30.582367 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 27 09:13:30 crc kubenswrapper[4985]: I0127 09:13:30.627756 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8bda802-a296-4c69-b1ee-07a238912c81-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"b8bda802-a296-4c69-b1ee-07a238912c81\") " pod="openstack/cinder-api-0" Jan 27 09:13:30 crc kubenswrapper[4985]: I0127 09:13:30.627823 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/564992d4-5b88-4124-9cfa-8ee67386599d-scripts\") pod \"ceilometer-0\" (UID: \"564992d4-5b88-4124-9cfa-8ee67386599d\") " pod="openstack/ceilometer-0" Jan 27 09:13:30 crc kubenswrapper[4985]: I0127 09:13:30.627855 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/564992d4-5b88-4124-9cfa-8ee67386599d-log-httpd\") pod \"ceilometer-0\" (UID: \"564992d4-5b88-4124-9cfa-8ee67386599d\") " pod="openstack/ceilometer-0" Jan 27 09:13:30 crc kubenswrapper[4985]: I0127 09:13:30.627902 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8bda802-a296-4c69-b1ee-07a238912c81-config-data\") pod \"cinder-api-0\" (UID: \"b8bda802-a296-4c69-b1ee-07a238912c81\") " pod="openstack/cinder-api-0" Jan 27 09:13:30 crc kubenswrapper[4985]: I0127 09:13:30.627924 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8bda802-a296-4c69-b1ee-07a238912c81-scripts\") pod \"cinder-api-0\" (UID: \"b8bda802-a296-4c69-b1ee-07a238912c81\") " pod="openstack/cinder-api-0" Jan 27 09:13:30 crc 
kubenswrapper[4985]: I0127 09:13:30.627967 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b8bda802-a296-4c69-b1ee-07a238912c81-public-tls-certs\") pod \"cinder-api-0\" (UID: \"b8bda802-a296-4c69-b1ee-07a238912c81\") " pod="openstack/cinder-api-0" Jan 27 09:13:30 crc kubenswrapper[4985]: I0127 09:13:30.628009 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s825z\" (UniqueName: \"kubernetes.io/projected/564992d4-5b88-4124-9cfa-8ee67386599d-kube-api-access-s825z\") pod \"ceilometer-0\" (UID: \"564992d4-5b88-4124-9cfa-8ee67386599d\") " pod="openstack/ceilometer-0" Jan 27 09:13:30 crc kubenswrapper[4985]: I0127 09:13:30.628033 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/564992d4-5b88-4124-9cfa-8ee67386599d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"564992d4-5b88-4124-9cfa-8ee67386599d\") " pod="openstack/ceilometer-0" Jan 27 09:13:30 crc kubenswrapper[4985]: I0127 09:13:30.628087 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkfc8\" (UniqueName: \"kubernetes.io/projected/b8bda802-a296-4c69-b1ee-07a238912c81-kube-api-access-xkfc8\") pod \"cinder-api-0\" (UID: \"b8bda802-a296-4c69-b1ee-07a238912c81\") " pod="openstack/cinder-api-0" Jan 27 09:13:30 crc kubenswrapper[4985]: I0127 09:13:30.628116 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b8bda802-a296-4c69-b1ee-07a238912c81-etc-machine-id\") pod \"cinder-api-0\" (UID: \"b8bda802-a296-4c69-b1ee-07a238912c81\") " pod="openstack/cinder-api-0" Jan 27 09:13:30 crc kubenswrapper[4985]: I0127 09:13:30.628137 4985 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/564992d4-5b88-4124-9cfa-8ee67386599d-run-httpd\") pod \"ceilometer-0\" (UID: \"564992d4-5b88-4124-9cfa-8ee67386599d\") " pod="openstack/ceilometer-0" Jan 27 09:13:30 crc kubenswrapper[4985]: I0127 09:13:30.628163 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/564992d4-5b88-4124-9cfa-8ee67386599d-config-data\") pod \"ceilometer-0\" (UID: \"564992d4-5b88-4124-9cfa-8ee67386599d\") " pod="openstack/ceilometer-0" Jan 27 09:13:30 crc kubenswrapper[4985]: I0127 09:13:30.628183 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8bda802-a296-4c69-b1ee-07a238912c81-logs\") pod \"cinder-api-0\" (UID: \"b8bda802-a296-4c69-b1ee-07a238912c81\") " pod="openstack/cinder-api-0" Jan 27 09:13:30 crc kubenswrapper[4985]: I0127 09:13:30.628215 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b8bda802-a296-4c69-b1ee-07a238912c81-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"b8bda802-a296-4c69-b1ee-07a238912c81\") " pod="openstack/cinder-api-0" Jan 27 09:13:30 crc kubenswrapper[4985]: I0127 09:13:30.628244 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b8bda802-a296-4c69-b1ee-07a238912c81-config-data-custom\") pod \"cinder-api-0\" (UID: \"b8bda802-a296-4c69-b1ee-07a238912c81\") " pod="openstack/cinder-api-0" Jan 27 09:13:30 crc kubenswrapper[4985]: I0127 09:13:30.628465 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/564992d4-5b88-4124-9cfa-8ee67386599d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"564992d4-5b88-4124-9cfa-8ee67386599d\") " pod="openstack/ceilometer-0" Jan 27 09:13:30 crc kubenswrapper[4985]: I0127 09:13:30.741749 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/564992d4-5b88-4124-9cfa-8ee67386599d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"564992d4-5b88-4124-9cfa-8ee67386599d\") " pod="openstack/ceilometer-0" Jan 27 09:13:30 crc kubenswrapper[4985]: I0127 09:13:30.743831 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8bda802-a296-4c69-b1ee-07a238912c81-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"b8bda802-a296-4c69-b1ee-07a238912c81\") " pod="openstack/cinder-api-0" Jan 27 09:13:30 crc kubenswrapper[4985]: I0127 09:13:30.744194 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/564992d4-5b88-4124-9cfa-8ee67386599d-scripts\") pod \"ceilometer-0\" (UID: \"564992d4-5b88-4124-9cfa-8ee67386599d\") " pod="openstack/ceilometer-0" Jan 27 09:13:30 crc kubenswrapper[4985]: I0127 09:13:30.748993 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/564992d4-5b88-4124-9cfa-8ee67386599d-log-httpd\") pod \"ceilometer-0\" (UID: \"564992d4-5b88-4124-9cfa-8ee67386599d\") " pod="openstack/ceilometer-0" Jan 27 09:13:30 crc kubenswrapper[4985]: I0127 09:13:30.749051 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8bda802-a296-4c69-b1ee-07a238912c81-config-data\") pod \"cinder-api-0\" (UID: \"b8bda802-a296-4c69-b1ee-07a238912c81\") " pod="openstack/cinder-api-0" Jan 27 09:13:30 crc kubenswrapper[4985]: I0127 
09:13:30.749085 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8bda802-a296-4c69-b1ee-07a238912c81-scripts\") pod \"cinder-api-0\" (UID: \"b8bda802-a296-4c69-b1ee-07a238912c81\") " pod="openstack/cinder-api-0" Jan 27 09:13:30 crc kubenswrapper[4985]: I0127 09:13:30.749195 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b8bda802-a296-4c69-b1ee-07a238912c81-public-tls-certs\") pod \"cinder-api-0\" (UID: \"b8bda802-a296-4c69-b1ee-07a238912c81\") " pod="openstack/cinder-api-0" Jan 27 09:13:30 crc kubenswrapper[4985]: I0127 09:13:30.749310 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s825z\" (UniqueName: \"kubernetes.io/projected/564992d4-5b88-4124-9cfa-8ee67386599d-kube-api-access-s825z\") pod \"ceilometer-0\" (UID: \"564992d4-5b88-4124-9cfa-8ee67386599d\") " pod="openstack/ceilometer-0" Jan 27 09:13:30 crc kubenswrapper[4985]: I0127 09:13:30.749357 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/564992d4-5b88-4124-9cfa-8ee67386599d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"564992d4-5b88-4124-9cfa-8ee67386599d\") " pod="openstack/ceilometer-0" Jan 27 09:13:30 crc kubenswrapper[4985]: I0127 09:13:30.749972 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkfc8\" (UniqueName: \"kubernetes.io/projected/b8bda802-a296-4c69-b1ee-07a238912c81-kube-api-access-xkfc8\") pod \"cinder-api-0\" (UID: \"b8bda802-a296-4c69-b1ee-07a238912c81\") " pod="openstack/cinder-api-0" Jan 27 09:13:30 crc kubenswrapper[4985]: I0127 09:13:30.750093 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/b8bda802-a296-4c69-b1ee-07a238912c81-etc-machine-id\") pod \"cinder-api-0\" (UID: \"b8bda802-a296-4c69-b1ee-07a238912c81\") " pod="openstack/cinder-api-0" Jan 27 09:13:30 crc kubenswrapper[4985]: I0127 09:13:30.750112 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/564992d4-5b88-4124-9cfa-8ee67386599d-run-httpd\") pod \"ceilometer-0\" (UID: \"564992d4-5b88-4124-9cfa-8ee67386599d\") " pod="openstack/ceilometer-0" Jan 27 09:13:30 crc kubenswrapper[4985]: I0127 09:13:30.750150 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/564992d4-5b88-4124-9cfa-8ee67386599d-config-data\") pod \"ceilometer-0\" (UID: \"564992d4-5b88-4124-9cfa-8ee67386599d\") " pod="openstack/ceilometer-0" Jan 27 09:13:30 crc kubenswrapper[4985]: I0127 09:13:30.750175 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8bda802-a296-4c69-b1ee-07a238912c81-logs\") pod \"cinder-api-0\" (UID: \"b8bda802-a296-4c69-b1ee-07a238912c81\") " pod="openstack/cinder-api-0" Jan 27 09:13:30 crc kubenswrapper[4985]: I0127 09:13:30.750234 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b8bda802-a296-4c69-b1ee-07a238912c81-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"b8bda802-a296-4c69-b1ee-07a238912c81\") " pod="openstack/cinder-api-0" Jan 27 09:13:30 crc kubenswrapper[4985]: I0127 09:13:30.750287 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b8bda802-a296-4c69-b1ee-07a238912c81-config-data-custom\") pod \"cinder-api-0\" (UID: \"b8bda802-a296-4c69-b1ee-07a238912c81\") " pod="openstack/cinder-api-0" Jan 27 09:13:30 crc kubenswrapper[4985]: I0127 09:13:30.759625 
4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b8bda802-a296-4c69-b1ee-07a238912c81-etc-machine-id\") pod \"cinder-api-0\" (UID: \"b8bda802-a296-4c69-b1ee-07a238912c81\") " pod="openstack/cinder-api-0" Jan 27 09:13:30 crc kubenswrapper[4985]: I0127 09:13:30.761047 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8bda802-a296-4c69-b1ee-07a238912c81-logs\") pod \"cinder-api-0\" (UID: \"b8bda802-a296-4c69-b1ee-07a238912c81\") " pod="openstack/cinder-api-0" Jan 27 09:13:30 crc kubenswrapper[4985]: I0127 09:13:30.769465 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/564992d4-5b88-4124-9cfa-8ee67386599d-log-httpd\") pod \"ceilometer-0\" (UID: \"564992d4-5b88-4124-9cfa-8ee67386599d\") " pod="openstack/ceilometer-0" Jan 27 09:13:30 crc kubenswrapper[4985]: I0127 09:13:30.770072 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8bda802-a296-4c69-b1ee-07a238912c81-scripts\") pod \"cinder-api-0\" (UID: \"b8bda802-a296-4c69-b1ee-07a238912c81\") " pod="openstack/cinder-api-0" Jan 27 09:13:30 crc kubenswrapper[4985]: I0127 09:13:30.770468 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8bda802-a296-4c69-b1ee-07a238912c81-config-data\") pod \"cinder-api-0\" (UID: \"b8bda802-a296-4c69-b1ee-07a238912c81\") " pod="openstack/cinder-api-0" Jan 27 09:13:30 crc kubenswrapper[4985]: I0127 09:13:30.770577 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/564992d4-5b88-4124-9cfa-8ee67386599d-run-httpd\") pod \"ceilometer-0\" (UID: \"564992d4-5b88-4124-9cfa-8ee67386599d\") " pod="openstack/ceilometer-0" Jan 27 09:13:30 crc 
kubenswrapper[4985]: I0127 09:13:30.771663 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/564992d4-5b88-4124-9cfa-8ee67386599d-config-data\") pod \"ceilometer-0\" (UID: \"564992d4-5b88-4124-9cfa-8ee67386599d\") " pod="openstack/ceilometer-0" Jan 27 09:13:30 crc kubenswrapper[4985]: I0127 09:13:30.772331 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/564992d4-5b88-4124-9cfa-8ee67386599d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"564992d4-5b88-4124-9cfa-8ee67386599d\") " pod="openstack/ceilometer-0" Jan 27 09:13:30 crc kubenswrapper[4985]: I0127 09:13:30.774352 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b8bda802-a296-4c69-b1ee-07a238912c81-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"b8bda802-a296-4c69-b1ee-07a238912c81\") " pod="openstack/cinder-api-0" Jan 27 09:13:30 crc kubenswrapper[4985]: I0127 09:13:30.778816 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/564992d4-5b88-4124-9cfa-8ee67386599d-scripts\") pod \"ceilometer-0\" (UID: \"564992d4-5b88-4124-9cfa-8ee67386599d\") " pod="openstack/ceilometer-0" Jan 27 09:13:30 crc kubenswrapper[4985]: I0127 09:13:30.779552 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/564992d4-5b88-4124-9cfa-8ee67386599d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"564992d4-5b88-4124-9cfa-8ee67386599d\") " pod="openstack/ceilometer-0" Jan 27 09:13:30 crc kubenswrapper[4985]: I0127 09:13:30.787243 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b8bda802-a296-4c69-b1ee-07a238912c81-public-tls-certs\") pod \"cinder-api-0\" (UID: 
\"b8bda802-a296-4c69-b1ee-07a238912c81\") " pod="openstack/cinder-api-0" Jan 27 09:13:30 crc kubenswrapper[4985]: I0127 09:13:30.794288 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8bda802-a296-4c69-b1ee-07a238912c81-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"b8bda802-a296-4c69-b1ee-07a238912c81\") " pod="openstack/cinder-api-0" Jan 27 09:13:30 crc kubenswrapper[4985]: I0127 09:13:30.801220 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b8bda802-a296-4c69-b1ee-07a238912c81-config-data-custom\") pod \"cinder-api-0\" (UID: \"b8bda802-a296-4c69-b1ee-07a238912c81\") " pod="openstack/cinder-api-0" Jan 27 09:13:30 crc kubenswrapper[4985]: I0127 09:13:30.804239 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkfc8\" (UniqueName: \"kubernetes.io/projected/b8bda802-a296-4c69-b1ee-07a238912c81-kube-api-access-xkfc8\") pod \"cinder-api-0\" (UID: \"b8bda802-a296-4c69-b1ee-07a238912c81\") " pod="openstack/cinder-api-0" Jan 27 09:13:30 crc kubenswrapper[4985]: I0127 09:13:30.804548 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s825z\" (UniqueName: \"kubernetes.io/projected/564992d4-5b88-4124-9cfa-8ee67386599d-kube-api-access-s825z\") pod \"ceilometer-0\" (UID: \"564992d4-5b88-4124-9cfa-8ee67386599d\") " pod="openstack/ceilometer-0" Jan 27 09:13:30 crc kubenswrapper[4985]: I0127 09:13:30.963531 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 27 09:13:30 crc kubenswrapper[4985]: I0127 09:13:30.975160 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 09:13:31 crc kubenswrapper[4985]: I0127 09:13:31.029650 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 09:13:31 crc kubenswrapper[4985]: I0127 09:13:31.163781 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/91ba4f05-fb65-42a0-a26f-c369615b0de3-config-data-custom\") pod \"91ba4f05-fb65-42a0-a26f-c369615b0de3\" (UID: \"91ba4f05-fb65-42a0-a26f-c369615b0de3\") " Jan 27 09:13:31 crc kubenswrapper[4985]: I0127 09:13:31.163858 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91ba4f05-fb65-42a0-a26f-c369615b0de3-combined-ca-bundle\") pod \"91ba4f05-fb65-42a0-a26f-c369615b0de3\" (UID: \"91ba4f05-fb65-42a0-a26f-c369615b0de3\") " Jan 27 09:13:31 crc kubenswrapper[4985]: I0127 09:13:31.163903 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91ba4f05-fb65-42a0-a26f-c369615b0de3-scripts\") pod \"91ba4f05-fb65-42a0-a26f-c369615b0de3\" (UID: \"91ba4f05-fb65-42a0-a26f-c369615b0de3\") " Jan 27 09:13:31 crc kubenswrapper[4985]: I0127 09:13:31.164082 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91ba4f05-fb65-42a0-a26f-c369615b0de3-config-data\") pod \"91ba4f05-fb65-42a0-a26f-c369615b0de3\" (UID: \"91ba4f05-fb65-42a0-a26f-c369615b0de3\") " Jan 27 09:13:31 crc kubenswrapper[4985]: I0127 09:13:31.164121 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/91ba4f05-fb65-42a0-a26f-c369615b0de3-etc-machine-id\") pod \"91ba4f05-fb65-42a0-a26f-c369615b0de3\" (UID: \"91ba4f05-fb65-42a0-a26f-c369615b0de3\") " Jan 27 09:13:31 crc kubenswrapper[4985]: I0127 09:13:31.164163 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dh49f\" (UniqueName: 
\"kubernetes.io/projected/91ba4f05-fb65-42a0-a26f-c369615b0de3-kube-api-access-dh49f\") pod \"91ba4f05-fb65-42a0-a26f-c369615b0de3\" (UID: \"91ba4f05-fb65-42a0-a26f-c369615b0de3\") " Jan 27 09:13:31 crc kubenswrapper[4985]: I0127 09:13:31.165699 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91ba4f05-fb65-42a0-a26f-c369615b0de3-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "91ba4f05-fb65-42a0-a26f-c369615b0de3" (UID: "91ba4f05-fb65-42a0-a26f-c369615b0de3"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 09:13:31 crc kubenswrapper[4985]: I0127 09:13:31.182782 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91ba4f05-fb65-42a0-a26f-c369615b0de3-scripts" (OuterVolumeSpecName: "scripts") pod "91ba4f05-fb65-42a0-a26f-c369615b0de3" (UID: "91ba4f05-fb65-42a0-a26f-c369615b0de3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:13:31 crc kubenswrapper[4985]: I0127 09:13:31.185497 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91ba4f05-fb65-42a0-a26f-c369615b0de3-kube-api-access-dh49f" (OuterVolumeSpecName: "kube-api-access-dh49f") pod "91ba4f05-fb65-42a0-a26f-c369615b0de3" (UID: "91ba4f05-fb65-42a0-a26f-c369615b0de3"). InnerVolumeSpecName "kube-api-access-dh49f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:13:31 crc kubenswrapper[4985]: I0127 09:13:31.193053 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91ba4f05-fb65-42a0-a26f-c369615b0de3-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "91ba4f05-fb65-42a0-a26f-c369615b0de3" (UID: "91ba4f05-fb65-42a0-a26f-c369615b0de3"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:13:31 crc kubenswrapper[4985]: I0127 09:13:31.268119 4985 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/91ba4f05-fb65-42a0-a26f-c369615b0de3-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:31 crc kubenswrapper[4985]: I0127 09:13:31.268634 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dh49f\" (UniqueName: \"kubernetes.io/projected/91ba4f05-fb65-42a0-a26f-c369615b0de3-kube-api-access-dh49f\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:31 crc kubenswrapper[4985]: I0127 09:13:31.268650 4985 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/91ba4f05-fb65-42a0-a26f-c369615b0de3-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:31 crc kubenswrapper[4985]: I0127 09:13:31.268659 4985 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91ba4f05-fb65-42a0-a26f-c369615b0de3-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:31 crc kubenswrapper[4985]: I0127 09:13:31.284730 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91ba4f05-fb65-42a0-a26f-c369615b0de3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "91ba4f05-fb65-42a0-a26f-c369615b0de3" (UID: "91ba4f05-fb65-42a0-a26f-c369615b0de3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:13:31 crc kubenswrapper[4985]: I0127 09:13:31.305722 4985 generic.go:334] "Generic (PLEG): container finished" podID="fdc13ae9-0535-44cc-832d-4c22f662cfc7" containerID="e18ff5d92f1832b65df632606823de68214ed87d027d18c73966c96b2003c09d" exitCode=0 Jan 27 09:13:31 crc kubenswrapper[4985]: I0127 09:13:31.306330 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-9c44-account-create-update-hwhwd" event={"ID":"fdc13ae9-0535-44cc-832d-4c22f662cfc7","Type":"ContainerDied","Data":"e18ff5d92f1832b65df632606823de68214ed87d027d18c73966c96b2003c09d"} Jan 27 09:13:31 crc kubenswrapper[4985]: I0127 09:13:31.333098 4985 generic.go:334] "Generic (PLEG): container finished" podID="4581f336-5495-48a5-b6a0-d35ea0818a50" containerID="784f17d774a1d9932ee223f3275e0b98c53f858c35331b17072ca0bb01b4e3b4" exitCode=0 Jan 27 09:13:31 crc kubenswrapper[4985]: I0127 09:13:31.333171 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-7cxvr" event={"ID":"4581f336-5495-48a5-b6a0-d35ea0818a50","Type":"ContainerDied","Data":"784f17d774a1d9932ee223f3275e0b98c53f858c35331b17072ca0bb01b4e3b4"} Jan 27 09:13:31 crc kubenswrapper[4985]: I0127 09:13:31.349356 4985 generic.go:334] "Generic (PLEG): container finished" podID="4a211599-7c44-4939-8141-69dda5389ca7" containerID="0d04c042e8fa4aae2d76b5c1dd9b9d55ee7af630555b0b6493eae9be6ed9e06c" exitCode=0 Jan 27 09:13:31 crc kubenswrapper[4985]: I0127 09:13:31.349445 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-ce6a-account-create-update-qc72j" event={"ID":"4a211599-7c44-4939-8141-69dda5389ca7","Type":"ContainerDied","Data":"0d04c042e8fa4aae2d76b5c1dd9b9d55ee7af630555b0b6493eae9be6ed9e06c"} Jan 27 09:13:31 crc kubenswrapper[4985]: I0127 09:13:31.357916 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91ba4f05-fb65-42a0-a26f-c369615b0de3-config-data" 
(OuterVolumeSpecName: "config-data") pod "91ba4f05-fb65-42a0-a26f-c369615b0de3" (UID: "91ba4f05-fb65-42a0-a26f-c369615b0de3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:13:31 crc kubenswrapper[4985]: I0127 09:13:31.371277 4985 generic.go:334] "Generic (PLEG): container finished" podID="1de7d9fd-8ea7-4a62-8325-627343d4c2b3" containerID="2a3a7f684a4115fb53cddc0b056e8f2eec55b8fbe1c69e63c37205eccf7dab22" exitCode=0 Jan 27 09:13:31 crc kubenswrapper[4985]: I0127 09:13:31.371304 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91ba4f05-fb65-42a0-a26f-c369615b0de3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:31 crc kubenswrapper[4985]: I0127 09:13:31.371494 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91ba4f05-fb65-42a0-a26f-c369615b0de3-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:31 crc kubenswrapper[4985]: I0127 09:13:31.371791 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-xqrpv" event={"ID":"1de7d9fd-8ea7-4a62-8325-627343d4c2b3","Type":"ContainerDied","Data":"2a3a7f684a4115fb53cddc0b056e8f2eec55b8fbe1c69e63c37205eccf7dab22"} Jan 27 09:13:31 crc kubenswrapper[4985]: I0127 09:13:31.377799 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-l8v4l" event={"ID":"1c906eea-e955-4881-8244-4f4cd7b84bf0","Type":"ContainerStarted","Data":"6bf0794b2ef66d7a7d7f1e712c2278965c5f1b318acdc9690e17b20ede9203ae"} Jan 27 09:13:31 crc kubenswrapper[4985]: I0127 09:13:31.377887 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-l8v4l" event={"ID":"1c906eea-e955-4881-8244-4f4cd7b84bf0","Type":"ContainerStarted","Data":"ba6a05cc1066c37faae90c203f8c487e777e839d24b51da9701cdcf893058580"} Jan 27 09:13:31 crc kubenswrapper[4985]: I0127 09:13:31.383729 
4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-689489568f-6ggjw" event={"ID":"3193865d-81a4-4cb6-baee-7f44246f4caa","Type":"ContainerStarted","Data":"342718c9c057fc90aa6c429aff3b351bd0d359aa618f143a3adf6e3f1842c3f9"} Jan 27 09:13:31 crc kubenswrapper[4985]: I0127 09:13:31.383791 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-689489568f-6ggjw" event={"ID":"3193865d-81a4-4cb6-baee-7f44246f4caa","Type":"ContainerStarted","Data":"64fcec0b6ec066d51fdddff72452a8eb5d2f8cd7bc267c5d7d3eb4e111e2c265"} Jan 27 09:13:31 crc kubenswrapper[4985]: I0127 09:13:31.402550 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1835-account-create-update-m5p5l" event={"ID":"49e0a278-f0bb-4b42-8c22-39b1b25c85af","Type":"ContainerStarted","Data":"a5b8da9105508d5deab6637ff6ee8adae95123f02f44ae17c646964f2c0e1f56"} Jan 27 09:13:31 crc kubenswrapper[4985]: I0127 09:13:31.402604 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1835-account-create-update-m5p5l" event={"ID":"49e0a278-f0bb-4b42-8c22-39b1b25c85af","Type":"ContainerStarted","Data":"69973f736eeec773ef48bd8bb447d17ae532bc228b7bd48ff86d29ad098bb3c4"} Jan 27 09:13:31 crc kubenswrapper[4985]: I0127 09:13:31.409575 4985 generic.go:334] "Generic (PLEG): container finished" podID="91ba4f05-fb65-42a0-a26f-c369615b0de3" containerID="41a18f0fc6075f60284511c443f35c4a81884696dce53370d396d2ae0cdcef85" exitCode=0 Jan 27 09:13:31 crc kubenswrapper[4985]: I0127 09:13:31.409692 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 09:13:31 crc kubenswrapper[4985]: I0127 09:13:31.409870 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"91ba4f05-fb65-42a0-a26f-c369615b0de3","Type":"ContainerDied","Data":"41a18f0fc6075f60284511c443f35c4a81884696dce53370d396d2ae0cdcef85"} Jan 27 09:13:31 crc kubenswrapper[4985]: I0127 09:13:31.409977 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"91ba4f05-fb65-42a0-a26f-c369615b0de3","Type":"ContainerDied","Data":"871e10fa54f855dd749d6db056c3c54c856703ea7fd9e281693b786103479604"} Jan 27 09:13:31 crc kubenswrapper[4985]: I0127 09:13:31.410016 4985 scope.go:117] "RemoveContainer" containerID="f0fa1e8a413021f582016ef30ae3e2b617627ea24db658627809e945fa43e873" Jan 27 09:13:31 crc kubenswrapper[4985]: I0127 09:13:31.423127 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-l8v4l" podStartSLOduration=4.423106391 podStartE2EDuration="4.423106391s" podCreationTimestamp="2026-01-27 09:13:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:13:31.414584687 +0000 UTC m=+1195.705679528" watchObservedRunningTime="2026-01-27 09:13:31.423106391 +0000 UTC m=+1195.714201232" Jan 27 09:13:31 crc kubenswrapper[4985]: I0127 09:13:31.449655 4985 scope.go:117] "RemoveContainer" containerID="41a18f0fc6075f60284511c443f35c4a81884696dce53370d396d2ae0cdcef85" Jan 27 09:13:31 crc kubenswrapper[4985]: I0127 09:13:31.484266 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-1835-account-create-update-m5p5l" podStartSLOduration=4.484214357 podStartE2EDuration="4.484214357s" podCreationTimestamp="2026-01-27 09:13:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2026-01-27 09:13:31.450213045 +0000 UTC m=+1195.741307886" watchObservedRunningTime="2026-01-27 09:13:31.484214357 +0000 UTC m=+1195.775309208" Jan 27 09:13:31 crc kubenswrapper[4985]: I0127 09:13:31.490115 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 09:13:31 crc kubenswrapper[4985]: I0127 09:13:31.509150 4985 scope.go:117] "RemoveContainer" containerID="f0fa1e8a413021f582016ef30ae3e2b617627ea24db658627809e945fa43e873" Jan 27 09:13:31 crc kubenswrapper[4985]: E0127 09:13:31.511318 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0fa1e8a413021f582016ef30ae3e2b617627ea24db658627809e945fa43e873\": container with ID starting with f0fa1e8a413021f582016ef30ae3e2b617627ea24db658627809e945fa43e873 not found: ID does not exist" containerID="f0fa1e8a413021f582016ef30ae3e2b617627ea24db658627809e945fa43e873" Jan 27 09:13:31 crc kubenswrapper[4985]: I0127 09:13:31.511397 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0fa1e8a413021f582016ef30ae3e2b617627ea24db658627809e945fa43e873"} err="failed to get container status \"f0fa1e8a413021f582016ef30ae3e2b617627ea24db658627809e945fa43e873\": rpc error: code = NotFound desc = could not find container \"f0fa1e8a413021f582016ef30ae3e2b617627ea24db658627809e945fa43e873\": container with ID starting with f0fa1e8a413021f582016ef30ae3e2b617627ea24db658627809e945fa43e873 not found: ID does not exist" Jan 27 09:13:31 crc kubenswrapper[4985]: I0127 09:13:31.511431 4985 scope.go:117] "RemoveContainer" containerID="41a18f0fc6075f60284511c443f35c4a81884696dce53370d396d2ae0cdcef85" Jan 27 09:13:31 crc kubenswrapper[4985]: E0127 09:13:31.512048 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"41a18f0fc6075f60284511c443f35c4a81884696dce53370d396d2ae0cdcef85\": container with ID starting with 41a18f0fc6075f60284511c443f35c4a81884696dce53370d396d2ae0cdcef85 not found: ID does not exist" containerID="41a18f0fc6075f60284511c443f35c4a81884696dce53370d396d2ae0cdcef85" Jan 27 09:13:31 crc kubenswrapper[4985]: I0127 09:13:31.512129 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41a18f0fc6075f60284511c443f35c4a81884696dce53370d396d2ae0cdcef85"} err="failed to get container status \"41a18f0fc6075f60284511c443f35c4a81884696dce53370d396d2ae0cdcef85\": rpc error: code = NotFound desc = could not find container \"41a18f0fc6075f60284511c443f35c4a81884696dce53370d396d2ae0cdcef85\": container with ID starting with 41a18f0fc6075f60284511c443f35c4a81884696dce53370d396d2ae0cdcef85 not found: ID does not exist" Jan 27 09:13:31 crc kubenswrapper[4985]: I0127 09:13:31.516857 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 09:13:31 crc kubenswrapper[4985]: I0127 09:13:31.529698 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 09:13:31 crc kubenswrapper[4985]: E0127 09:13:31.530576 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91ba4f05-fb65-42a0-a26f-c369615b0de3" containerName="cinder-scheduler" Jan 27 09:13:31 crc kubenswrapper[4985]: I0127 09:13:31.530598 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="91ba4f05-fb65-42a0-a26f-c369615b0de3" containerName="cinder-scheduler" Jan 27 09:13:31 crc kubenswrapper[4985]: E0127 09:13:31.530665 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91ba4f05-fb65-42a0-a26f-c369615b0de3" containerName="probe" Jan 27 09:13:31 crc kubenswrapper[4985]: I0127 09:13:31.530674 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="91ba4f05-fb65-42a0-a26f-c369615b0de3" containerName="probe" Jan 27 09:13:31 crc kubenswrapper[4985]: I0127 
09:13:31.530892 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="91ba4f05-fb65-42a0-a26f-c369615b0de3" containerName="cinder-scheduler" Jan 27 09:13:31 crc kubenswrapper[4985]: I0127 09:13:31.530907 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="91ba4f05-fb65-42a0-a26f-c369615b0de3" containerName="probe" Jan 27 09:13:31 crc kubenswrapper[4985]: I0127 09:13:31.532665 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 09:13:31 crc kubenswrapper[4985]: I0127 09:13:31.534981 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 27 09:13:31 crc kubenswrapper[4985]: I0127 09:13:31.548477 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 09:13:31 crc kubenswrapper[4985]: I0127 09:13:31.581921 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1c1ff98e-211c-421d-9fcc-3357afdf8639-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"1c1ff98e-211c-421d-9fcc-3357afdf8639\") " pod="openstack/cinder-scheduler-0" Jan 27 09:13:31 crc kubenswrapper[4985]: I0127 09:13:31.582075 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c1ff98e-211c-421d-9fcc-3357afdf8639-scripts\") pod \"cinder-scheduler-0\" (UID: \"1c1ff98e-211c-421d-9fcc-3357afdf8639\") " pod="openstack/cinder-scheduler-0" Jan 27 09:13:31 crc kubenswrapper[4985]: I0127 09:13:31.582118 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c1ff98e-211c-421d-9fcc-3357afdf8639-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"1c1ff98e-211c-421d-9fcc-3357afdf8639\") " 
pod="openstack/cinder-scheduler-0" Jan 27 09:13:31 crc kubenswrapper[4985]: I0127 09:13:31.582198 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c1ff98e-211c-421d-9fcc-3357afdf8639-config-data\") pod \"cinder-scheduler-0\" (UID: \"1c1ff98e-211c-421d-9fcc-3357afdf8639\") " pod="openstack/cinder-scheduler-0" Jan 27 09:13:31 crc kubenswrapper[4985]: I0127 09:13:31.582253 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1c1ff98e-211c-421d-9fcc-3357afdf8639-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"1c1ff98e-211c-421d-9fcc-3357afdf8639\") " pod="openstack/cinder-scheduler-0" Jan 27 09:13:31 crc kubenswrapper[4985]: I0127 09:13:31.582286 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7qzp\" (UniqueName: \"kubernetes.io/projected/1c1ff98e-211c-421d-9fcc-3357afdf8639-kube-api-access-r7qzp\") pod \"cinder-scheduler-0\" (UID: \"1c1ff98e-211c-421d-9fcc-3357afdf8639\") " pod="openstack/cinder-scheduler-0" Jan 27 09:13:31 crc kubenswrapper[4985]: I0127 09:13:31.684158 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c1ff98e-211c-421d-9fcc-3357afdf8639-scripts\") pod \"cinder-scheduler-0\" (UID: \"1c1ff98e-211c-421d-9fcc-3357afdf8639\") " pod="openstack/cinder-scheduler-0" Jan 27 09:13:31 crc kubenswrapper[4985]: I0127 09:13:31.684217 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c1ff98e-211c-421d-9fcc-3357afdf8639-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"1c1ff98e-211c-421d-9fcc-3357afdf8639\") " pod="openstack/cinder-scheduler-0" Jan 27 09:13:31 crc kubenswrapper[4985]: I0127 09:13:31.684259 
4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c1ff98e-211c-421d-9fcc-3357afdf8639-config-data\") pod \"cinder-scheduler-0\" (UID: \"1c1ff98e-211c-421d-9fcc-3357afdf8639\") " pod="openstack/cinder-scheduler-0" Jan 27 09:13:31 crc kubenswrapper[4985]: I0127 09:13:31.684292 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1c1ff98e-211c-421d-9fcc-3357afdf8639-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"1c1ff98e-211c-421d-9fcc-3357afdf8639\") " pod="openstack/cinder-scheduler-0" Jan 27 09:13:31 crc kubenswrapper[4985]: I0127 09:13:31.684315 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7qzp\" (UniqueName: \"kubernetes.io/projected/1c1ff98e-211c-421d-9fcc-3357afdf8639-kube-api-access-r7qzp\") pod \"cinder-scheduler-0\" (UID: \"1c1ff98e-211c-421d-9fcc-3357afdf8639\") " pod="openstack/cinder-scheduler-0" Jan 27 09:13:31 crc kubenswrapper[4985]: I0127 09:13:31.684355 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1c1ff98e-211c-421d-9fcc-3357afdf8639-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"1c1ff98e-211c-421d-9fcc-3357afdf8639\") " pod="openstack/cinder-scheduler-0" Jan 27 09:13:31 crc kubenswrapper[4985]: I0127 09:13:31.684436 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1c1ff98e-211c-421d-9fcc-3357afdf8639-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"1c1ff98e-211c-421d-9fcc-3357afdf8639\") " pod="openstack/cinder-scheduler-0" Jan 27 09:13:31 crc kubenswrapper[4985]: I0127 09:13:31.688373 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/1c1ff98e-211c-421d-9fcc-3357afdf8639-scripts\") pod \"cinder-scheduler-0\" (UID: \"1c1ff98e-211c-421d-9fcc-3357afdf8639\") " pod="openstack/cinder-scheduler-0" Jan 27 09:13:31 crc kubenswrapper[4985]: I0127 09:13:31.688892 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c1ff98e-211c-421d-9fcc-3357afdf8639-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"1c1ff98e-211c-421d-9fcc-3357afdf8639\") " pod="openstack/cinder-scheduler-0" Jan 27 09:13:31 crc kubenswrapper[4985]: I0127 09:13:31.689301 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1c1ff98e-211c-421d-9fcc-3357afdf8639-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"1c1ff98e-211c-421d-9fcc-3357afdf8639\") " pod="openstack/cinder-scheduler-0" Jan 27 09:13:31 crc kubenswrapper[4985]: I0127 09:13:31.703499 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7qzp\" (UniqueName: \"kubernetes.io/projected/1c1ff98e-211c-421d-9fcc-3357afdf8639-kube-api-access-r7qzp\") pod \"cinder-scheduler-0\" (UID: \"1c1ff98e-211c-421d-9fcc-3357afdf8639\") " pod="openstack/cinder-scheduler-0" Jan 27 09:13:31 crc kubenswrapper[4985]: I0127 09:13:31.704290 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c1ff98e-211c-421d-9fcc-3357afdf8639-config-data\") pod \"cinder-scheduler-0\" (UID: \"1c1ff98e-211c-421d-9fcc-3357afdf8639\") " pod="openstack/cinder-scheduler-0" Jan 27 09:13:31 crc kubenswrapper[4985]: I0127 09:13:31.834441 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 27 09:13:31 crc kubenswrapper[4985]: I0127 09:13:31.844409 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 09:13:31 crc kubenswrapper[4985]: I0127 
09:13:31.928623 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 09:13:32 crc kubenswrapper[4985]: I0127 09:13:32.472927 4985 generic.go:334] "Generic (PLEG): container finished" podID="49e0a278-f0bb-4b42-8c22-39b1b25c85af" containerID="a5b8da9105508d5deab6637ff6ee8adae95123f02f44ae17c646964f2c0e1f56" exitCode=0 Jan 27 09:13:32 crc kubenswrapper[4985]: I0127 09:13:32.476095 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e4b0e68-ecc6-41aa-975a-14094de6ae67" path="/var/lib/kubelet/pods/6e4b0e68-ecc6-41aa-975a-14094de6ae67/volumes" Jan 27 09:13:32 crc kubenswrapper[4985]: I0127 09:13:32.492015 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91ba4f05-fb65-42a0-a26f-c369615b0de3" path="/var/lib/kubelet/pods/91ba4f05-fb65-42a0-a26f-c369615b0de3/volumes" Jan 27 09:13:32 crc kubenswrapper[4985]: I0127 09:13:32.493777 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 09:13:32 crc kubenswrapper[4985]: I0127 09:13:32.493904 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"564992d4-5b88-4124-9cfa-8ee67386599d","Type":"ContainerStarted","Data":"748acfd8e27e8c9194c243af132239721e57d17fd25a82b28fd44f59433e3ec5"} Jan 27 09:13:32 crc kubenswrapper[4985]: I0127 09:13:32.494001 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1835-account-create-update-m5p5l" event={"ID":"49e0a278-f0bb-4b42-8c22-39b1b25c85af","Type":"ContainerDied","Data":"a5b8da9105508d5deab6637ff6ee8adae95123f02f44ae17c646964f2c0e1f56"} Jan 27 09:13:32 crc kubenswrapper[4985]: W0127 09:13:32.509582 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1c1ff98e_211c_421d_9fcc_3357afdf8639.slice/crio-d3551cdfcc387dcdc641c4a120f983c1ed33ea5f277ea491d9cc8d92844bc8c4 WatchSource:0}: Error 
finding container d3551cdfcc387dcdc641c4a120f983c1ed33ea5f277ea491d9cc8d92844bc8c4: Status 404 returned error can't find the container with id d3551cdfcc387dcdc641c4a120f983c1ed33ea5f277ea491d9cc8d92844bc8c4 Jan 27 09:13:32 crc kubenswrapper[4985]: I0127 09:13:32.511229 4985 generic.go:334] "Generic (PLEG): container finished" podID="1c906eea-e955-4881-8244-4f4cd7b84bf0" containerID="6bf0794b2ef66d7a7d7f1e712c2278965c5f1b318acdc9690e17b20ede9203ae" exitCode=0 Jan 27 09:13:32 crc kubenswrapper[4985]: I0127 09:13:32.511293 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-l8v4l" event={"ID":"1c906eea-e955-4881-8244-4f4cd7b84bf0","Type":"ContainerDied","Data":"6bf0794b2ef66d7a7d7f1e712c2278965c5f1b318acdc9690e17b20ede9203ae"} Jan 27 09:13:32 crc kubenswrapper[4985]: I0127 09:13:32.531486 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b8bda802-a296-4c69-b1ee-07a238912c81","Type":"ContainerStarted","Data":"6accc709f98e5c6fe8d230611476cc046302f07935f4d9cd61fb4680f9c5ab1f"} Jan 27 09:13:32 crc kubenswrapper[4985]: I0127 09:13:32.548421 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-689489568f-6ggjw" event={"ID":"3193865d-81a4-4cb6-baee-7f44246f4caa","Type":"ContainerStarted","Data":"a67e828b20cd21bdb6367204032201ce9c5a8ade30c74546a82181c8f88b063f"} Jan 27 09:13:32 crc kubenswrapper[4985]: I0127 09:13:32.548781 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-689489568f-6ggjw" Jan 27 09:13:32 crc kubenswrapper[4985]: I0127 09:13:32.548809 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-689489568f-6ggjw" Jan 27 09:13:32 crc kubenswrapper[4985]: I0127 09:13:32.588089 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-689489568f-6ggjw" podStartSLOduration=10.58805483 podStartE2EDuration="10.58805483s" 
podCreationTimestamp="2026-01-27 09:13:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:13:32.575002763 +0000 UTC m=+1196.866097604" watchObservedRunningTime="2026-01-27 09:13:32.58805483 +0000 UTC m=+1196.879149661" Jan 27 09:13:32 crc kubenswrapper[4985]: I0127 09:13:32.646151 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5c57bbbf74-nrsd9" podUID="5fbbc8b9-e978-4565-9d19-bd139f2c4df7" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Jan 27 09:13:33 crc kubenswrapper[4985]: I0127 09:13:33.027310 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6798f6b777-jp82x"] Jan 27 09:13:33 crc kubenswrapper[4985]: I0127 09:13:33.038660 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6798f6b777-jp82x" Jan 27 09:13:33 crc kubenswrapper[4985]: I0127 09:13:33.117784 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6798f6b777-jp82x"] Jan 27 09:13:33 crc kubenswrapper[4985]: I0127 09:13:33.205605 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kp58\" (UniqueName: \"kubernetes.io/projected/6aca7d18-9f0b-4c2e-aaef-39fb4d810616-kube-api-access-5kp58\") pod \"neutron-6798f6b777-jp82x\" (UID: \"6aca7d18-9f0b-4c2e-aaef-39fb4d810616\") " pod="openstack/neutron-6798f6b777-jp82x" Jan 27 09:13:33 crc kubenswrapper[4985]: I0127 09:13:33.205714 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6aca7d18-9f0b-4c2e-aaef-39fb4d810616-httpd-config\") pod \"neutron-6798f6b777-jp82x\" (UID: \"6aca7d18-9f0b-4c2e-aaef-39fb4d810616\") " 
pod="openstack/neutron-6798f6b777-jp82x" Jan 27 09:13:33 crc kubenswrapper[4985]: I0127 09:13:33.206016 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6aca7d18-9f0b-4c2e-aaef-39fb4d810616-internal-tls-certs\") pod \"neutron-6798f6b777-jp82x\" (UID: \"6aca7d18-9f0b-4c2e-aaef-39fb4d810616\") " pod="openstack/neutron-6798f6b777-jp82x" Jan 27 09:13:33 crc kubenswrapper[4985]: I0127 09:13:33.206075 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6aca7d18-9f0b-4c2e-aaef-39fb4d810616-combined-ca-bundle\") pod \"neutron-6798f6b777-jp82x\" (UID: \"6aca7d18-9f0b-4c2e-aaef-39fb4d810616\") " pod="openstack/neutron-6798f6b777-jp82x" Jan 27 09:13:33 crc kubenswrapper[4985]: I0127 09:13:33.206146 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6aca7d18-9f0b-4c2e-aaef-39fb4d810616-ovndb-tls-certs\") pod \"neutron-6798f6b777-jp82x\" (UID: \"6aca7d18-9f0b-4c2e-aaef-39fb4d810616\") " pod="openstack/neutron-6798f6b777-jp82x" Jan 27 09:13:33 crc kubenswrapper[4985]: I0127 09:13:33.206187 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6aca7d18-9f0b-4c2e-aaef-39fb4d810616-public-tls-certs\") pod \"neutron-6798f6b777-jp82x\" (UID: \"6aca7d18-9f0b-4c2e-aaef-39fb4d810616\") " pod="openstack/neutron-6798f6b777-jp82x" Jan 27 09:13:33 crc kubenswrapper[4985]: I0127 09:13:33.206303 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6aca7d18-9f0b-4c2e-aaef-39fb4d810616-config\") pod \"neutron-6798f6b777-jp82x\" (UID: \"6aca7d18-9f0b-4c2e-aaef-39fb4d810616\") " 
pod="openstack/neutron-6798f6b777-jp82x" Jan 27 09:13:33 crc kubenswrapper[4985]: I0127 09:13:33.271202 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-9c44-account-create-update-hwhwd" Jan 27 09:13:33 crc kubenswrapper[4985]: I0127 09:13:33.310344 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6aca7d18-9f0b-4c2e-aaef-39fb4d810616-config\") pod \"neutron-6798f6b777-jp82x\" (UID: \"6aca7d18-9f0b-4c2e-aaef-39fb4d810616\") " pod="openstack/neutron-6798f6b777-jp82x" Jan 27 09:13:33 crc kubenswrapper[4985]: I0127 09:13:33.310450 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kp58\" (UniqueName: \"kubernetes.io/projected/6aca7d18-9f0b-4c2e-aaef-39fb4d810616-kube-api-access-5kp58\") pod \"neutron-6798f6b777-jp82x\" (UID: \"6aca7d18-9f0b-4c2e-aaef-39fb4d810616\") " pod="openstack/neutron-6798f6b777-jp82x" Jan 27 09:13:33 crc kubenswrapper[4985]: I0127 09:13:33.310493 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6aca7d18-9f0b-4c2e-aaef-39fb4d810616-httpd-config\") pod \"neutron-6798f6b777-jp82x\" (UID: \"6aca7d18-9f0b-4c2e-aaef-39fb4d810616\") " pod="openstack/neutron-6798f6b777-jp82x" Jan 27 09:13:33 crc kubenswrapper[4985]: I0127 09:13:33.310621 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6aca7d18-9f0b-4c2e-aaef-39fb4d810616-internal-tls-certs\") pod \"neutron-6798f6b777-jp82x\" (UID: \"6aca7d18-9f0b-4c2e-aaef-39fb4d810616\") " pod="openstack/neutron-6798f6b777-jp82x" Jan 27 09:13:33 crc kubenswrapper[4985]: I0127 09:13:33.310658 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6aca7d18-9f0b-4c2e-aaef-39fb4d810616-combined-ca-bundle\") pod \"neutron-6798f6b777-jp82x\" (UID: \"6aca7d18-9f0b-4c2e-aaef-39fb4d810616\") " pod="openstack/neutron-6798f6b777-jp82x" Jan 27 09:13:33 crc kubenswrapper[4985]: I0127 09:13:33.310695 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6aca7d18-9f0b-4c2e-aaef-39fb4d810616-ovndb-tls-certs\") pod \"neutron-6798f6b777-jp82x\" (UID: \"6aca7d18-9f0b-4c2e-aaef-39fb4d810616\") " pod="openstack/neutron-6798f6b777-jp82x" Jan 27 09:13:33 crc kubenswrapper[4985]: I0127 09:13:33.310723 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6aca7d18-9f0b-4c2e-aaef-39fb4d810616-public-tls-certs\") pod \"neutron-6798f6b777-jp82x\" (UID: \"6aca7d18-9f0b-4c2e-aaef-39fb4d810616\") " pod="openstack/neutron-6798f6b777-jp82x" Jan 27 09:13:33 crc kubenswrapper[4985]: I0127 09:13:33.329167 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/6aca7d18-9f0b-4c2e-aaef-39fb4d810616-config\") pod \"neutron-6798f6b777-jp82x\" (UID: \"6aca7d18-9f0b-4c2e-aaef-39fb4d810616\") " pod="openstack/neutron-6798f6b777-jp82x" Jan 27 09:13:33 crc kubenswrapper[4985]: I0127 09:13:33.355877 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kp58\" (UniqueName: \"kubernetes.io/projected/6aca7d18-9f0b-4c2e-aaef-39fb4d810616-kube-api-access-5kp58\") pod \"neutron-6798f6b777-jp82x\" (UID: \"6aca7d18-9f0b-4c2e-aaef-39fb4d810616\") " pod="openstack/neutron-6798f6b777-jp82x" Jan 27 09:13:33 crc kubenswrapper[4985]: I0127 09:13:33.368446 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6aca7d18-9f0b-4c2e-aaef-39fb4d810616-public-tls-certs\") pod \"neutron-6798f6b777-jp82x\" (UID: 
\"6aca7d18-9f0b-4c2e-aaef-39fb4d810616\") " pod="openstack/neutron-6798f6b777-jp82x" Jan 27 09:13:33 crc kubenswrapper[4985]: I0127 09:13:33.368583 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6aca7d18-9f0b-4c2e-aaef-39fb4d810616-httpd-config\") pod \"neutron-6798f6b777-jp82x\" (UID: \"6aca7d18-9f0b-4c2e-aaef-39fb4d810616\") " pod="openstack/neutron-6798f6b777-jp82x" Jan 27 09:13:33 crc kubenswrapper[4985]: I0127 09:13:33.372403 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6aca7d18-9f0b-4c2e-aaef-39fb4d810616-internal-tls-certs\") pod \"neutron-6798f6b777-jp82x\" (UID: \"6aca7d18-9f0b-4c2e-aaef-39fb4d810616\") " pod="openstack/neutron-6798f6b777-jp82x" Jan 27 09:13:33 crc kubenswrapper[4985]: I0127 09:13:33.403369 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6aca7d18-9f0b-4c2e-aaef-39fb4d810616-combined-ca-bundle\") pod \"neutron-6798f6b777-jp82x\" (UID: \"6aca7d18-9f0b-4c2e-aaef-39fb4d810616\") " pod="openstack/neutron-6798f6b777-jp82x" Jan 27 09:13:33 crc kubenswrapper[4985]: I0127 09:13:33.403526 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6aca7d18-9f0b-4c2e-aaef-39fb4d810616-ovndb-tls-certs\") pod \"neutron-6798f6b777-jp82x\" (UID: \"6aca7d18-9f0b-4c2e-aaef-39fb4d810616\") " pod="openstack/neutron-6798f6b777-jp82x" Jan 27 09:13:33 crc kubenswrapper[4985]: I0127 09:13:33.412250 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6798f6b777-jp82x" Jan 27 09:13:33 crc kubenswrapper[4985]: I0127 09:13:33.420973 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fdc13ae9-0535-44cc-832d-4c22f662cfc7-operator-scripts\") pod \"fdc13ae9-0535-44cc-832d-4c22f662cfc7\" (UID: \"fdc13ae9-0535-44cc-832d-4c22f662cfc7\") " Jan 27 09:13:33 crc kubenswrapper[4985]: I0127 09:13:33.421080 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lj6wz\" (UniqueName: \"kubernetes.io/projected/fdc13ae9-0535-44cc-832d-4c22f662cfc7-kube-api-access-lj6wz\") pod \"fdc13ae9-0535-44cc-832d-4c22f662cfc7\" (UID: \"fdc13ae9-0535-44cc-832d-4c22f662cfc7\") " Jan 27 09:13:33 crc kubenswrapper[4985]: I0127 09:13:33.422438 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdc13ae9-0535-44cc-832d-4c22f662cfc7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fdc13ae9-0535-44cc-832d-4c22f662cfc7" (UID: "fdc13ae9-0535-44cc-832d-4c22f662cfc7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:13:33 crc kubenswrapper[4985]: I0127 09:13:33.427722 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdc13ae9-0535-44cc-832d-4c22f662cfc7-kube-api-access-lj6wz" (OuterVolumeSpecName: "kube-api-access-lj6wz") pod "fdc13ae9-0535-44cc-832d-4c22f662cfc7" (UID: "fdc13ae9-0535-44cc-832d-4c22f662cfc7"). InnerVolumeSpecName "kube-api-access-lj6wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:13:33 crc kubenswrapper[4985]: I0127 09:13:33.507961 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-xqrpv" Jan 27 09:13:33 crc kubenswrapper[4985]: I0127 09:13:33.525754 4985 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fdc13ae9-0535-44cc-832d-4c22f662cfc7-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:33 crc kubenswrapper[4985]: I0127 09:13:33.525788 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lj6wz\" (UniqueName: \"kubernetes.io/projected/fdc13ae9-0535-44cc-832d-4c22f662cfc7-kube-api-access-lj6wz\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:33 crc kubenswrapper[4985]: I0127 09:13:33.625644 4985 generic.go:334] "Generic (PLEG): container finished" podID="6e5ea4de-6280-4b44-9dfc-e27da3483c4f" containerID="d4c75075010687e0dcce9874d61f77c496648b9a79868e5303597d3792b9a8a4" exitCode=0 Jan 27 09:13:33 crc kubenswrapper[4985]: I0127 09:13:33.625751 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-85bdc684db-7q85p" event={"ID":"6e5ea4de-6280-4b44-9dfc-e27da3483c4f","Type":"ContainerDied","Data":"d4c75075010687e0dcce9874d61f77c496648b9a79868e5303597d3792b9a8a4"} Jan 27 09:13:33 crc kubenswrapper[4985]: I0127 09:13:33.626965 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1de7d9fd-8ea7-4a62-8325-627343d4c2b3-operator-scripts\") pod \"1de7d9fd-8ea7-4a62-8325-627343d4c2b3\" (UID: \"1de7d9fd-8ea7-4a62-8325-627343d4c2b3\") " Jan 27 09:13:33 crc kubenswrapper[4985]: I0127 09:13:33.632215 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngwfz\" (UniqueName: \"kubernetes.io/projected/1de7d9fd-8ea7-4a62-8325-627343d4c2b3-kube-api-access-ngwfz\") pod \"1de7d9fd-8ea7-4a62-8325-627343d4c2b3\" (UID: \"1de7d9fd-8ea7-4a62-8325-627343d4c2b3\") " Jan 27 09:13:33 crc kubenswrapper[4985]: I0127 09:13:33.636195 4985 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1de7d9fd-8ea7-4a62-8325-627343d4c2b3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1de7d9fd-8ea7-4a62-8325-627343d4c2b3" (UID: "1de7d9fd-8ea7-4a62-8325-627343d4c2b3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:13:33 crc kubenswrapper[4985]: I0127 09:13:33.643226 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b8bda802-a296-4c69-b1ee-07a238912c81","Type":"ContainerStarted","Data":"9110d4bedf746be70a7f2864d44c94112f658809ca1025b8d502441c6aaac5aa"} Jan 27 09:13:33 crc kubenswrapper[4985]: I0127 09:13:33.645264 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1de7d9fd-8ea7-4a62-8325-627343d4c2b3-kube-api-access-ngwfz" (OuterVolumeSpecName: "kube-api-access-ngwfz") pod "1de7d9fd-8ea7-4a62-8325-627343d4c2b3" (UID: "1de7d9fd-8ea7-4a62-8325-627343d4c2b3"). InnerVolumeSpecName "kube-api-access-ngwfz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:13:33 crc kubenswrapper[4985]: I0127 09:13:33.657178 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"564992d4-5b88-4124-9cfa-8ee67386599d","Type":"ContainerStarted","Data":"f71f8f167aaadc8b6c7ea85dba10022b1e6b7cfa9ab45b33f778596ca8e9542c"} Jan 27 09:13:33 crc kubenswrapper[4985]: I0127 09:13:33.668334 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-9c44-account-create-update-hwhwd" event={"ID":"fdc13ae9-0535-44cc-832d-4c22f662cfc7","Type":"ContainerDied","Data":"e6c231f5620ab30501d9b71078d715837105df6bca888549e5729e5b27c66fff"} Jan 27 09:13:33 crc kubenswrapper[4985]: I0127 09:13:33.668384 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6c231f5620ab30501d9b71078d715837105df6bca888549e5729e5b27c66fff" Jan 27 09:13:33 crc kubenswrapper[4985]: I0127 09:13:33.668448 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-9c44-account-create-update-hwhwd" Jan 27 09:13:33 crc kubenswrapper[4985]: I0127 09:13:33.710924 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1c1ff98e-211c-421d-9fcc-3357afdf8639","Type":"ContainerStarted","Data":"d3551cdfcc387dcdc641c4a120f983c1ed33ea5f277ea491d9cc8d92844bc8c4"} Jan 27 09:13:33 crc kubenswrapper[4985]: I0127 09:13:33.721558 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-7cxvr" Jan 27 09:13:33 crc kubenswrapper[4985]: I0127 09:13:33.721702 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-7cxvr" event={"ID":"4581f336-5495-48a5-b6a0-d35ea0818a50","Type":"ContainerDied","Data":"5eedde42b9d58a18a8d3c9840735343240cc2170f7d0b065ad5bd7d53de9016f"} Jan 27 09:13:33 crc kubenswrapper[4985]: I0127 09:13:33.721734 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5eedde42b9d58a18a8d3c9840735343240cc2170f7d0b065ad5bd7d53de9016f" Jan 27 09:13:33 crc kubenswrapper[4985]: I0127 09:13:33.724419 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-xqrpv" event={"ID":"1de7d9fd-8ea7-4a62-8325-627343d4c2b3","Type":"ContainerDied","Data":"4e61b3db466dca5100f51e9355a8b0bfb35944f50df01d9865f2dfd2b3833268"} Jan 27 09:13:33 crc kubenswrapper[4985]: I0127 09:13:33.724468 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e61b3db466dca5100f51e9355a8b0bfb35944f50df01d9865f2dfd2b3833268" Jan 27 09:13:33 crc kubenswrapper[4985]: I0127 09:13:33.724551 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-xqrpv" Jan 27 09:13:33 crc kubenswrapper[4985]: I0127 09:13:33.734428 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngwfz\" (UniqueName: \"kubernetes.io/projected/1de7d9fd-8ea7-4a62-8325-627343d4c2b3-kube-api-access-ngwfz\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:33 crc kubenswrapper[4985]: I0127 09:13:33.734457 4985 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1de7d9fd-8ea7-4a62-8325-627343d4c2b3-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:33 crc kubenswrapper[4985]: I0127 09:13:33.842040 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4581f336-5495-48a5-b6a0-d35ea0818a50-operator-scripts\") pod \"4581f336-5495-48a5-b6a0-d35ea0818a50\" (UID: \"4581f336-5495-48a5-b6a0-d35ea0818a50\") " Jan 27 09:13:33 crc kubenswrapper[4985]: I0127 09:13:33.842376 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rvt5\" (UniqueName: \"kubernetes.io/projected/4581f336-5495-48a5-b6a0-d35ea0818a50-kube-api-access-6rvt5\") pod \"4581f336-5495-48a5-b6a0-d35ea0818a50\" (UID: \"4581f336-5495-48a5-b6a0-d35ea0818a50\") " Jan 27 09:13:33 crc kubenswrapper[4985]: I0127 09:13:33.843084 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4581f336-5495-48a5-b6a0-d35ea0818a50-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4581f336-5495-48a5-b6a0-d35ea0818a50" (UID: "4581f336-5495-48a5-b6a0-d35ea0818a50"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:13:33 crc kubenswrapper[4985]: I0127 09:13:33.843405 4985 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4581f336-5495-48a5-b6a0-d35ea0818a50-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:33 crc kubenswrapper[4985]: I0127 09:13:33.860726 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4581f336-5495-48a5-b6a0-d35ea0818a50-kube-api-access-6rvt5" (OuterVolumeSpecName: "kube-api-access-6rvt5") pod "4581f336-5495-48a5-b6a0-d35ea0818a50" (UID: "4581f336-5495-48a5-b6a0-d35ea0818a50"). InnerVolumeSpecName "kube-api-access-6rvt5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:13:33 crc kubenswrapper[4985]: I0127 09:13:33.940908 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-85bdc684db-7q85p" Jan 27 09:13:33 crc kubenswrapper[4985]: I0127 09:13:33.946143 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6rvt5\" (UniqueName: \"kubernetes.io/projected/4581f336-5495-48a5-b6a0-d35ea0818a50-kube-api-access-6rvt5\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:33 crc kubenswrapper[4985]: I0127 09:13:33.985992 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-ce6a-account-create-update-qc72j" Jan 27 09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.049228 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e5ea4de-6280-4b44-9dfc-e27da3483c4f-ovndb-tls-certs\") pod \"6e5ea4de-6280-4b44-9dfc-e27da3483c4f\" (UID: \"6e5ea4de-6280-4b44-9dfc-e27da3483c4f\") " Jan 27 09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.049909 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4rxlb\" (UniqueName: \"kubernetes.io/projected/6e5ea4de-6280-4b44-9dfc-e27da3483c4f-kube-api-access-4rxlb\") pod \"6e5ea4de-6280-4b44-9dfc-e27da3483c4f\" (UID: \"6e5ea4de-6280-4b44-9dfc-e27da3483c4f\") " Jan 27 09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.049954 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e5ea4de-6280-4b44-9dfc-e27da3483c4f-combined-ca-bundle\") pod \"6e5ea4de-6280-4b44-9dfc-e27da3483c4f\" (UID: \"6e5ea4de-6280-4b44-9dfc-e27da3483c4f\") " Jan 27 09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.049985 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6e5ea4de-6280-4b44-9dfc-e27da3483c4f-httpd-config\") pod \"6e5ea4de-6280-4b44-9dfc-e27da3483c4f\" (UID: \"6e5ea4de-6280-4b44-9dfc-e27da3483c4f\") " Jan 27 09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.051890 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6e5ea4de-6280-4b44-9dfc-e27da3483c4f-config\") pod \"6e5ea4de-6280-4b44-9dfc-e27da3483c4f\" (UID: \"6e5ea4de-6280-4b44-9dfc-e27da3483c4f\") " Jan 27 09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.068408 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/6e5ea4de-6280-4b44-9dfc-e27da3483c4f-kube-api-access-4rxlb" (OuterVolumeSpecName: "kube-api-access-4rxlb") pod "6e5ea4de-6280-4b44-9dfc-e27da3483c4f" (UID: "6e5ea4de-6280-4b44-9dfc-e27da3483c4f"). InnerVolumeSpecName "kube-api-access-4rxlb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.076193 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e5ea4de-6280-4b44-9dfc-e27da3483c4f-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "6e5ea4de-6280-4b44-9dfc-e27da3483c4f" (UID: "6e5ea4de-6280-4b44-9dfc-e27da3483c4f"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.143871 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e5ea4de-6280-4b44-9dfc-e27da3483c4f-config" (OuterVolumeSpecName: "config") pod "6e5ea4de-6280-4b44-9dfc-e27da3483c4f" (UID: "6e5ea4de-6280-4b44-9dfc-e27da3483c4f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.154280 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jw298\" (UniqueName: \"kubernetes.io/projected/4a211599-7c44-4939-8141-69dda5389ca7-kube-api-access-jw298\") pod \"4a211599-7c44-4939-8141-69dda5389ca7\" (UID: \"4a211599-7c44-4939-8141-69dda5389ca7\") " Jan 27 09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.154713 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a211599-7c44-4939-8141-69dda5389ca7-operator-scripts\") pod \"4a211599-7c44-4939-8141-69dda5389ca7\" (UID: \"4a211599-7c44-4939-8141-69dda5389ca7\") " Jan 27 09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.155262 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4rxlb\" (UniqueName: \"kubernetes.io/projected/6e5ea4de-6280-4b44-9dfc-e27da3483c4f-kube-api-access-4rxlb\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.155281 4985 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6e5ea4de-6280-4b44-9dfc-e27da3483c4f-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.155291 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/6e5ea4de-6280-4b44-9dfc-e27da3483c4f-config\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.156702 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a211599-7c44-4939-8141-69dda5389ca7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4a211599-7c44-4939-8141-69dda5389ca7" (UID: "4a211599-7c44-4939-8141-69dda5389ca7"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.161365 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a211599-7c44-4939-8141-69dda5389ca7-kube-api-access-jw298" (OuterVolumeSpecName: "kube-api-access-jw298") pod "4a211599-7c44-4939-8141-69dda5389ca7" (UID: "4a211599-7c44-4939-8141-69dda5389ca7"). InnerVolumeSpecName "kube-api-access-jw298". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.243303 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e5ea4de-6280-4b44-9dfc-e27da3483c4f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6e5ea4de-6280-4b44-9dfc-e27da3483c4f" (UID: "6e5ea4de-6280-4b44-9dfc-e27da3483c4f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.244359 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-1835-account-create-update-m5p5l" Jan 27 09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.259159 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e5ea4de-6280-4b44-9dfc-e27da3483c4f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.259218 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jw298\" (UniqueName: \"kubernetes.io/projected/4a211599-7c44-4939-8141-69dda5389ca7-kube-api-access-jw298\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.259232 4985 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a211599-7c44-4939-8141-69dda5389ca7-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.262224 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-l8v4l" Jan 27 09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.267494 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e5ea4de-6280-4b44-9dfc-e27da3483c4f-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "6e5ea4de-6280-4b44-9dfc-e27da3483c4f" (UID: "6e5ea4de-6280-4b44-9dfc-e27da3483c4f"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.362585 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49e0a278-f0bb-4b42-8c22-39b1b25c85af-operator-scripts\") pod \"49e0a278-f0bb-4b42-8c22-39b1b25c85af\" (UID: \"49e0a278-f0bb-4b42-8c22-39b1b25c85af\") " Jan 27 09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.362671 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rs22q\" (UniqueName: \"kubernetes.io/projected/49e0a278-f0bb-4b42-8c22-39b1b25c85af-kube-api-access-rs22q\") pod \"49e0a278-f0bb-4b42-8c22-39b1b25c85af\" (UID: \"49e0a278-f0bb-4b42-8c22-39b1b25c85af\") " Jan 27 09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.363033 4985 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e5ea4de-6280-4b44-9dfc-e27da3483c4f-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.364293 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49e0a278-f0bb-4b42-8c22-39b1b25c85af-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "49e0a278-f0bb-4b42-8c22-39b1b25c85af" (UID: "49e0a278-f0bb-4b42-8c22-39b1b25c85af"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.379082 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49e0a278-f0bb-4b42-8c22-39b1b25c85af-kube-api-access-rs22q" (OuterVolumeSpecName: "kube-api-access-rs22q") pod "49e0a278-f0bb-4b42-8c22-39b1b25c85af" (UID: "49e0a278-f0bb-4b42-8c22-39b1b25c85af"). InnerVolumeSpecName "kube-api-access-rs22q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.450595 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6798f6b777-jp82x"] Jan 27 09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.464467 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c906eea-e955-4881-8244-4f4cd7b84bf0-operator-scripts\") pod \"1c906eea-e955-4881-8244-4f4cd7b84bf0\" (UID: \"1c906eea-e955-4881-8244-4f4cd7b84bf0\") " Jan 27 09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.464570 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vqrjg\" (UniqueName: \"kubernetes.io/projected/1c906eea-e955-4881-8244-4f4cd7b84bf0-kube-api-access-vqrjg\") pod \"1c906eea-e955-4881-8244-4f4cd7b84bf0\" (UID: \"1c906eea-e955-4881-8244-4f4cd7b84bf0\") " Jan 27 09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.465041 4985 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49e0a278-f0bb-4b42-8c22-39b1b25c85af-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.465058 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rs22q\" (UniqueName: \"kubernetes.io/projected/49e0a278-f0bb-4b42-8c22-39b1b25c85af-kube-api-access-rs22q\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.466896 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c906eea-e955-4881-8244-4f4cd7b84bf0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1c906eea-e955-4881-8244-4f4cd7b84bf0" (UID: "1c906eea-e955-4881-8244-4f4cd7b84bf0"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.481717 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c906eea-e955-4881-8244-4f4cd7b84bf0-kube-api-access-vqrjg" (OuterVolumeSpecName: "kube-api-access-vqrjg") pod "1c906eea-e955-4881-8244-4f4cd7b84bf0" (UID: "1c906eea-e955-4881-8244-4f4cd7b84bf0"). InnerVolumeSpecName "kube-api-access-vqrjg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.506115 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-8569774db7-5qrp6"] Jan 27 09:13:34 crc kubenswrapper[4985]: E0127 09:13:34.506467 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e5ea4de-6280-4b44-9dfc-e27da3483c4f" containerName="neutron-httpd" Jan 27 09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.506485 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e5ea4de-6280-4b44-9dfc-e27da3483c4f" containerName="neutron-httpd" Jan 27 09:13:34 crc kubenswrapper[4985]: E0127 09:13:34.506498 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4581f336-5495-48a5-b6a0-d35ea0818a50" containerName="mariadb-database-create" Jan 27 09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.506522 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="4581f336-5495-48a5-b6a0-d35ea0818a50" containerName="mariadb-database-create" Jan 27 09:13:34 crc kubenswrapper[4985]: E0127 09:13:34.506533 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1de7d9fd-8ea7-4a62-8325-627343d4c2b3" containerName="mariadb-database-create" Jan 27 09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.506539 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="1de7d9fd-8ea7-4a62-8325-627343d4c2b3" containerName="mariadb-database-create" Jan 27 09:13:34 crc kubenswrapper[4985]: E0127 09:13:34.506552 4985 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="1c906eea-e955-4881-8244-4f4cd7b84bf0" containerName="mariadb-database-create" Jan 27 09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.506559 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c906eea-e955-4881-8244-4f4cd7b84bf0" containerName="mariadb-database-create" Jan 27 09:13:34 crc kubenswrapper[4985]: E0127 09:13:34.506571 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdc13ae9-0535-44cc-832d-4c22f662cfc7" containerName="mariadb-account-create-update" Jan 27 09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.506577 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdc13ae9-0535-44cc-832d-4c22f662cfc7" containerName="mariadb-account-create-update" Jan 27 09:13:34 crc kubenswrapper[4985]: E0127 09:13:34.506589 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49e0a278-f0bb-4b42-8c22-39b1b25c85af" containerName="mariadb-account-create-update" Jan 27 09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.506596 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="49e0a278-f0bb-4b42-8c22-39b1b25c85af" containerName="mariadb-account-create-update" Jan 27 09:13:34 crc kubenswrapper[4985]: E0127 09:13:34.506607 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a211599-7c44-4939-8141-69dda5389ca7" containerName="mariadb-account-create-update" Jan 27 09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.506614 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a211599-7c44-4939-8141-69dda5389ca7" containerName="mariadb-account-create-update" Jan 27 09:13:34 crc kubenswrapper[4985]: E0127 09:13:34.506629 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e5ea4de-6280-4b44-9dfc-e27da3483c4f" containerName="neutron-api" Jan 27 09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.506636 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e5ea4de-6280-4b44-9dfc-e27da3483c4f" containerName="neutron-api" Jan 27 
09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.506807 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="1de7d9fd-8ea7-4a62-8325-627343d4c2b3" containerName="mariadb-database-create" Jan 27 09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.506819 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a211599-7c44-4939-8141-69dda5389ca7" containerName="mariadb-account-create-update" Jan 27 09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.506831 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c906eea-e955-4881-8244-4f4cd7b84bf0" containerName="mariadb-database-create" Jan 27 09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.506844 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e5ea4de-6280-4b44-9dfc-e27da3483c4f" containerName="neutron-httpd" Jan 27 09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.506855 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdc13ae9-0535-44cc-832d-4c22f662cfc7" containerName="mariadb-account-create-update" Jan 27 09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.506867 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="4581f336-5495-48a5-b6a0-d35ea0818a50" containerName="mariadb-database-create" Jan 27 09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.506878 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="49e0a278-f0bb-4b42-8c22-39b1b25c85af" containerName="mariadb-account-create-update" Jan 27 09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.506889 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e5ea4de-6280-4b44-9dfc-e27da3483c4f" containerName="neutron-api" Jan 27 09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.508076 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-8569774db7-5qrp6" Jan 27 09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.570018 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vqrjg\" (UniqueName: \"kubernetes.io/projected/1c906eea-e955-4881-8244-4f4cd7b84bf0-kube-api-access-vqrjg\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.570066 4985 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c906eea-e955-4881-8244-4f4cd7b84bf0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.586076 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8569774db7-5qrp6"] Jan 27 09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.605167 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6798f6b777-jp82x"] Jan 27 09:13:34 crc kubenswrapper[4985]: W0127 09:13:34.629667 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6aca7d18_9f0b_4c2e_aaef_39fb4d810616.slice/crio-398afcc8404ab8d60022d42429b67bbbd486def4f72390c4cd38172e262adfb8 WatchSource:0}: Error finding container 398afcc8404ab8d60022d42429b67bbbd486def4f72390c4cd38172e262adfb8: Status 404 returned error can't find the container with id 398afcc8404ab8d60022d42429b67bbbd486def4f72390c4cd38172e262adfb8 Jan 27 09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.677691 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/60dc03ee-3efa-410a-8b38-f8b2eab0807a-ovndb-tls-certs\") pod \"neutron-8569774db7-5qrp6\" (UID: \"60dc03ee-3efa-410a-8b38-f8b2eab0807a\") " pod="openstack/neutron-8569774db7-5qrp6" Jan 27 09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.677775 4985 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6qc4\" (UniqueName: \"kubernetes.io/projected/60dc03ee-3efa-410a-8b38-f8b2eab0807a-kube-api-access-s6qc4\") pod \"neutron-8569774db7-5qrp6\" (UID: \"60dc03ee-3efa-410a-8b38-f8b2eab0807a\") " pod="openstack/neutron-8569774db7-5qrp6" Jan 27 09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.677805 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/60dc03ee-3efa-410a-8b38-f8b2eab0807a-internal-tls-certs\") pod \"neutron-8569774db7-5qrp6\" (UID: \"60dc03ee-3efa-410a-8b38-f8b2eab0807a\") " pod="openstack/neutron-8569774db7-5qrp6" Jan 27 09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.677889 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/60dc03ee-3efa-410a-8b38-f8b2eab0807a-httpd-config\") pod \"neutron-8569774db7-5qrp6\" (UID: \"60dc03ee-3efa-410a-8b38-f8b2eab0807a\") " pod="openstack/neutron-8569774db7-5qrp6" Jan 27 09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.677916 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60dc03ee-3efa-410a-8b38-f8b2eab0807a-combined-ca-bundle\") pod \"neutron-8569774db7-5qrp6\" (UID: \"60dc03ee-3efa-410a-8b38-f8b2eab0807a\") " pod="openstack/neutron-8569774db7-5qrp6" Jan 27 09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.677967 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/60dc03ee-3efa-410a-8b38-f8b2eab0807a-config\") pod \"neutron-8569774db7-5qrp6\" (UID: \"60dc03ee-3efa-410a-8b38-f8b2eab0807a\") " pod="openstack/neutron-8569774db7-5qrp6" Jan 27 09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.678030 4985 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/60dc03ee-3efa-410a-8b38-f8b2eab0807a-public-tls-certs\") pod \"neutron-8569774db7-5qrp6\" (UID: \"60dc03ee-3efa-410a-8b38-f8b2eab0807a\") " pod="openstack/neutron-8569774db7-5qrp6" Jan 27 09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.751165 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b8bda802-a296-4c69-b1ee-07a238912c81","Type":"ContainerStarted","Data":"05d93ab861a42d5ca497347b4614af45354a14a470de05c0b1ec1eba146ab5b3"} Jan 27 09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.752068 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 27 09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.782278 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/60dc03ee-3efa-410a-8b38-f8b2eab0807a-ovndb-tls-certs\") pod \"neutron-8569774db7-5qrp6\" (UID: \"60dc03ee-3efa-410a-8b38-f8b2eab0807a\") " pod="openstack/neutron-8569774db7-5qrp6" Jan 27 09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.782353 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6qc4\" (UniqueName: \"kubernetes.io/projected/60dc03ee-3efa-410a-8b38-f8b2eab0807a-kube-api-access-s6qc4\") pod \"neutron-8569774db7-5qrp6\" (UID: \"60dc03ee-3efa-410a-8b38-f8b2eab0807a\") " pod="openstack/neutron-8569774db7-5qrp6" Jan 27 09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.782381 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/60dc03ee-3efa-410a-8b38-f8b2eab0807a-internal-tls-certs\") pod \"neutron-8569774db7-5qrp6\" (UID: \"60dc03ee-3efa-410a-8b38-f8b2eab0807a\") " pod="openstack/neutron-8569774db7-5qrp6" Jan 27 09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.782469 4985 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/60dc03ee-3efa-410a-8b38-f8b2eab0807a-httpd-config\") pod \"neutron-8569774db7-5qrp6\" (UID: \"60dc03ee-3efa-410a-8b38-f8b2eab0807a\") " pod="openstack/neutron-8569774db7-5qrp6" Jan 27 09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.782492 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60dc03ee-3efa-410a-8b38-f8b2eab0807a-combined-ca-bundle\") pod \"neutron-8569774db7-5qrp6\" (UID: \"60dc03ee-3efa-410a-8b38-f8b2eab0807a\") " pod="openstack/neutron-8569774db7-5qrp6" Jan 27 09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.782581 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/60dc03ee-3efa-410a-8b38-f8b2eab0807a-config\") pod \"neutron-8569774db7-5qrp6\" (UID: \"60dc03ee-3efa-410a-8b38-f8b2eab0807a\") " pod="openstack/neutron-8569774db7-5qrp6" Jan 27 09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.782641 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/60dc03ee-3efa-410a-8b38-f8b2eab0807a-public-tls-certs\") pod \"neutron-8569774db7-5qrp6\" (UID: \"60dc03ee-3efa-410a-8b38-f8b2eab0807a\") " pod="openstack/neutron-8569774db7-5qrp6" Jan 27 09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.785863 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"564992d4-5b88-4124-9cfa-8ee67386599d","Type":"ContainerStarted","Data":"52bb33bab0e117bd21761d5d5082242d242b4c1c5964f3c995600174452f0d48"} Jan 27 09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.788344 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/60dc03ee-3efa-410a-8b38-f8b2eab0807a-ovndb-tls-certs\") pod 
\"neutron-8569774db7-5qrp6\" (UID: \"60dc03ee-3efa-410a-8b38-f8b2eab0807a\") " pod="openstack/neutron-8569774db7-5qrp6" Jan 27 09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.791680 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1835-account-create-update-m5p5l" event={"ID":"49e0a278-f0bb-4b42-8c22-39b1b25c85af","Type":"ContainerDied","Data":"69973f736eeec773ef48bd8bb447d17ae532bc228b7bd48ff86d29ad098bb3c4"} Jan 27 09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.791734 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69973f736eeec773ef48bd8bb447d17ae532bc228b7bd48ff86d29ad098bb3c4" Jan 27 09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.791792 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-1835-account-create-update-m5p5l" Jan 27 09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.800985 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.800967881 podStartE2EDuration="4.800967881s" podCreationTimestamp="2026-01-27 09:13:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:13:34.781020864 +0000 UTC m=+1199.072115715" watchObservedRunningTime="2026-01-27 09:13:34.800967881 +0000 UTC m=+1199.092062722" Jan 27 09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.803540 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/60dc03ee-3efa-410a-8b38-f8b2eab0807a-internal-tls-certs\") pod \"neutron-8569774db7-5qrp6\" (UID: \"60dc03ee-3efa-410a-8b38-f8b2eab0807a\") " pod="openstack/neutron-8569774db7-5qrp6" Jan 27 09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.803815 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"1c1ff98e-211c-421d-9fcc-3357afdf8639","Type":"ContainerStarted","Data":"80de338276e5f7514633b9b5a36514ed765c22b6e78312defbf2015f17cf40ca"} Jan 27 09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.805058 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/60dc03ee-3efa-410a-8b38-f8b2eab0807a-httpd-config\") pod \"neutron-8569774db7-5qrp6\" (UID: \"60dc03ee-3efa-410a-8b38-f8b2eab0807a\") " pod="openstack/neutron-8569774db7-5qrp6" Jan 27 09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.806038 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60dc03ee-3efa-410a-8b38-f8b2eab0807a-combined-ca-bundle\") pod \"neutron-8569774db7-5qrp6\" (UID: \"60dc03ee-3efa-410a-8b38-f8b2eab0807a\") " pod="openstack/neutron-8569774db7-5qrp6" Jan 27 09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.806870 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/60dc03ee-3efa-410a-8b38-f8b2eab0807a-config\") pod \"neutron-8569774db7-5qrp6\" (UID: \"60dc03ee-3efa-410a-8b38-f8b2eab0807a\") " pod="openstack/neutron-8569774db7-5qrp6" Jan 27 09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.808269 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/60dc03ee-3efa-410a-8b38-f8b2eab0807a-public-tls-certs\") pod \"neutron-8569774db7-5qrp6\" (UID: \"60dc03ee-3efa-410a-8b38-f8b2eab0807a\") " pod="openstack/neutron-8569774db7-5qrp6" Jan 27 09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.809978 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6qc4\" (UniqueName: \"kubernetes.io/projected/60dc03ee-3efa-410a-8b38-f8b2eab0807a-kube-api-access-s6qc4\") pod \"neutron-8569774db7-5qrp6\" (UID: \"60dc03ee-3efa-410a-8b38-f8b2eab0807a\") " 
pod="openstack/neutron-8569774db7-5qrp6" Jan 27 09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.813238 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-ce6a-account-create-update-qc72j" Jan 27 09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.813685 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-ce6a-account-create-update-qc72j" event={"ID":"4a211599-7c44-4939-8141-69dda5389ca7","Type":"ContainerDied","Data":"7e1f0c455319443d0a796ba3e5118d8620a686cc58462bda773e5adcf7578dab"} Jan 27 09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.813748 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e1f0c455319443d0a796ba3e5118d8620a686cc58462bda773e5adcf7578dab" Jan 27 09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.817873 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-85bdc684db-7q85p" event={"ID":"6e5ea4de-6280-4b44-9dfc-e27da3483c4f","Type":"ContainerDied","Data":"29bb41cd322172c3273a0c5bbcd8a796d8dab5adc02c2f450f03d9a94434a1e1"} Jan 27 09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.817909 4985 scope.go:117] "RemoveContainer" containerID="c8a7377bea9823b710e4fa053c3d56b96c5efbf32603a2894d0ecaaa3a7ea381" Jan 27 09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.818040 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-85bdc684db-7q85p" Jan 27 09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.845950 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-l8v4l" event={"ID":"1c906eea-e955-4881-8244-4f4cd7b84bf0","Type":"ContainerDied","Data":"ba6a05cc1066c37faae90c203f8c487e777e839d24b51da9701cdcf893058580"} Jan 27 09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.846277 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba6a05cc1066c37faae90c203f8c487e777e839d24b51da9701cdcf893058580" Jan 27 09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.846393 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-l8v4l" Jan 27 09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.857591 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6798f6b777-jp82x" event={"ID":"6aca7d18-9f0b-4c2e-aaef-39fb4d810616","Type":"ContainerStarted","Data":"398afcc8404ab8d60022d42429b67bbbd486def4f72390c4cd38172e262adfb8"} Jan 27 09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.858127 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-7cxvr" Jan 27 09:13:34 crc kubenswrapper[4985]: I0127 09:13:34.941104 4985 scope.go:117] "RemoveContainer" containerID="d4c75075010687e0dcce9874d61f77c496648b9a79868e5303597d3792b9a8a4" Jan 27 09:13:35 crc kubenswrapper[4985]: I0127 09:13:35.082177 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-8569774db7-5qrp6" Jan 27 09:13:35 crc kubenswrapper[4985]: I0127 09:13:35.194469 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-85bdc684db-7q85p"] Jan 27 09:13:35 crc kubenswrapper[4985]: I0127 09:13:35.197329 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-85bdc684db-7q85p"] Jan 27 09:13:35 crc kubenswrapper[4985]: I0127 09:13:35.698483 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8569774db7-5qrp6"] Jan 27 09:13:35 crc kubenswrapper[4985]: W0127 09:13:35.700734 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod60dc03ee_3efa_410a_8b38_f8b2eab0807a.slice/crio-53c7f624998a2b24634367a0af54c6489b96a465460329fb369cd98c09c0ae0b WatchSource:0}: Error finding container 53c7f624998a2b24634367a0af54c6489b96a465460329fb369cd98c09c0ae0b: Status 404 returned error can't find the container with id 53c7f624998a2b24634367a0af54c6489b96a465460329fb369cd98c09c0ae0b Jan 27 09:13:35 crc kubenswrapper[4985]: I0127 09:13:35.870825 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1c1ff98e-211c-421d-9fcc-3357afdf8639","Type":"ContainerStarted","Data":"0f6ddad55a3f0131247eb5f267278c7a70fb319d63945403c9ac91ff6f3231d7"} Jan 27 09:13:35 crc kubenswrapper[4985]: I0127 09:13:35.876779 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6798f6b777-jp82x" event={"ID":"6aca7d18-9f0b-4c2e-aaef-39fb4d810616","Type":"ContainerStarted","Data":"70a9fdad84f1f00183990ebacd4f548beefe632a4bbf29b9306ce472a2cc0453"} Jan 27 09:13:35 crc kubenswrapper[4985]: I0127 09:13:35.876862 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6798f6b777-jp82x" 
event={"ID":"6aca7d18-9f0b-4c2e-aaef-39fb4d810616","Type":"ContainerStarted","Data":"e38232413fcc780697c66509c6733bffa1731aeb08ce4af28b66aeadf7fac0a3"} Jan 27 09:13:35 crc kubenswrapper[4985]: I0127 09:13:35.877000 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6798f6b777-jp82x" podUID="6aca7d18-9f0b-4c2e-aaef-39fb4d810616" containerName="neutron-api" containerID="cri-o://e38232413fcc780697c66509c6733bffa1731aeb08ce4af28b66aeadf7fac0a3" gracePeriod=30 Jan 27 09:13:35 crc kubenswrapper[4985]: I0127 09:13:35.877057 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6798f6b777-jp82x" Jan 27 09:13:35 crc kubenswrapper[4985]: I0127 09:13:35.877110 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6798f6b777-jp82x" podUID="6aca7d18-9f0b-4c2e-aaef-39fb4d810616" containerName="neutron-httpd" containerID="cri-o://70a9fdad84f1f00183990ebacd4f548beefe632a4bbf29b9306ce472a2cc0453" gracePeriod=30 Jan 27 09:13:35 crc kubenswrapper[4985]: I0127 09:13:35.880730 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8569774db7-5qrp6" event={"ID":"60dc03ee-3efa-410a-8b38-f8b2eab0807a","Type":"ContainerStarted","Data":"53c7f624998a2b24634367a0af54c6489b96a465460329fb369cd98c09c0ae0b"} Jan 27 09:13:35 crc kubenswrapper[4985]: I0127 09:13:35.884285 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"564992d4-5b88-4124-9cfa-8ee67386599d","Type":"ContainerStarted","Data":"cdd243c0199c27305be61b031e325930bcc8610d585e1867bbe2148f01d16356"} Jan 27 09:13:35 crc kubenswrapper[4985]: I0127 09:13:35.910264 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.910236293 podStartE2EDuration="4.910236293s" podCreationTimestamp="2026-01-27 09:13:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:13:35.891587161 +0000 UTC m=+1200.182682022" watchObservedRunningTime="2026-01-27 09:13:35.910236293 +0000 UTC m=+1200.201331134" Jan 27 09:13:35 crc kubenswrapper[4985]: I0127 09:13:35.926866 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6798f6b777-jp82x" podStartSLOduration=3.926846989 podStartE2EDuration="3.926846989s" podCreationTimestamp="2026-01-27 09:13:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:13:35.92031372 +0000 UTC m=+1200.211408581" watchObservedRunningTime="2026-01-27 09:13:35.926846989 +0000 UTC m=+1200.217941830" Jan 27 09:13:35 crc kubenswrapper[4985]: I0127 09:13:35.983938 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 09:13:35 crc kubenswrapper[4985]: I0127 09:13:35.984171 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="8131140c-f2fe-4495-8db7-d4ca6c2712a5" containerName="glance-log" containerID="cri-o://9c9031162b556e6ec43838485fb19772ef24ec05d28d6f063c4d687d99121589" gracePeriod=30 Jan 27 09:13:35 crc kubenswrapper[4985]: I0127 09:13:35.984462 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="8131140c-f2fe-4495-8db7-d4ca6c2712a5" containerName="glance-httpd" containerID="cri-o://8d314c7a2be79e7137ae4149ba54d7530ad53ffb6447d2b9f17f5048490fbd03" gracePeriod=30 Jan 27 09:13:36 crc kubenswrapper[4985]: I0127 09:13:36.470288 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e5ea4de-6280-4b44-9dfc-e27da3483c4f" path="/var/lib/kubelet/pods/6e5ea4de-6280-4b44-9dfc-e27da3483c4f/volumes" Jan 27 09:13:36 crc kubenswrapper[4985]: I0127 09:13:36.930588 4985 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 27 09:13:36 crc kubenswrapper[4985]: I0127 09:13:36.950067 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"564992d4-5b88-4124-9cfa-8ee67386599d","Type":"ContainerStarted","Data":"1bbef3545834d79a0d9a02f05b68baab4f4fffee4eeaf5de5a1efc94a8f7b752"} Jan 27 09:13:36 crc kubenswrapper[4985]: I0127 09:13:36.950142 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 09:13:36 crc kubenswrapper[4985]: I0127 09:13:36.957876 4985 generic.go:334] "Generic (PLEG): container finished" podID="5fbbc8b9-e978-4565-9d19-bd139f2c4df7" containerID="e16ecc5391723ec866b22379c3eff871778d1029f7535362b6bf0ab919a57d0c" exitCode=137 Jan 27 09:13:36 crc kubenswrapper[4985]: I0127 09:13:36.957955 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c57bbbf74-nrsd9" event={"ID":"5fbbc8b9-e978-4565-9d19-bd139f2c4df7","Type":"ContainerDied","Data":"e16ecc5391723ec866b22379c3eff871778d1029f7535362b6bf0ab919a57d0c"} Jan 27 09:13:36 crc kubenswrapper[4985]: I0127 09:13:36.966950 4985 generic.go:334] "Generic (PLEG): container finished" podID="8131140c-f2fe-4495-8db7-d4ca6c2712a5" containerID="9c9031162b556e6ec43838485fb19772ef24ec05d28d6f063c4d687d99121589" exitCode=143 Jan 27 09:13:36 crc kubenswrapper[4985]: I0127 09:13:36.967022 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8131140c-f2fe-4495-8db7-d4ca6c2712a5","Type":"ContainerDied","Data":"9c9031162b556e6ec43838485fb19772ef24ec05d28d6f063c4d687d99121589"} Jan 27 09:13:37 crc kubenswrapper[4985]: I0127 09:13:37.010821 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.423787405 podStartE2EDuration="7.010798377s" podCreationTimestamp="2026-01-27 09:13:30 +0000 UTC" 
firstStartedPulling="2026-01-27 09:13:31.853221777 +0000 UTC m=+1196.144316618" lastFinishedPulling="2026-01-27 09:13:36.440232749 +0000 UTC m=+1200.731327590" observedRunningTime="2026-01-27 09:13:37.005709567 +0000 UTC m=+1201.296804408" watchObservedRunningTime="2026-01-27 09:13:37.010798377 +0000 UTC m=+1201.301893218" Jan 27 09:13:37 crc kubenswrapper[4985]: I0127 09:13:37.018081 4985 generic.go:334] "Generic (PLEG): container finished" podID="6aca7d18-9f0b-4c2e-aaef-39fb4d810616" containerID="70a9fdad84f1f00183990ebacd4f548beefe632a4bbf29b9306ce472a2cc0453" exitCode=0 Jan 27 09:13:37 crc kubenswrapper[4985]: I0127 09:13:37.018187 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6798f6b777-jp82x" event={"ID":"6aca7d18-9f0b-4c2e-aaef-39fb4d810616","Type":"ContainerDied","Data":"70a9fdad84f1f00183990ebacd4f548beefe632a4bbf29b9306ce472a2cc0453"} Jan 27 09:13:37 crc kubenswrapper[4985]: I0127 09:13:37.035123 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8569774db7-5qrp6" event={"ID":"60dc03ee-3efa-410a-8b38-f8b2eab0807a","Type":"ContainerStarted","Data":"f34c6731f5bb41fb0ea57eac2f22be5b1bfbfa72e8713b71e4395f12b44068d9"} Jan 27 09:13:37 crc kubenswrapper[4985]: I0127 09:13:37.035193 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8569774db7-5qrp6" event={"ID":"60dc03ee-3efa-410a-8b38-f8b2eab0807a","Type":"ContainerStarted","Data":"2eca6e8435392d16400db43e0c51d5b0cea324ad140fb1f0208430326e181e88"} Jan 27 09:13:37 crc kubenswrapper[4985]: I0127 09:13:37.035946 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-8569774db7-5qrp6" Jan 27 09:13:37 crc kubenswrapper[4985]: I0127 09:13:37.060017 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-8569774db7-5qrp6" podStartSLOduration=3.059995807 podStartE2EDuration="3.059995807s" podCreationTimestamp="2026-01-27 09:13:34 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:13:37.057728164 +0000 UTC m=+1201.348823005" watchObservedRunningTime="2026-01-27 09:13:37.059995807 +0000 UTC m=+1201.351090648" Jan 27 09:13:37 crc kubenswrapper[4985]: I0127 09:13:37.153210 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5c57bbbf74-nrsd9" Jan 27 09:13:37 crc kubenswrapper[4985]: I0127 09:13:37.253916 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5fbbc8b9-e978-4565-9d19-bd139f2c4df7-logs\") pod \"5fbbc8b9-e978-4565-9d19-bd139f2c4df7\" (UID: \"5fbbc8b9-e978-4565-9d19-bd139f2c4df7\") " Jan 27 09:13:37 crc kubenswrapper[4985]: I0127 09:13:37.254218 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lnxpm\" (UniqueName: \"kubernetes.io/projected/5fbbc8b9-e978-4565-9d19-bd139f2c4df7-kube-api-access-lnxpm\") pod \"5fbbc8b9-e978-4565-9d19-bd139f2c4df7\" (UID: \"5fbbc8b9-e978-4565-9d19-bd139f2c4df7\") " Jan 27 09:13:37 crc kubenswrapper[4985]: I0127 09:13:37.254398 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5fbbc8b9-e978-4565-9d19-bd139f2c4df7-logs" (OuterVolumeSpecName: "logs") pod "5fbbc8b9-e978-4565-9d19-bd139f2c4df7" (UID: "5fbbc8b9-e978-4565-9d19-bd139f2c4df7"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:13:37 crc kubenswrapper[4985]: I0127 09:13:37.254542 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/5fbbc8b9-e978-4565-9d19-bd139f2c4df7-horizon-tls-certs\") pod \"5fbbc8b9-e978-4565-9d19-bd139f2c4df7\" (UID: \"5fbbc8b9-e978-4565-9d19-bd139f2c4df7\") " Jan 27 09:13:37 crc kubenswrapper[4985]: I0127 09:13:37.254653 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5fbbc8b9-e978-4565-9d19-bd139f2c4df7-scripts\") pod \"5fbbc8b9-e978-4565-9d19-bd139f2c4df7\" (UID: \"5fbbc8b9-e978-4565-9d19-bd139f2c4df7\") " Jan 27 09:13:37 crc kubenswrapper[4985]: I0127 09:13:37.254866 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5fbbc8b9-e978-4565-9d19-bd139f2c4df7-horizon-secret-key\") pod \"5fbbc8b9-e978-4565-9d19-bd139f2c4df7\" (UID: \"5fbbc8b9-e978-4565-9d19-bd139f2c4df7\") " Jan 27 09:13:37 crc kubenswrapper[4985]: I0127 09:13:37.254993 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fbbc8b9-e978-4565-9d19-bd139f2c4df7-combined-ca-bundle\") pod \"5fbbc8b9-e978-4565-9d19-bd139f2c4df7\" (UID: \"5fbbc8b9-e978-4565-9d19-bd139f2c4df7\") " Jan 27 09:13:37 crc kubenswrapper[4985]: I0127 09:13:37.255083 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5fbbc8b9-e978-4565-9d19-bd139f2c4df7-config-data\") pod \"5fbbc8b9-e978-4565-9d19-bd139f2c4df7\" (UID: \"5fbbc8b9-e978-4565-9d19-bd139f2c4df7\") " Jan 27 09:13:37 crc kubenswrapper[4985]: I0127 09:13:37.255750 4985 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/5fbbc8b9-e978-4565-9d19-bd139f2c4df7-logs\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:37 crc kubenswrapper[4985]: I0127 09:13:37.260500 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fbbc8b9-e978-4565-9d19-bd139f2c4df7-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "5fbbc8b9-e978-4565-9d19-bd139f2c4df7" (UID: "5fbbc8b9-e978-4565-9d19-bd139f2c4df7"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:13:37 crc kubenswrapper[4985]: I0127 09:13:37.261957 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fbbc8b9-e978-4565-9d19-bd139f2c4df7-kube-api-access-lnxpm" (OuterVolumeSpecName: "kube-api-access-lnxpm") pod "5fbbc8b9-e978-4565-9d19-bd139f2c4df7" (UID: "5fbbc8b9-e978-4565-9d19-bd139f2c4df7"). InnerVolumeSpecName "kube-api-access-lnxpm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:13:37 crc kubenswrapper[4985]: I0127 09:13:37.282771 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fbbc8b9-e978-4565-9d19-bd139f2c4df7-scripts" (OuterVolumeSpecName: "scripts") pod "5fbbc8b9-e978-4565-9d19-bd139f2c4df7" (UID: "5fbbc8b9-e978-4565-9d19-bd139f2c4df7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:13:37 crc kubenswrapper[4985]: I0127 09:13:37.291530 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fbbc8b9-e978-4565-9d19-bd139f2c4df7-config-data" (OuterVolumeSpecName: "config-data") pod "5fbbc8b9-e978-4565-9d19-bd139f2c4df7" (UID: "5fbbc8b9-e978-4565-9d19-bd139f2c4df7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:13:37 crc kubenswrapper[4985]: I0127 09:13:37.298666 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fbbc8b9-e978-4565-9d19-bd139f2c4df7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5fbbc8b9-e978-4565-9d19-bd139f2c4df7" (UID: "5fbbc8b9-e978-4565-9d19-bd139f2c4df7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:13:37 crc kubenswrapper[4985]: I0127 09:13:37.315870 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fbbc8b9-e978-4565-9d19-bd139f2c4df7-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "5fbbc8b9-e978-4565-9d19-bd139f2c4df7" (UID: "5fbbc8b9-e978-4565-9d19-bd139f2c4df7"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:13:37 crc kubenswrapper[4985]: I0127 09:13:37.357752 4985 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/5fbbc8b9-e978-4565-9d19-bd139f2c4df7-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:37 crc kubenswrapper[4985]: I0127 09:13:37.357799 4985 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5fbbc8b9-e978-4565-9d19-bd139f2c4df7-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:37 crc kubenswrapper[4985]: I0127 09:13:37.357811 4985 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5fbbc8b9-e978-4565-9d19-bd139f2c4df7-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:37 crc kubenswrapper[4985]: I0127 09:13:37.357823 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fbbc8b9-e978-4565-9d19-bd139f2c4df7-combined-ca-bundle\") on node 
\"crc\" DevicePath \"\"" Jan 27 09:13:37 crc kubenswrapper[4985]: I0127 09:13:37.357834 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5fbbc8b9-e978-4565-9d19-bd139f2c4df7-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:37 crc kubenswrapper[4985]: I0127 09:13:37.357845 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lnxpm\" (UniqueName: \"kubernetes.io/projected/5fbbc8b9-e978-4565-9d19-bd139f2c4df7-kube-api-access-lnxpm\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:37 crc kubenswrapper[4985]: I0127 09:13:37.480451 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 09:13:37 crc kubenswrapper[4985]: I0127 09:13:37.480710 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="1bf8a06c-1e18-40f8-bcde-5996d4f80767" containerName="glance-log" containerID="cri-o://c485247ca576864cbc121bc64cf3b79601f66789ddb28ba9c38e4b3a30e079f9" gracePeriod=30 Jan 27 09:13:37 crc kubenswrapper[4985]: I0127 09:13:37.480808 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="1bf8a06c-1e18-40f8-bcde-5996d4f80767" containerName="glance-httpd" containerID="cri-o://b8f5d57fdfa411a702d4ed40dde405c4c89d98dc4aafc021724c96b6944f981c" gracePeriod=30 Jan 27 09:13:37 crc kubenswrapper[4985]: I0127 09:13:37.825341 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 09:13:37 crc kubenswrapper[4985]: I0127 09:13:37.934114 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-689489568f-6ggjw" Jan 27 09:13:37 crc kubenswrapper[4985]: I0127 09:13:37.936352 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-689489568f-6ggjw" Jan 27 09:13:38 crc 
kubenswrapper[4985]: I0127 09:13:38.076928 4985 generic.go:334] "Generic (PLEG): container finished" podID="1bf8a06c-1e18-40f8-bcde-5996d4f80767" containerID="c485247ca576864cbc121bc64cf3b79601f66789ddb28ba9c38e4b3a30e079f9" exitCode=143 Jan 27 09:13:38 crc kubenswrapper[4985]: I0127 09:13:38.077041 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1bf8a06c-1e18-40f8-bcde-5996d4f80767","Type":"ContainerDied","Data":"c485247ca576864cbc121bc64cf3b79601f66789ddb28ba9c38e4b3a30e079f9"} Jan 27 09:13:38 crc kubenswrapper[4985]: I0127 09:13:38.097590 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5c57bbbf74-nrsd9" Jan 27 09:13:38 crc kubenswrapper[4985]: I0127 09:13:38.098455 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c57bbbf74-nrsd9" event={"ID":"5fbbc8b9-e978-4565-9d19-bd139f2c4df7","Type":"ContainerDied","Data":"bc4c0a1d1bc9d272d56b4ebccd3ddd9d2e7528621bea617664c16b676e385638"} Jan 27 09:13:38 crc kubenswrapper[4985]: I0127 09:13:38.098504 4985 scope.go:117] "RemoveContainer" containerID="f685f5d57bf90797e6960a0da540e2156808e3702d029e5231792e91efc492ec" Jan 27 09:13:38 crc kubenswrapper[4985]: I0127 09:13:38.139463 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-dbnnn"] Jan 27 09:13:38 crc kubenswrapper[4985]: E0127 09:13:38.139947 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fbbc8b9-e978-4565-9d19-bd139f2c4df7" containerName="horizon-log" Jan 27 09:13:38 crc kubenswrapper[4985]: I0127 09:13:38.139973 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fbbc8b9-e978-4565-9d19-bd139f2c4df7" containerName="horizon-log" Jan 27 09:13:38 crc kubenswrapper[4985]: E0127 09:13:38.140014 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fbbc8b9-e978-4565-9d19-bd139f2c4df7" containerName="horizon" Jan 27 09:13:38 crc 
kubenswrapper[4985]: I0127 09:13:38.140023 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fbbc8b9-e978-4565-9d19-bd139f2c4df7" containerName="horizon" Jan 27 09:13:38 crc kubenswrapper[4985]: I0127 09:13:38.140233 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fbbc8b9-e978-4565-9d19-bd139f2c4df7" containerName="horizon" Jan 27 09:13:38 crc kubenswrapper[4985]: I0127 09:13:38.140259 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fbbc8b9-e978-4565-9d19-bd139f2c4df7" containerName="horizon-log" Jan 27 09:13:38 crc kubenswrapper[4985]: I0127 09:13:38.141610 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-dbnnn" Jan 27 09:13:38 crc kubenswrapper[4985]: I0127 09:13:38.145189 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 27 09:13:38 crc kubenswrapper[4985]: I0127 09:13:38.145391 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-bvf4t" Jan 27 09:13:38 crc kubenswrapper[4985]: I0127 09:13:38.145527 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 27 09:13:38 crc kubenswrapper[4985]: I0127 09:13:38.167325 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-dbnnn"] Jan 27 09:13:38 crc kubenswrapper[4985]: I0127 09:13:38.176076 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5c57bbbf74-nrsd9"] Jan 27 09:13:38 crc kubenswrapper[4985]: I0127 09:13:38.186818 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5c57bbbf74-nrsd9"] Jan 27 09:13:38 crc kubenswrapper[4985]: I0127 09:13:38.278910 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/488cf0d5-caf5-4a7c-966c-233b758c0dcd-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-dbnnn\" (UID: \"488cf0d5-caf5-4a7c-966c-233b758c0dcd\") " pod="openstack/nova-cell0-conductor-db-sync-dbnnn" Jan 27 09:13:38 crc kubenswrapper[4985]: I0127 09:13:38.279005 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/488cf0d5-caf5-4a7c-966c-233b758c0dcd-config-data\") pod \"nova-cell0-conductor-db-sync-dbnnn\" (UID: \"488cf0d5-caf5-4a7c-966c-233b758c0dcd\") " pod="openstack/nova-cell0-conductor-db-sync-dbnnn" Jan 27 09:13:38 crc kubenswrapper[4985]: I0127 09:13:38.279264 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cf54l\" (UniqueName: \"kubernetes.io/projected/488cf0d5-caf5-4a7c-966c-233b758c0dcd-kube-api-access-cf54l\") pod \"nova-cell0-conductor-db-sync-dbnnn\" (UID: \"488cf0d5-caf5-4a7c-966c-233b758c0dcd\") " pod="openstack/nova-cell0-conductor-db-sync-dbnnn" Jan 27 09:13:38 crc kubenswrapper[4985]: I0127 09:13:38.279307 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/488cf0d5-caf5-4a7c-966c-233b758c0dcd-scripts\") pod \"nova-cell0-conductor-db-sync-dbnnn\" (UID: \"488cf0d5-caf5-4a7c-966c-233b758c0dcd\") " pod="openstack/nova-cell0-conductor-db-sync-dbnnn" Jan 27 09:13:38 crc kubenswrapper[4985]: I0127 09:13:38.318773 4985 scope.go:117] "RemoveContainer" containerID="e16ecc5391723ec866b22379c3eff871778d1029f7535362b6bf0ab919a57d0c" Jan 27 09:13:38 crc kubenswrapper[4985]: I0127 09:13:38.381380 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/488cf0d5-caf5-4a7c-966c-233b758c0dcd-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-dbnnn\" (UID: 
\"488cf0d5-caf5-4a7c-966c-233b758c0dcd\") " pod="openstack/nova-cell0-conductor-db-sync-dbnnn" Jan 27 09:13:38 crc kubenswrapper[4985]: I0127 09:13:38.381437 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/488cf0d5-caf5-4a7c-966c-233b758c0dcd-config-data\") pod \"nova-cell0-conductor-db-sync-dbnnn\" (UID: \"488cf0d5-caf5-4a7c-966c-233b758c0dcd\") " pod="openstack/nova-cell0-conductor-db-sync-dbnnn" Jan 27 09:13:38 crc kubenswrapper[4985]: I0127 09:13:38.381487 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cf54l\" (UniqueName: \"kubernetes.io/projected/488cf0d5-caf5-4a7c-966c-233b758c0dcd-kube-api-access-cf54l\") pod \"nova-cell0-conductor-db-sync-dbnnn\" (UID: \"488cf0d5-caf5-4a7c-966c-233b758c0dcd\") " pod="openstack/nova-cell0-conductor-db-sync-dbnnn" Jan 27 09:13:38 crc kubenswrapper[4985]: I0127 09:13:38.381524 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/488cf0d5-caf5-4a7c-966c-233b758c0dcd-scripts\") pod \"nova-cell0-conductor-db-sync-dbnnn\" (UID: \"488cf0d5-caf5-4a7c-966c-233b758c0dcd\") " pod="openstack/nova-cell0-conductor-db-sync-dbnnn" Jan 27 09:13:38 crc kubenswrapper[4985]: I0127 09:13:38.387755 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/488cf0d5-caf5-4a7c-966c-233b758c0dcd-config-data\") pod \"nova-cell0-conductor-db-sync-dbnnn\" (UID: \"488cf0d5-caf5-4a7c-966c-233b758c0dcd\") " pod="openstack/nova-cell0-conductor-db-sync-dbnnn" Jan 27 09:13:38 crc kubenswrapper[4985]: I0127 09:13:38.403662 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/488cf0d5-caf5-4a7c-966c-233b758c0dcd-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-dbnnn\" (UID: 
\"488cf0d5-caf5-4a7c-966c-233b758c0dcd\") " pod="openstack/nova-cell0-conductor-db-sync-dbnnn" Jan 27 09:13:38 crc kubenswrapper[4985]: I0127 09:13:38.403703 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cf54l\" (UniqueName: \"kubernetes.io/projected/488cf0d5-caf5-4a7c-966c-233b758c0dcd-kube-api-access-cf54l\") pod \"nova-cell0-conductor-db-sync-dbnnn\" (UID: \"488cf0d5-caf5-4a7c-966c-233b758c0dcd\") " pod="openstack/nova-cell0-conductor-db-sync-dbnnn" Jan 27 09:13:38 crc kubenswrapper[4985]: I0127 09:13:38.403722 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/488cf0d5-caf5-4a7c-966c-233b758c0dcd-scripts\") pod \"nova-cell0-conductor-db-sync-dbnnn\" (UID: \"488cf0d5-caf5-4a7c-966c-233b758c0dcd\") " pod="openstack/nova-cell0-conductor-db-sync-dbnnn" Jan 27 09:13:38 crc kubenswrapper[4985]: I0127 09:13:38.485320 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-dbnnn" Jan 27 09:13:38 crc kubenswrapper[4985]: I0127 09:13:38.495490 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fbbc8b9-e978-4565-9d19-bd139f2c4df7" path="/var/lib/kubelet/pods/5fbbc8b9-e978-4565-9d19-bd139f2c4df7/volumes" Jan 27 09:13:39 crc kubenswrapper[4985]: I0127 09:13:39.008195 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-dbnnn"] Jan 27 09:13:39 crc kubenswrapper[4985]: I0127 09:13:39.109213 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-dbnnn" event={"ID":"488cf0d5-caf5-4a7c-966c-233b758c0dcd","Type":"ContainerStarted","Data":"7ead77a08a069bd538aa4fa04b6108aebbff021d06386d6d778dc43e5b8ef56f"} Jan 27 09:13:39 crc kubenswrapper[4985]: I0127 09:13:39.109786 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="564992d4-5b88-4124-9cfa-8ee67386599d" containerName="ceilometer-central-agent" containerID="cri-o://f71f8f167aaadc8b6c7ea85dba10022b1e6b7cfa9ab45b33f778596ca8e9542c" gracePeriod=30 Jan 27 09:13:39 crc kubenswrapper[4985]: I0127 09:13:39.109837 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="564992d4-5b88-4124-9cfa-8ee67386599d" containerName="proxy-httpd" containerID="cri-o://1bbef3545834d79a0d9a02f05b68baab4f4fffee4eeaf5de5a1efc94a8f7b752" gracePeriod=30 Jan 27 09:13:39 crc kubenswrapper[4985]: I0127 09:13:39.109838 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="564992d4-5b88-4124-9cfa-8ee67386599d" containerName="sg-core" containerID="cri-o://cdd243c0199c27305be61b031e325930bcc8610d585e1867bbe2148f01d16356" gracePeriod=30 Jan 27 09:13:39 crc kubenswrapper[4985]: I0127 09:13:39.109834 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="564992d4-5b88-4124-9cfa-8ee67386599d" containerName="ceilometer-notification-agent" containerID="cri-o://52bb33bab0e117bd21761d5d5082242d242b4c1c5964f3c995600174452f0d48" gracePeriod=30 Jan 27 09:13:39 crc kubenswrapper[4985]: I0127 09:13:39.772874 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 09:13:39 crc kubenswrapper[4985]: I0127 09:13:39.948445 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8131140c-f2fe-4495-8db7-d4ca6c2712a5-combined-ca-bundle\") pod \"8131140c-f2fe-4495-8db7-d4ca6c2712a5\" (UID: \"8131140c-f2fe-4495-8db7-d4ca6c2712a5\") " Jan 27 09:13:39 crc kubenswrapper[4985]: I0127 09:13:39.948943 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njg7f\" (UniqueName: \"kubernetes.io/projected/8131140c-f2fe-4495-8db7-d4ca6c2712a5-kube-api-access-njg7f\") pod \"8131140c-f2fe-4495-8db7-d4ca6c2712a5\" (UID: \"8131140c-f2fe-4495-8db7-d4ca6c2712a5\") " Jan 27 09:13:39 crc kubenswrapper[4985]: I0127 09:13:39.949012 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8131140c-f2fe-4495-8db7-d4ca6c2712a5-scripts\") pod \"8131140c-f2fe-4495-8db7-d4ca6c2712a5\" (UID: \"8131140c-f2fe-4495-8db7-d4ca6c2712a5\") " Jan 27 09:13:39 crc kubenswrapper[4985]: I0127 09:13:39.949056 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8131140c-f2fe-4495-8db7-d4ca6c2712a5-httpd-run\") pod \"8131140c-f2fe-4495-8db7-d4ca6c2712a5\" (UID: \"8131140c-f2fe-4495-8db7-d4ca6c2712a5\") " Jan 27 09:13:39 crc kubenswrapper[4985]: I0127 09:13:39.949083 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8131140c-f2fe-4495-8db7-d4ca6c2712a5-public-tls-certs\") pod \"8131140c-f2fe-4495-8db7-d4ca6c2712a5\" (UID: \"8131140c-f2fe-4495-8db7-d4ca6c2712a5\") " Jan 27 09:13:39 crc kubenswrapper[4985]: I0127 09:13:39.949157 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/8131140c-f2fe-4495-8db7-d4ca6c2712a5-logs\") pod \"8131140c-f2fe-4495-8db7-d4ca6c2712a5\" (UID: \"8131140c-f2fe-4495-8db7-d4ca6c2712a5\") " Jan 27 09:13:39 crc kubenswrapper[4985]: I0127 09:13:39.949185 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"8131140c-f2fe-4495-8db7-d4ca6c2712a5\" (UID: \"8131140c-f2fe-4495-8db7-d4ca6c2712a5\") " Jan 27 09:13:39 crc kubenswrapper[4985]: I0127 09:13:39.949266 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8131140c-f2fe-4495-8db7-d4ca6c2712a5-config-data\") pod \"8131140c-f2fe-4495-8db7-d4ca6c2712a5\" (UID: \"8131140c-f2fe-4495-8db7-d4ca6c2712a5\") " Jan 27 09:13:39 crc kubenswrapper[4985]: I0127 09:13:39.953434 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8131140c-f2fe-4495-8db7-d4ca6c2712a5-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "8131140c-f2fe-4495-8db7-d4ca6c2712a5" (UID: "8131140c-f2fe-4495-8db7-d4ca6c2712a5"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:13:39 crc kubenswrapper[4985]: I0127 09:13:39.954060 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8131140c-f2fe-4495-8db7-d4ca6c2712a5-logs" (OuterVolumeSpecName: "logs") pod "8131140c-f2fe-4495-8db7-d4ca6c2712a5" (UID: "8131140c-f2fe-4495-8db7-d4ca6c2712a5"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:13:39 crc kubenswrapper[4985]: I0127 09:13:39.964953 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8131140c-f2fe-4495-8db7-d4ca6c2712a5-scripts" (OuterVolumeSpecName: "scripts") pod "8131140c-f2fe-4495-8db7-d4ca6c2712a5" (UID: "8131140c-f2fe-4495-8db7-d4ca6c2712a5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:13:39 crc kubenswrapper[4985]: I0127 09:13:39.990180 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8131140c-f2fe-4495-8db7-d4ca6c2712a5-kube-api-access-njg7f" (OuterVolumeSpecName: "kube-api-access-njg7f") pod "8131140c-f2fe-4495-8db7-d4ca6c2712a5" (UID: "8131140c-f2fe-4495-8db7-d4ca6c2712a5"). InnerVolumeSpecName "kube-api-access-njg7f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:13:40 crc kubenswrapper[4985]: I0127 09:13:40.006140 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8131140c-f2fe-4495-8db7-d4ca6c2712a5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8131140c-f2fe-4495-8db7-d4ca6c2712a5" (UID: "8131140c-f2fe-4495-8db7-d4ca6c2712a5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:13:40 crc kubenswrapper[4985]: I0127 09:13:40.008990 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "8131140c-f2fe-4495-8db7-d4ca6c2712a5" (UID: "8131140c-f2fe-4495-8db7-d4ca6c2712a5"). InnerVolumeSpecName "local-storage07-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 27 09:13:40 crc kubenswrapper[4985]: I0127 09:13:40.050140 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8131140c-f2fe-4495-8db7-d4ca6c2712a5-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "8131140c-f2fe-4495-8db7-d4ca6c2712a5" (UID: "8131140c-f2fe-4495-8db7-d4ca6c2712a5"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:13:40 crc kubenswrapper[4985]: I0127 09:13:40.052297 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njg7f\" (UniqueName: \"kubernetes.io/projected/8131140c-f2fe-4495-8db7-d4ca6c2712a5-kube-api-access-njg7f\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:40 crc kubenswrapper[4985]: I0127 09:13:40.052321 4985 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8131140c-f2fe-4495-8db7-d4ca6c2712a5-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:40 crc kubenswrapper[4985]: I0127 09:13:40.052330 4985 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8131140c-f2fe-4495-8db7-d4ca6c2712a5-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:40 crc kubenswrapper[4985]: I0127 09:13:40.052341 4985 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8131140c-f2fe-4495-8db7-d4ca6c2712a5-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:40 crc kubenswrapper[4985]: I0127 09:13:40.052350 4985 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8131140c-f2fe-4495-8db7-d4ca6c2712a5-logs\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:40 crc kubenswrapper[4985]: I0127 09:13:40.052370 4985 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Jan 27 09:13:40 crc kubenswrapper[4985]: I0127 09:13:40.052378 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8131140c-f2fe-4495-8db7-d4ca6c2712a5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:40 crc kubenswrapper[4985]: I0127 09:13:40.065610 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8131140c-f2fe-4495-8db7-d4ca6c2712a5-config-data" (OuterVolumeSpecName: "config-data") pod "8131140c-f2fe-4495-8db7-d4ca6c2712a5" (UID: "8131140c-f2fe-4495-8db7-d4ca6c2712a5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:13:40 crc kubenswrapper[4985]: I0127 09:13:40.097374 4985 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Jan 27 09:13:40 crc kubenswrapper[4985]: I0127 09:13:40.139098 4985 generic.go:334] "Generic (PLEG): container finished" podID="564992d4-5b88-4124-9cfa-8ee67386599d" containerID="1bbef3545834d79a0d9a02f05b68baab4f4fffee4eeaf5de5a1efc94a8f7b752" exitCode=0 Jan 27 09:13:40 crc kubenswrapper[4985]: I0127 09:13:40.139141 4985 generic.go:334] "Generic (PLEG): container finished" podID="564992d4-5b88-4124-9cfa-8ee67386599d" containerID="cdd243c0199c27305be61b031e325930bcc8610d585e1867bbe2148f01d16356" exitCode=2 Jan 27 09:13:40 crc kubenswrapper[4985]: I0127 09:13:40.139151 4985 generic.go:334] "Generic (PLEG): container finished" podID="564992d4-5b88-4124-9cfa-8ee67386599d" containerID="52bb33bab0e117bd21761d5d5082242d242b4c1c5964f3c995600174452f0d48" exitCode=0 Jan 27 09:13:40 crc kubenswrapper[4985]: I0127 09:13:40.139198 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"564992d4-5b88-4124-9cfa-8ee67386599d","Type":"ContainerDied","Data":"1bbef3545834d79a0d9a02f05b68baab4f4fffee4eeaf5de5a1efc94a8f7b752"} Jan 27 09:13:40 crc kubenswrapper[4985]: I0127 09:13:40.139231 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"564992d4-5b88-4124-9cfa-8ee67386599d","Type":"ContainerDied","Data":"cdd243c0199c27305be61b031e325930bcc8610d585e1867bbe2148f01d16356"} Jan 27 09:13:40 crc kubenswrapper[4985]: I0127 09:13:40.139244 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"564992d4-5b88-4124-9cfa-8ee67386599d","Type":"ContainerDied","Data":"52bb33bab0e117bd21761d5d5082242d242b4c1c5964f3c995600174452f0d48"} Jan 27 09:13:40 crc kubenswrapper[4985]: I0127 09:13:40.146223 4985 generic.go:334] "Generic (PLEG): container finished" podID="8131140c-f2fe-4495-8db7-d4ca6c2712a5" containerID="8d314c7a2be79e7137ae4149ba54d7530ad53ffb6447d2b9f17f5048490fbd03" exitCode=0 Jan 27 09:13:40 crc kubenswrapper[4985]: I0127 09:13:40.146259 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8131140c-f2fe-4495-8db7-d4ca6c2712a5","Type":"ContainerDied","Data":"8d314c7a2be79e7137ae4149ba54d7530ad53ffb6447d2b9f17f5048490fbd03"} Jan 27 09:13:40 crc kubenswrapper[4985]: I0127 09:13:40.146293 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8131140c-f2fe-4495-8db7-d4ca6c2712a5","Type":"ContainerDied","Data":"f6084fc3fc50ec8a1b5fdfe46a41e503fd6fd270dcfd281700151e1020b92487"} Jan 27 09:13:40 crc kubenswrapper[4985]: I0127 09:13:40.146317 4985 scope.go:117] "RemoveContainer" containerID="8d314c7a2be79e7137ae4149ba54d7530ad53ffb6447d2b9f17f5048490fbd03" Jan 27 09:13:40 crc kubenswrapper[4985]: I0127 09:13:40.146482 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 09:13:40 crc kubenswrapper[4985]: I0127 09:13:40.155715 4985 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:40 crc kubenswrapper[4985]: I0127 09:13:40.155747 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8131140c-f2fe-4495-8db7-d4ca6c2712a5-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:40 crc kubenswrapper[4985]: I0127 09:13:40.187346 4985 scope.go:117] "RemoveContainer" containerID="9c9031162b556e6ec43838485fb19772ef24ec05d28d6f063c4d687d99121589" Jan 27 09:13:40 crc kubenswrapper[4985]: I0127 09:13:40.192455 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 09:13:40 crc kubenswrapper[4985]: I0127 09:13:40.213750 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 09:13:40 crc kubenswrapper[4985]: I0127 09:13:40.221283 4985 scope.go:117] "RemoveContainer" containerID="8d314c7a2be79e7137ae4149ba54d7530ad53ffb6447d2b9f17f5048490fbd03" Jan 27 09:13:40 crc kubenswrapper[4985]: E0127 09:13:40.224697 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d314c7a2be79e7137ae4149ba54d7530ad53ffb6447d2b9f17f5048490fbd03\": container with ID starting with 8d314c7a2be79e7137ae4149ba54d7530ad53ffb6447d2b9f17f5048490fbd03 not found: ID does not exist" containerID="8d314c7a2be79e7137ae4149ba54d7530ad53ffb6447d2b9f17f5048490fbd03" Jan 27 09:13:40 crc kubenswrapper[4985]: I0127 09:13:40.224783 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d314c7a2be79e7137ae4149ba54d7530ad53ffb6447d2b9f17f5048490fbd03"} err="failed to get container status 
\"8d314c7a2be79e7137ae4149ba54d7530ad53ffb6447d2b9f17f5048490fbd03\": rpc error: code = NotFound desc = could not find container \"8d314c7a2be79e7137ae4149ba54d7530ad53ffb6447d2b9f17f5048490fbd03\": container with ID starting with 8d314c7a2be79e7137ae4149ba54d7530ad53ffb6447d2b9f17f5048490fbd03 not found: ID does not exist" Jan 27 09:13:40 crc kubenswrapper[4985]: I0127 09:13:40.224837 4985 scope.go:117] "RemoveContainer" containerID="9c9031162b556e6ec43838485fb19772ef24ec05d28d6f063c4d687d99121589" Jan 27 09:13:40 crc kubenswrapper[4985]: E0127 09:13:40.225207 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c9031162b556e6ec43838485fb19772ef24ec05d28d6f063c4d687d99121589\": container with ID starting with 9c9031162b556e6ec43838485fb19772ef24ec05d28d6f063c4d687d99121589 not found: ID does not exist" containerID="9c9031162b556e6ec43838485fb19772ef24ec05d28d6f063c4d687d99121589" Jan 27 09:13:40 crc kubenswrapper[4985]: I0127 09:13:40.225246 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c9031162b556e6ec43838485fb19772ef24ec05d28d6f063c4d687d99121589"} err="failed to get container status \"9c9031162b556e6ec43838485fb19772ef24ec05d28d6f063c4d687d99121589\": rpc error: code = NotFound desc = could not find container \"9c9031162b556e6ec43838485fb19772ef24ec05d28d6f063c4d687d99121589\": container with ID starting with 9c9031162b556e6ec43838485fb19772ef24ec05d28d6f063c4d687d99121589 not found: ID does not exist" Jan 27 09:13:40 crc kubenswrapper[4985]: I0127 09:13:40.237820 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 09:13:40 crc kubenswrapper[4985]: E0127 09:13:40.238483 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8131140c-f2fe-4495-8db7-d4ca6c2712a5" containerName="glance-log" Jan 27 09:13:40 crc kubenswrapper[4985]: I0127 09:13:40.238506 4985 
state_mem.go:107] "Deleted CPUSet assignment" podUID="8131140c-f2fe-4495-8db7-d4ca6c2712a5" containerName="glance-log" Jan 27 09:13:40 crc kubenswrapper[4985]: E0127 09:13:40.238552 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8131140c-f2fe-4495-8db7-d4ca6c2712a5" containerName="glance-httpd" Jan 27 09:13:40 crc kubenswrapper[4985]: I0127 09:13:40.238562 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="8131140c-f2fe-4495-8db7-d4ca6c2712a5" containerName="glance-httpd" Jan 27 09:13:40 crc kubenswrapper[4985]: I0127 09:13:40.238829 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="8131140c-f2fe-4495-8db7-d4ca6c2712a5" containerName="glance-httpd" Jan 27 09:13:40 crc kubenswrapper[4985]: I0127 09:13:40.238865 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="8131140c-f2fe-4495-8db7-d4ca6c2712a5" containerName="glance-log" Jan 27 09:13:40 crc kubenswrapper[4985]: I0127 09:13:40.240499 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 09:13:40 crc kubenswrapper[4985]: I0127 09:13:40.251300 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 09:13:40 crc kubenswrapper[4985]: I0127 09:13:40.254335 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 27 09:13:40 crc kubenswrapper[4985]: I0127 09:13:40.257638 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 27 09:13:40 crc kubenswrapper[4985]: I0127 09:13:40.360548 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3262de3-5394-485b-a572-14d824be6c29-config-data\") pod \"glance-default-external-api-0\" (UID: \"c3262de3-5394-485b-a572-14d824be6c29\") " pod="openstack/glance-default-external-api-0" Jan 27 09:13:40 crc kubenswrapper[4985]: I0127 09:13:40.360677 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c3262de3-5394-485b-a572-14d824be6c29-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c3262de3-5394-485b-a572-14d824be6c29\") " pod="openstack/glance-default-external-api-0" Jan 27 09:13:40 crc kubenswrapper[4985]: I0127 09:13:40.360720 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3262de3-5394-485b-a572-14d824be6c29-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c3262de3-5394-485b-a572-14d824be6c29\") " pod="openstack/glance-default-external-api-0" Jan 27 09:13:40 crc kubenswrapper[4985]: I0127 09:13:40.360763 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5fz2\" 
(UniqueName: \"kubernetes.io/projected/c3262de3-5394-485b-a572-14d824be6c29-kube-api-access-z5fz2\") pod \"glance-default-external-api-0\" (UID: \"c3262de3-5394-485b-a572-14d824be6c29\") " pod="openstack/glance-default-external-api-0" Jan 27 09:13:40 crc kubenswrapper[4985]: I0127 09:13:40.360833 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"c3262de3-5394-485b-a572-14d824be6c29\") " pod="openstack/glance-default-external-api-0" Jan 27 09:13:40 crc kubenswrapper[4985]: I0127 09:13:40.360874 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3262de3-5394-485b-a572-14d824be6c29-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c3262de3-5394-485b-a572-14d824be6c29\") " pod="openstack/glance-default-external-api-0" Jan 27 09:13:40 crc kubenswrapper[4985]: I0127 09:13:40.360936 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3262de3-5394-485b-a572-14d824be6c29-scripts\") pod \"glance-default-external-api-0\" (UID: \"c3262de3-5394-485b-a572-14d824be6c29\") " pod="openstack/glance-default-external-api-0" Jan 27 09:13:40 crc kubenswrapper[4985]: I0127 09:13:40.360981 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3262de3-5394-485b-a572-14d824be6c29-logs\") pod \"glance-default-external-api-0\" (UID: \"c3262de3-5394-485b-a572-14d824be6c29\") " pod="openstack/glance-default-external-api-0" Jan 27 09:13:40 crc kubenswrapper[4985]: I0127 09:13:40.462372 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c3262de3-5394-485b-a572-14d824be6c29-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c3262de3-5394-485b-a572-14d824be6c29\") " pod="openstack/glance-default-external-api-0" Jan 27 09:13:40 crc kubenswrapper[4985]: I0127 09:13:40.463085 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5fz2\" (UniqueName: \"kubernetes.io/projected/c3262de3-5394-485b-a572-14d824be6c29-kube-api-access-z5fz2\") pod \"glance-default-external-api-0\" (UID: \"c3262de3-5394-485b-a572-14d824be6c29\") " pod="openstack/glance-default-external-api-0" Jan 27 09:13:40 crc kubenswrapper[4985]: I0127 09:13:40.463217 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"c3262de3-5394-485b-a572-14d824be6c29\") " pod="openstack/glance-default-external-api-0" Jan 27 09:13:40 crc kubenswrapper[4985]: I0127 09:13:40.463317 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3262de3-5394-485b-a572-14d824be6c29-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c3262de3-5394-485b-a572-14d824be6c29\") " pod="openstack/glance-default-external-api-0" Jan 27 09:13:40 crc kubenswrapper[4985]: I0127 09:13:40.463420 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3262de3-5394-485b-a572-14d824be6c29-scripts\") pod \"glance-default-external-api-0\" (UID: \"c3262de3-5394-485b-a572-14d824be6c29\") " pod="openstack/glance-default-external-api-0" Jan 27 09:13:40 crc kubenswrapper[4985]: I0127 09:13:40.463368 4985 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod 
\"glance-default-external-api-0\" (UID: \"c3262de3-5394-485b-a572-14d824be6c29\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-external-api-0" Jan 27 09:13:40 crc kubenswrapper[4985]: I0127 09:13:40.463590 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3262de3-5394-485b-a572-14d824be6c29-logs\") pod \"glance-default-external-api-0\" (UID: \"c3262de3-5394-485b-a572-14d824be6c29\") " pod="openstack/glance-default-external-api-0" Jan 27 09:13:40 crc kubenswrapper[4985]: I0127 09:13:40.463705 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3262de3-5394-485b-a572-14d824be6c29-config-data\") pod \"glance-default-external-api-0\" (UID: \"c3262de3-5394-485b-a572-14d824be6c29\") " pod="openstack/glance-default-external-api-0" Jan 27 09:13:40 crc kubenswrapper[4985]: I0127 09:13:40.463809 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c3262de3-5394-485b-a572-14d824be6c29-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c3262de3-5394-485b-a572-14d824be6c29\") " pod="openstack/glance-default-external-api-0" Jan 27 09:13:40 crc kubenswrapper[4985]: I0127 09:13:40.464482 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c3262de3-5394-485b-a572-14d824be6c29-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c3262de3-5394-485b-a572-14d824be6c29\") " pod="openstack/glance-default-external-api-0" Jan 27 09:13:40 crc kubenswrapper[4985]: I0127 09:13:40.464908 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3262de3-5394-485b-a572-14d824be6c29-logs\") pod \"glance-default-external-api-0\" (UID: \"c3262de3-5394-485b-a572-14d824be6c29\") " 
pod="openstack/glance-default-external-api-0" Jan 27 09:13:40 crc kubenswrapper[4985]: I0127 09:13:40.467294 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8131140c-f2fe-4495-8db7-d4ca6c2712a5" path="/var/lib/kubelet/pods/8131140c-f2fe-4495-8db7-d4ca6c2712a5/volumes" Jan 27 09:13:40 crc kubenswrapper[4985]: I0127 09:13:40.470629 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3262de3-5394-485b-a572-14d824be6c29-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c3262de3-5394-485b-a572-14d824be6c29\") " pod="openstack/glance-default-external-api-0" Jan 27 09:13:40 crc kubenswrapper[4985]: I0127 09:13:40.472214 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3262de3-5394-485b-a572-14d824be6c29-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c3262de3-5394-485b-a572-14d824be6c29\") " pod="openstack/glance-default-external-api-0" Jan 27 09:13:40 crc kubenswrapper[4985]: I0127 09:13:40.474107 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3262de3-5394-485b-a572-14d824be6c29-scripts\") pod \"glance-default-external-api-0\" (UID: \"c3262de3-5394-485b-a572-14d824be6c29\") " pod="openstack/glance-default-external-api-0" Jan 27 09:13:40 crc kubenswrapper[4985]: I0127 09:13:40.479315 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3262de3-5394-485b-a572-14d824be6c29-config-data\") pod \"glance-default-external-api-0\" (UID: \"c3262de3-5394-485b-a572-14d824be6c29\") " pod="openstack/glance-default-external-api-0" Jan 27 09:13:40 crc kubenswrapper[4985]: I0127 09:13:40.483970 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5fz2\" (UniqueName: 
\"kubernetes.io/projected/c3262de3-5394-485b-a572-14d824be6c29-kube-api-access-z5fz2\") pod \"glance-default-external-api-0\" (UID: \"c3262de3-5394-485b-a572-14d824be6c29\") " pod="openstack/glance-default-external-api-0" Jan 27 09:13:40 crc kubenswrapper[4985]: I0127 09:13:40.511344 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"c3262de3-5394-485b-a572-14d824be6c29\") " pod="openstack/glance-default-external-api-0" Jan 27 09:13:40 crc kubenswrapper[4985]: I0127 09:13:40.584018 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 09:13:40 crc kubenswrapper[4985]: I0127 09:13:40.748095 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 09:13:41 crc kubenswrapper[4985]: I0127 09:13:41.231778 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 09:13:41 crc kubenswrapper[4985]: I0127 09:13:41.828439 4985 patch_prober.go:28] interesting pod/machine-config-daemon-lp9n5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 09:13:41 crc kubenswrapper[4985]: I0127 09:13:41.828928 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 09:13:42 crc kubenswrapper[4985]: I0127 09:13:42.188029 4985 generic.go:334] "Generic (PLEG): container finished" 
podID="1bf8a06c-1e18-40f8-bcde-5996d4f80767" containerID="b8f5d57fdfa411a702d4ed40dde405c4c89d98dc4aafc021724c96b6944f981c" exitCode=0 Jan 27 09:13:42 crc kubenswrapper[4985]: I0127 09:13:42.188675 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1bf8a06c-1e18-40f8-bcde-5996d4f80767","Type":"ContainerDied","Data":"b8f5d57fdfa411a702d4ed40dde405c4c89d98dc4aafc021724c96b6944f981c"} Jan 27 09:13:42 crc kubenswrapper[4985]: I0127 09:13:42.210758 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c3262de3-5394-485b-a572-14d824be6c29","Type":"ContainerStarted","Data":"721c9119f0e7b20c506f0a5f43bc84367fc544cdebe32b8326df3e452dedb9f5"} Jan 27 09:13:42 crc kubenswrapper[4985]: I0127 09:13:42.210860 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c3262de3-5394-485b-a572-14d824be6c29","Type":"ContainerStarted","Data":"ff6cad7faa6f7844cf904ca31eca600d3390b4d56c260e9b6e35b985c7d10868"} Jan 27 09:13:42 crc kubenswrapper[4985]: I0127 09:13:42.323600 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 27 09:13:42 crc kubenswrapper[4985]: I0127 09:13:42.640051 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 09:13:42 crc kubenswrapper[4985]: I0127 09:13:42.721396 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1bf8a06c-1e18-40f8-bcde-5996d4f80767-httpd-run\") pod \"1bf8a06c-1e18-40f8-bcde-5996d4f80767\" (UID: \"1bf8a06c-1e18-40f8-bcde-5996d4f80767\") " Jan 27 09:13:42 crc kubenswrapper[4985]: I0127 09:13:42.721451 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1bf8a06c-1e18-40f8-bcde-5996d4f80767-internal-tls-certs\") pod \"1bf8a06c-1e18-40f8-bcde-5996d4f80767\" (UID: \"1bf8a06c-1e18-40f8-bcde-5996d4f80767\") " Jan 27 09:13:42 crc kubenswrapper[4985]: I0127 09:13:42.721546 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4rmw8\" (UniqueName: \"kubernetes.io/projected/1bf8a06c-1e18-40f8-bcde-5996d4f80767-kube-api-access-4rmw8\") pod \"1bf8a06c-1e18-40f8-bcde-5996d4f80767\" (UID: \"1bf8a06c-1e18-40f8-bcde-5996d4f80767\") " Jan 27 09:13:42 crc kubenswrapper[4985]: I0127 09:13:42.721566 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"1bf8a06c-1e18-40f8-bcde-5996d4f80767\" (UID: \"1bf8a06c-1e18-40f8-bcde-5996d4f80767\") " Jan 27 09:13:42 crc kubenswrapper[4985]: I0127 09:13:42.721625 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bf8a06c-1e18-40f8-bcde-5996d4f80767-config-data\") pod \"1bf8a06c-1e18-40f8-bcde-5996d4f80767\" (UID: \"1bf8a06c-1e18-40f8-bcde-5996d4f80767\") " Jan 27 09:13:42 crc kubenswrapper[4985]: I0127 09:13:42.721746 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1bf8a06c-1e18-40f8-bcde-5996d4f80767-combined-ca-bundle\") pod \"1bf8a06c-1e18-40f8-bcde-5996d4f80767\" (UID: \"1bf8a06c-1e18-40f8-bcde-5996d4f80767\") " Jan 27 09:13:42 crc kubenswrapper[4985]: I0127 09:13:42.721777 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1bf8a06c-1e18-40f8-bcde-5996d4f80767-logs\") pod \"1bf8a06c-1e18-40f8-bcde-5996d4f80767\" (UID: \"1bf8a06c-1e18-40f8-bcde-5996d4f80767\") " Jan 27 09:13:42 crc kubenswrapper[4985]: I0127 09:13:42.721835 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1bf8a06c-1e18-40f8-bcde-5996d4f80767-scripts\") pod \"1bf8a06c-1e18-40f8-bcde-5996d4f80767\" (UID: \"1bf8a06c-1e18-40f8-bcde-5996d4f80767\") " Jan 27 09:13:42 crc kubenswrapper[4985]: I0127 09:13:42.722143 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1bf8a06c-1e18-40f8-bcde-5996d4f80767-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "1bf8a06c-1e18-40f8-bcde-5996d4f80767" (UID: "1bf8a06c-1e18-40f8-bcde-5996d4f80767"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:13:42 crc kubenswrapper[4985]: I0127 09:13:42.725193 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1bf8a06c-1e18-40f8-bcde-5996d4f80767-logs" (OuterVolumeSpecName: "logs") pod "1bf8a06c-1e18-40f8-bcde-5996d4f80767" (UID: "1bf8a06c-1e18-40f8-bcde-5996d4f80767"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:13:42 crc kubenswrapper[4985]: I0127 09:13:42.738357 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf8a06c-1e18-40f8-bcde-5996d4f80767-kube-api-access-4rmw8" (OuterVolumeSpecName: "kube-api-access-4rmw8") pod "1bf8a06c-1e18-40f8-bcde-5996d4f80767" (UID: "1bf8a06c-1e18-40f8-bcde-5996d4f80767"). InnerVolumeSpecName "kube-api-access-4rmw8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:13:42 crc kubenswrapper[4985]: I0127 09:13:42.748697 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "1bf8a06c-1e18-40f8-bcde-5996d4f80767" (UID: "1bf8a06c-1e18-40f8-bcde-5996d4f80767"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 27 09:13:42 crc kubenswrapper[4985]: I0127 09:13:42.764126 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf8a06c-1e18-40f8-bcde-5996d4f80767-scripts" (OuterVolumeSpecName: "scripts") pod "1bf8a06c-1e18-40f8-bcde-5996d4f80767" (UID: "1bf8a06c-1e18-40f8-bcde-5996d4f80767"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:13:42 crc kubenswrapper[4985]: I0127 09:13:42.824144 4985 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1bf8a06c-1e18-40f8-bcde-5996d4f80767-logs\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:42 crc kubenswrapper[4985]: I0127 09:13:42.825040 4985 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1bf8a06c-1e18-40f8-bcde-5996d4f80767-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:42 crc kubenswrapper[4985]: I0127 09:13:42.825131 4985 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1bf8a06c-1e18-40f8-bcde-5996d4f80767-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:42 crc kubenswrapper[4985]: I0127 09:13:42.825210 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4rmw8\" (UniqueName: \"kubernetes.io/projected/1bf8a06c-1e18-40f8-bcde-5996d4f80767-kube-api-access-4rmw8\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:42 crc kubenswrapper[4985]: I0127 09:13:42.825317 4985 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Jan 27 09:13:42 crc kubenswrapper[4985]: I0127 09:13:42.899788 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf8a06c-1e18-40f8-bcde-5996d4f80767-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1bf8a06c-1e18-40f8-bcde-5996d4f80767" (UID: "1bf8a06c-1e18-40f8-bcde-5996d4f80767"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:13:42 crc kubenswrapper[4985]: I0127 09:13:42.930547 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bf8a06c-1e18-40f8-bcde-5996d4f80767-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:42 crc kubenswrapper[4985]: I0127 09:13:42.953838 4985 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Jan 27 09:13:42 crc kubenswrapper[4985]: I0127 09:13:42.960862 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf8a06c-1e18-40f8-bcde-5996d4f80767-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "1bf8a06c-1e18-40f8-bcde-5996d4f80767" (UID: "1bf8a06c-1e18-40f8-bcde-5996d4f80767"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:13:42 crc kubenswrapper[4985]: I0127 09:13:42.988412 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf8a06c-1e18-40f8-bcde-5996d4f80767-config-data" (OuterVolumeSpecName: "config-data") pod "1bf8a06c-1e18-40f8-bcde-5996d4f80767" (UID: "1bf8a06c-1e18-40f8-bcde-5996d4f80767"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.033715 4985 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1bf8a06c-1e18-40f8-bcde-5996d4f80767-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.033761 4985 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.033776 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bf8a06c-1e18-40f8-bcde-5996d4f80767-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.089553 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.226920 4985 generic.go:334] "Generic (PLEG): container finished" podID="564992d4-5b88-4124-9cfa-8ee67386599d" containerID="f71f8f167aaadc8b6c7ea85dba10022b1e6b7cfa9ab45b33f778596ca8e9542c" exitCode=0 Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.227026 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"564992d4-5b88-4124-9cfa-8ee67386599d","Type":"ContainerDied","Data":"f71f8f167aaadc8b6c7ea85dba10022b1e6b7cfa9ab45b33f778596ca8e9542c"} Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.227065 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"564992d4-5b88-4124-9cfa-8ee67386599d","Type":"ContainerDied","Data":"748acfd8e27e8c9194c243af132239721e57d17fd25a82b28fd44f59433e3ec5"} Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.227090 4985 scope.go:117] "RemoveContainer" 
containerID="1bbef3545834d79a0d9a02f05b68baab4f4fffee4eeaf5de5a1efc94a8f7b752" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.227189 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.233453 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1bf8a06c-1e18-40f8-bcde-5996d4f80767","Type":"ContainerDied","Data":"37568ce2b910a694715cb83a4e8f2a5b3d64559714c87c8fe546368aad573b44"} Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.233495 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.236750 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/564992d4-5b88-4124-9cfa-8ee67386599d-combined-ca-bundle\") pod \"564992d4-5b88-4124-9cfa-8ee67386599d\" (UID: \"564992d4-5b88-4124-9cfa-8ee67386599d\") " Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.236798 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/564992d4-5b88-4124-9cfa-8ee67386599d-run-httpd\") pod \"564992d4-5b88-4124-9cfa-8ee67386599d\" (UID: \"564992d4-5b88-4124-9cfa-8ee67386599d\") " Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.236871 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/564992d4-5b88-4124-9cfa-8ee67386599d-sg-core-conf-yaml\") pod \"564992d4-5b88-4124-9cfa-8ee67386599d\" (UID: \"564992d4-5b88-4124-9cfa-8ee67386599d\") " Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.236979 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/564992d4-5b88-4124-9cfa-8ee67386599d-scripts\") pod \"564992d4-5b88-4124-9cfa-8ee67386599d\" (UID: \"564992d4-5b88-4124-9cfa-8ee67386599d\") " Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.237098 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/564992d4-5b88-4124-9cfa-8ee67386599d-log-httpd\") pod \"564992d4-5b88-4124-9cfa-8ee67386599d\" (UID: \"564992d4-5b88-4124-9cfa-8ee67386599d\") " Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.237166 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/564992d4-5b88-4124-9cfa-8ee67386599d-config-data\") pod \"564992d4-5b88-4124-9cfa-8ee67386599d\" (UID: \"564992d4-5b88-4124-9cfa-8ee67386599d\") " Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.237245 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s825z\" (UniqueName: \"kubernetes.io/projected/564992d4-5b88-4124-9cfa-8ee67386599d-kube-api-access-s825z\") pod \"564992d4-5b88-4124-9cfa-8ee67386599d\" (UID: \"564992d4-5b88-4124-9cfa-8ee67386599d\") " Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.239055 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c3262de3-5394-485b-a572-14d824be6c29","Type":"ContainerStarted","Data":"2b728a020a24216d1dfca023a0f4aa95462dc5732ddf4a78adb1945907dd15a1"} Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.239237 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="c3262de3-5394-485b-a572-14d824be6c29" containerName="glance-log" containerID="cri-o://721c9119f0e7b20c506f0a5f43bc84367fc544cdebe32b8326df3e452dedb9f5" gracePeriod=30 Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.239678 4985 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="c3262de3-5394-485b-a572-14d824be6c29" containerName="glance-httpd" containerID="cri-o://2b728a020a24216d1dfca023a0f4aa95462dc5732ddf4a78adb1945907dd15a1" gracePeriod=30 Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.244439 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/564992d4-5b88-4124-9cfa-8ee67386599d-kube-api-access-s825z" (OuterVolumeSpecName: "kube-api-access-s825z") pod "564992d4-5b88-4124-9cfa-8ee67386599d" (UID: "564992d4-5b88-4124-9cfa-8ee67386599d"). InnerVolumeSpecName "kube-api-access-s825z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.245214 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/564992d4-5b88-4124-9cfa-8ee67386599d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "564992d4-5b88-4124-9cfa-8ee67386599d" (UID: "564992d4-5b88-4124-9cfa-8ee67386599d"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.245324 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/564992d4-5b88-4124-9cfa-8ee67386599d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "564992d4-5b88-4124-9cfa-8ee67386599d" (UID: "564992d4-5b88-4124-9cfa-8ee67386599d"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.250726 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/564992d4-5b88-4124-9cfa-8ee67386599d-scripts" (OuterVolumeSpecName: "scripts") pod "564992d4-5b88-4124-9cfa-8ee67386599d" (UID: "564992d4-5b88-4124-9cfa-8ee67386599d"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.270899 4985 scope.go:117] "RemoveContainer" containerID="cdd243c0199c27305be61b031e325930bcc8610d585e1867bbe2148f01d16356" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.305170 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.305148524 podStartE2EDuration="3.305148524s" podCreationTimestamp="2026-01-27 09:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:13:43.288418555 +0000 UTC m=+1207.579513416" watchObservedRunningTime="2026-01-27 09:13:43.305148524 +0000 UTC m=+1207.596243365" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.348027 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/564992d4-5b88-4124-9cfa-8ee67386599d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "564992d4-5b88-4124-9cfa-8ee67386599d" (UID: "564992d4-5b88-4124-9cfa-8ee67386599d"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.349621 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s825z\" (UniqueName: \"kubernetes.io/projected/564992d4-5b88-4124-9cfa-8ee67386599d-kube-api-access-s825z\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.349647 4985 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/564992d4-5b88-4124-9cfa-8ee67386599d-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.349656 4985 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/564992d4-5b88-4124-9cfa-8ee67386599d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.349664 4985 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/564992d4-5b88-4124-9cfa-8ee67386599d-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.349673 4985 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/564992d4-5b88-4124-9cfa-8ee67386599d-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.445579 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/564992d4-5b88-4124-9cfa-8ee67386599d-config-data" (OuterVolumeSpecName: "config-data") pod "564992d4-5b88-4124-9cfa-8ee67386599d" (UID: "564992d4-5b88-4124-9cfa-8ee67386599d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.453008 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/564992d4-5b88-4124-9cfa-8ee67386599d-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.497662 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.514855 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.515840 4985 scope.go:117] "RemoveContainer" containerID="52bb33bab0e117bd21761d5d5082242d242b4c1c5964f3c995600174452f0d48" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.516365 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/564992d4-5b88-4124-9cfa-8ee67386599d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "564992d4-5b88-4124-9cfa-8ee67386599d" (UID: "564992d4-5b88-4124-9cfa-8ee67386599d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.525504 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 09:13:43 crc kubenswrapper[4985]: E0127 09:13:43.526010 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bf8a06c-1e18-40f8-bcde-5996d4f80767" containerName="glance-httpd" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.526038 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bf8a06c-1e18-40f8-bcde-5996d4f80767" containerName="glance-httpd" Jan 27 09:13:43 crc kubenswrapper[4985]: E0127 09:13:43.526060 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bf8a06c-1e18-40f8-bcde-5996d4f80767" containerName="glance-log" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.526068 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bf8a06c-1e18-40f8-bcde-5996d4f80767" containerName="glance-log" Jan 27 09:13:43 crc kubenswrapper[4985]: E0127 09:13:43.526083 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="564992d4-5b88-4124-9cfa-8ee67386599d" containerName="sg-core" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.526089 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="564992d4-5b88-4124-9cfa-8ee67386599d" containerName="sg-core" Jan 27 09:13:43 crc kubenswrapper[4985]: E0127 09:13:43.526104 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="564992d4-5b88-4124-9cfa-8ee67386599d" containerName="proxy-httpd" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.526111 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="564992d4-5b88-4124-9cfa-8ee67386599d" containerName="proxy-httpd" Jan 27 09:13:43 crc kubenswrapper[4985]: E0127 09:13:43.526122 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="564992d4-5b88-4124-9cfa-8ee67386599d" containerName="ceilometer-notification-agent" Jan 27 09:13:43 
crc kubenswrapper[4985]: I0127 09:13:43.526129 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="564992d4-5b88-4124-9cfa-8ee67386599d" containerName="ceilometer-notification-agent" Jan 27 09:13:43 crc kubenswrapper[4985]: E0127 09:13:43.526146 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="564992d4-5b88-4124-9cfa-8ee67386599d" containerName="ceilometer-central-agent" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.526154 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="564992d4-5b88-4124-9cfa-8ee67386599d" containerName="ceilometer-central-agent" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.526602 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bf8a06c-1e18-40f8-bcde-5996d4f80767" containerName="glance-httpd" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.526637 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="564992d4-5b88-4124-9cfa-8ee67386599d" containerName="ceilometer-notification-agent" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.526659 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bf8a06c-1e18-40f8-bcde-5996d4f80767" containerName="glance-log" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.526672 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="564992d4-5b88-4124-9cfa-8ee67386599d" containerName="proxy-httpd" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.526686 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="564992d4-5b88-4124-9cfa-8ee67386599d" containerName="ceilometer-central-agent" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.526700 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="564992d4-5b88-4124-9cfa-8ee67386599d" containerName="sg-core" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.527916 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.546606 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.547250 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.554586 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/564992d4-5b88-4124-9cfa-8ee67386599d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.574366 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.597629 4985 scope.go:117] "RemoveContainer" containerID="f71f8f167aaadc8b6c7ea85dba10022b1e6b7cfa9ab45b33f778596ca8e9542c" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.619395 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.644879 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.647691 4985 scope.go:117] "RemoveContainer" containerID="1bbef3545834d79a0d9a02f05b68baab4f4fffee4eeaf5de5a1efc94a8f7b752" Jan 27 09:13:43 crc kubenswrapper[4985]: E0127 09:13:43.652707 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1bbef3545834d79a0d9a02f05b68baab4f4fffee4eeaf5de5a1efc94a8f7b752\": container with ID starting with 1bbef3545834d79a0d9a02f05b68baab4f4fffee4eeaf5de5a1efc94a8f7b752 not found: ID does not exist" containerID="1bbef3545834d79a0d9a02f05b68baab4f4fffee4eeaf5de5a1efc94a8f7b752" Jan 27 
09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.652750 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bbef3545834d79a0d9a02f05b68baab4f4fffee4eeaf5de5a1efc94a8f7b752"} err="failed to get container status \"1bbef3545834d79a0d9a02f05b68baab4f4fffee4eeaf5de5a1efc94a8f7b752\": rpc error: code = NotFound desc = could not find container \"1bbef3545834d79a0d9a02f05b68baab4f4fffee4eeaf5de5a1efc94a8f7b752\": container with ID starting with 1bbef3545834d79a0d9a02f05b68baab4f4fffee4eeaf5de5a1efc94a8f7b752 not found: ID does not exist" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.652782 4985 scope.go:117] "RemoveContainer" containerID="cdd243c0199c27305be61b031e325930bcc8610d585e1867bbe2148f01d16356" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.655049 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 09:13:43 crc kubenswrapper[4985]: E0127 09:13:43.655806 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cdd243c0199c27305be61b031e325930bcc8610d585e1867bbe2148f01d16356\": container with ID starting with cdd243c0199c27305be61b031e325930bcc8610d585e1867bbe2148f01d16356 not found: ID does not exist" containerID="cdd243c0199c27305be61b031e325930bcc8610d585e1867bbe2148f01d16356" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.655836 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cdd243c0199c27305be61b031e325930bcc8610d585e1867bbe2148f01d16356"} err="failed to get container status \"cdd243c0199c27305be61b031e325930bcc8610d585e1867bbe2148f01d16356\": rpc error: code = NotFound desc = could not find container \"cdd243c0199c27305be61b031e325930bcc8610d585e1867bbe2148f01d16356\": container with ID starting with cdd243c0199c27305be61b031e325930bcc8610d585e1867bbe2148f01d16356 not found: ID does not exist" Jan 27 09:13:43 crc 
kubenswrapper[4985]: I0127 09:13:43.655857 4985 scope.go:117] "RemoveContainer" containerID="52bb33bab0e117bd21761d5d5082242d242b4c1c5964f3c995600174452f0d48" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.657454 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6682197b-86c6-4c58-9fa3-4b08340e9464-logs\") pod \"glance-default-internal-api-0\" (UID: \"6682197b-86c6-4c58-9fa3-4b08340e9464\") " pod="openstack/glance-default-internal-api-0" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.657525 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6682197b-86c6-4c58-9fa3-4b08340e9464-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6682197b-86c6-4c58-9fa3-4b08340e9464\") " pod="openstack/glance-default-internal-api-0" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.657570 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csl4c\" (UniqueName: \"kubernetes.io/projected/6682197b-86c6-4c58-9fa3-4b08340e9464-kube-api-access-csl4c\") pod \"glance-default-internal-api-0\" (UID: \"6682197b-86c6-4c58-9fa3-4b08340e9464\") " pod="openstack/glance-default-internal-api-0" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.657598 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6682197b-86c6-4c58-9fa3-4b08340e9464-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6682197b-86c6-4c58-9fa3-4b08340e9464\") " pod="openstack/glance-default-internal-api-0" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.657625 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/6682197b-86c6-4c58-9fa3-4b08340e9464-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6682197b-86c6-4c58-9fa3-4b08340e9464\") " pod="openstack/glance-default-internal-api-0" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.657661 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"6682197b-86c6-4c58-9fa3-4b08340e9464\") " pod="openstack/glance-default-internal-api-0" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.657684 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6682197b-86c6-4c58-9fa3-4b08340e9464-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6682197b-86c6-4c58-9fa3-4b08340e9464\") " pod="openstack/glance-default-internal-api-0" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.657805 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6682197b-86c6-4c58-9fa3-4b08340e9464-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"6682197b-86c6-4c58-9fa3-4b08340e9464\") " pod="openstack/glance-default-internal-api-0" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.657928 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.661912 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.670460 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 09:13:43 crc kubenswrapper[4985]: E0127 09:13:43.672739 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52bb33bab0e117bd21761d5d5082242d242b4c1c5964f3c995600174452f0d48\": container with ID starting with 52bb33bab0e117bd21761d5d5082242d242b4c1c5964f3c995600174452f0d48 not found: ID does not exist" containerID="52bb33bab0e117bd21761d5d5082242d242b4c1c5964f3c995600174452f0d48" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.672871 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52bb33bab0e117bd21761d5d5082242d242b4c1c5964f3c995600174452f0d48"} err="failed to get container status \"52bb33bab0e117bd21761d5d5082242d242b4c1c5964f3c995600174452f0d48\": rpc error: code = NotFound desc = could not find container \"52bb33bab0e117bd21761d5d5082242d242b4c1c5964f3c995600174452f0d48\": container with ID starting with 52bb33bab0e117bd21761d5d5082242d242b4c1c5964f3c995600174452f0d48 not found: ID does not exist" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.672914 4985 scope.go:117] "RemoveContainer" containerID="f71f8f167aaadc8b6c7ea85dba10022b1e6b7cfa9ab45b33f778596ca8e9542c" Jan 27 09:13:43 crc kubenswrapper[4985]: E0127 09:13:43.673907 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f71f8f167aaadc8b6c7ea85dba10022b1e6b7cfa9ab45b33f778596ca8e9542c\": container with ID starting with f71f8f167aaadc8b6c7ea85dba10022b1e6b7cfa9ab45b33f778596ca8e9542c not found: ID does not 
exist" containerID="f71f8f167aaadc8b6c7ea85dba10022b1e6b7cfa9ab45b33f778596ca8e9542c" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.673940 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f71f8f167aaadc8b6c7ea85dba10022b1e6b7cfa9ab45b33f778596ca8e9542c"} err="failed to get container status \"f71f8f167aaadc8b6c7ea85dba10022b1e6b7cfa9ab45b33f778596ca8e9542c\": rpc error: code = NotFound desc = could not find container \"f71f8f167aaadc8b6c7ea85dba10022b1e6b7cfa9ab45b33f778596ca8e9542c\": container with ID starting with f71f8f167aaadc8b6c7ea85dba10022b1e6b7cfa9ab45b33f778596ca8e9542c not found: ID does not exist" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.673975 4985 scope.go:117] "RemoveContainer" containerID="b8f5d57fdfa411a702d4ed40dde405c4c89d98dc4aafc021724c96b6944f981c" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.732273 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.771319 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48bcb21b-2fe6-4822-8536-e3575d036d90-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"48bcb21b-2fe6-4822-8536-e3575d036d90\") " pod="openstack/ceilometer-0" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.771379 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/48bcb21b-2fe6-4822-8536-e3575d036d90-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"48bcb21b-2fe6-4822-8536-e3575d036d90\") " pod="openstack/ceilometer-0" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.771415 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/48bcb21b-2fe6-4822-8536-e3575d036d90-config-data\") pod \"ceilometer-0\" (UID: \"48bcb21b-2fe6-4822-8536-e3575d036d90\") " pod="openstack/ceilometer-0" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.771497 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkzq8\" (UniqueName: \"kubernetes.io/projected/48bcb21b-2fe6-4822-8536-e3575d036d90-kube-api-access-xkzq8\") pod \"ceilometer-0\" (UID: \"48bcb21b-2fe6-4822-8536-e3575d036d90\") " pod="openstack/ceilometer-0" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.771518 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/48bcb21b-2fe6-4822-8536-e3575d036d90-run-httpd\") pod \"ceilometer-0\" (UID: \"48bcb21b-2fe6-4822-8536-e3575d036d90\") " pod="openstack/ceilometer-0" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.771570 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6682197b-86c6-4c58-9fa3-4b08340e9464-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"6682197b-86c6-4c58-9fa3-4b08340e9464\") " pod="openstack/glance-default-internal-api-0" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.771621 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6682197b-86c6-4c58-9fa3-4b08340e9464-logs\") pod \"glance-default-internal-api-0\" (UID: \"6682197b-86c6-4c58-9fa3-4b08340e9464\") " pod="openstack/glance-default-internal-api-0" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.771666 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48bcb21b-2fe6-4822-8536-e3575d036d90-scripts\") pod \"ceilometer-0\" (UID: 
\"48bcb21b-2fe6-4822-8536-e3575d036d90\") " pod="openstack/ceilometer-0" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.771706 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6682197b-86c6-4c58-9fa3-4b08340e9464-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6682197b-86c6-4c58-9fa3-4b08340e9464\") " pod="openstack/glance-default-internal-api-0" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.771730 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csl4c\" (UniqueName: \"kubernetes.io/projected/6682197b-86c6-4c58-9fa3-4b08340e9464-kube-api-access-csl4c\") pod \"glance-default-internal-api-0\" (UID: \"6682197b-86c6-4c58-9fa3-4b08340e9464\") " pod="openstack/glance-default-internal-api-0" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.771756 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6682197b-86c6-4c58-9fa3-4b08340e9464-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6682197b-86c6-4c58-9fa3-4b08340e9464\") " pod="openstack/glance-default-internal-api-0" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.771779 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6682197b-86c6-4c58-9fa3-4b08340e9464-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6682197b-86c6-4c58-9fa3-4b08340e9464\") " pod="openstack/glance-default-internal-api-0" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.771812 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/48bcb21b-2fe6-4822-8536-e3575d036d90-log-httpd\") pod \"ceilometer-0\" (UID: \"48bcb21b-2fe6-4822-8536-e3575d036d90\") " pod="openstack/ceilometer-0" Jan 27 09:13:43 
crc kubenswrapper[4985]: I0127 09:13:43.771835 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"6682197b-86c6-4c58-9fa3-4b08340e9464\") " pod="openstack/glance-default-internal-api-0" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.771859 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6682197b-86c6-4c58-9fa3-4b08340e9464-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6682197b-86c6-4c58-9fa3-4b08340e9464\") " pod="openstack/glance-default-internal-api-0" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.772817 4985 scope.go:117] "RemoveContainer" containerID="c485247ca576864cbc121bc64cf3b79601f66789ddb28ba9c38e4b3a30e079f9" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.773229 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6682197b-86c6-4c58-9fa3-4b08340e9464-logs\") pod \"glance-default-internal-api-0\" (UID: \"6682197b-86c6-4c58-9fa3-4b08340e9464\") " pod="openstack/glance-default-internal-api-0" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.774140 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6682197b-86c6-4c58-9fa3-4b08340e9464-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6682197b-86c6-4c58-9fa3-4b08340e9464\") " pod="openstack/glance-default-internal-api-0" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.776653 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6682197b-86c6-4c58-9fa3-4b08340e9464-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6682197b-86c6-4c58-9fa3-4b08340e9464\") " 
pod="openstack/glance-default-internal-api-0" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.777438 4985 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"6682197b-86c6-4c58-9fa3-4b08340e9464\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-internal-api-0" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.778176 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6682197b-86c6-4c58-9fa3-4b08340e9464-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6682197b-86c6-4c58-9fa3-4b08340e9464\") " pod="openstack/glance-default-internal-api-0" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.783789 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6682197b-86c6-4c58-9fa3-4b08340e9464-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6682197b-86c6-4c58-9fa3-4b08340e9464\") " pod="openstack/glance-default-internal-api-0" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.805465 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csl4c\" (UniqueName: \"kubernetes.io/projected/6682197b-86c6-4c58-9fa3-4b08340e9464-kube-api-access-csl4c\") pod \"glance-default-internal-api-0\" (UID: \"6682197b-86c6-4c58-9fa3-4b08340e9464\") " pod="openstack/glance-default-internal-api-0" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.812182 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6682197b-86c6-4c58-9fa3-4b08340e9464-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"6682197b-86c6-4c58-9fa3-4b08340e9464\") " pod="openstack/glance-default-internal-api-0" 
Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.833275 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"6682197b-86c6-4c58-9fa3-4b08340e9464\") " pod="openstack/glance-default-internal-api-0" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.876419 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkzq8\" (UniqueName: \"kubernetes.io/projected/48bcb21b-2fe6-4822-8536-e3575d036d90-kube-api-access-xkzq8\") pod \"ceilometer-0\" (UID: \"48bcb21b-2fe6-4822-8536-e3575d036d90\") " pod="openstack/ceilometer-0" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.876490 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/48bcb21b-2fe6-4822-8536-e3575d036d90-run-httpd\") pod \"ceilometer-0\" (UID: \"48bcb21b-2fe6-4822-8536-e3575d036d90\") " pod="openstack/ceilometer-0" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.876650 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48bcb21b-2fe6-4822-8536-e3575d036d90-scripts\") pod \"ceilometer-0\" (UID: \"48bcb21b-2fe6-4822-8536-e3575d036d90\") " pod="openstack/ceilometer-0" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.876789 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/48bcb21b-2fe6-4822-8536-e3575d036d90-log-httpd\") pod \"ceilometer-0\" (UID: \"48bcb21b-2fe6-4822-8536-e3575d036d90\") " pod="openstack/ceilometer-0" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.876910 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/48bcb21b-2fe6-4822-8536-e3575d036d90-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"48bcb21b-2fe6-4822-8536-e3575d036d90\") " pod="openstack/ceilometer-0" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.876933 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/48bcb21b-2fe6-4822-8536-e3575d036d90-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"48bcb21b-2fe6-4822-8536-e3575d036d90\") " pod="openstack/ceilometer-0" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.876969 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48bcb21b-2fe6-4822-8536-e3575d036d90-config-data\") pod \"ceilometer-0\" (UID: \"48bcb21b-2fe6-4822-8536-e3575d036d90\") " pod="openstack/ceilometer-0" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.886190 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/48bcb21b-2fe6-4822-8536-e3575d036d90-run-httpd\") pod \"ceilometer-0\" (UID: \"48bcb21b-2fe6-4822-8536-e3575d036d90\") " pod="openstack/ceilometer-0" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.888296 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.890300 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/48bcb21b-2fe6-4822-8536-e3575d036d90-log-httpd\") pod \"ceilometer-0\" (UID: \"48bcb21b-2fe6-4822-8536-e3575d036d90\") " pod="openstack/ceilometer-0" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.891237 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48bcb21b-2fe6-4822-8536-e3575d036d90-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"48bcb21b-2fe6-4822-8536-e3575d036d90\") " pod="openstack/ceilometer-0" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.894957 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48bcb21b-2fe6-4822-8536-e3575d036d90-config-data\") pod \"ceilometer-0\" (UID: \"48bcb21b-2fe6-4822-8536-e3575d036d90\") " pod="openstack/ceilometer-0" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.896261 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/48bcb21b-2fe6-4822-8536-e3575d036d90-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"48bcb21b-2fe6-4822-8536-e3575d036d90\") " pod="openstack/ceilometer-0" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.900098 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48bcb21b-2fe6-4822-8536-e3575d036d90-scripts\") pod \"ceilometer-0\" (UID: \"48bcb21b-2fe6-4822-8536-e3575d036d90\") " pod="openstack/ceilometer-0" Jan 27 09:13:43 crc kubenswrapper[4985]: I0127 09:13:43.903319 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkzq8\" (UniqueName: 
\"kubernetes.io/projected/48bcb21b-2fe6-4822-8536-e3575d036d90-kube-api-access-xkzq8\") pod \"ceilometer-0\" (UID: \"48bcb21b-2fe6-4822-8536-e3575d036d90\") " pod="openstack/ceilometer-0" Jan 27 09:13:44 crc kubenswrapper[4985]: I0127 09:13:44.001060 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 09:13:44 crc kubenswrapper[4985]: I0127 09:13:44.009994 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 27 09:13:44 crc kubenswrapper[4985]: I0127 09:13:44.169646 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 09:13:44 crc kubenswrapper[4985]: I0127 09:13:44.280114 4985 generic.go:334] "Generic (PLEG): container finished" podID="c3262de3-5394-485b-a572-14d824be6c29" containerID="2b728a020a24216d1dfca023a0f4aa95462dc5732ddf4a78adb1945907dd15a1" exitCode=143 Jan 27 09:13:44 crc kubenswrapper[4985]: I0127 09:13:44.280155 4985 generic.go:334] "Generic (PLEG): container finished" podID="c3262de3-5394-485b-a572-14d824be6c29" containerID="721c9119f0e7b20c506f0a5f43bc84367fc544cdebe32b8326df3e452dedb9f5" exitCode=143 Jan 27 09:13:44 crc kubenswrapper[4985]: I0127 09:13:44.280208 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c3262de3-5394-485b-a572-14d824be6c29","Type":"ContainerDied","Data":"2b728a020a24216d1dfca023a0f4aa95462dc5732ddf4a78adb1945907dd15a1"} Jan 27 09:13:44 crc kubenswrapper[4985]: I0127 09:13:44.280241 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c3262de3-5394-485b-a572-14d824be6c29","Type":"ContainerDied","Data":"721c9119f0e7b20c506f0a5f43bc84367fc544cdebe32b8326df3e452dedb9f5"} Jan 27 09:13:44 crc kubenswrapper[4985]: I0127 09:13:44.280254 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-external-api-0" event={"ID":"c3262de3-5394-485b-a572-14d824be6c29","Type":"ContainerDied","Data":"ff6cad7faa6f7844cf904ca31eca600d3390b4d56c260e9b6e35b985c7d10868"} Jan 27 09:13:44 crc kubenswrapper[4985]: I0127 09:13:44.280271 4985 scope.go:117] "RemoveContainer" containerID="2b728a020a24216d1dfca023a0f4aa95462dc5732ddf4a78adb1945907dd15a1" Jan 27 09:13:44 crc kubenswrapper[4985]: I0127 09:13:44.280389 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 09:13:44 crc kubenswrapper[4985]: I0127 09:13:44.284905 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"c3262de3-5394-485b-a572-14d824be6c29\" (UID: \"c3262de3-5394-485b-a572-14d824be6c29\") " Jan 27 09:13:44 crc kubenswrapper[4985]: I0127 09:13:44.284949 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3262de3-5394-485b-a572-14d824be6c29-scripts\") pod \"c3262de3-5394-485b-a572-14d824be6c29\" (UID: \"c3262de3-5394-485b-a572-14d824be6c29\") " Jan 27 09:13:44 crc kubenswrapper[4985]: I0127 09:13:44.285061 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3262de3-5394-485b-a572-14d824be6c29-config-data\") pod \"c3262de3-5394-485b-a572-14d824be6c29\" (UID: \"c3262de3-5394-485b-a572-14d824be6c29\") " Jan 27 09:13:44 crc kubenswrapper[4985]: I0127 09:13:44.285082 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3262de3-5394-485b-a572-14d824be6c29-logs\") pod \"c3262de3-5394-485b-a572-14d824be6c29\" (UID: \"c3262de3-5394-485b-a572-14d824be6c29\") " Jan 27 09:13:44 crc kubenswrapper[4985]: I0127 09:13:44.285160 4985 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-z5fz2\" (UniqueName: \"kubernetes.io/projected/c3262de3-5394-485b-a572-14d824be6c29-kube-api-access-z5fz2\") pod \"c3262de3-5394-485b-a572-14d824be6c29\" (UID: \"c3262de3-5394-485b-a572-14d824be6c29\") " Jan 27 09:13:44 crc kubenswrapper[4985]: I0127 09:13:44.285194 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3262de3-5394-485b-a572-14d824be6c29-public-tls-certs\") pod \"c3262de3-5394-485b-a572-14d824be6c29\" (UID: \"c3262de3-5394-485b-a572-14d824be6c29\") " Jan 27 09:13:44 crc kubenswrapper[4985]: I0127 09:13:44.285232 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c3262de3-5394-485b-a572-14d824be6c29-httpd-run\") pod \"c3262de3-5394-485b-a572-14d824be6c29\" (UID: \"c3262de3-5394-485b-a572-14d824be6c29\") " Jan 27 09:13:44 crc kubenswrapper[4985]: I0127 09:13:44.285300 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3262de3-5394-485b-a572-14d824be6c29-combined-ca-bundle\") pod \"c3262de3-5394-485b-a572-14d824be6c29\" (UID: \"c3262de3-5394-485b-a572-14d824be6c29\") " Jan 27 09:13:44 crc kubenswrapper[4985]: I0127 09:13:44.287042 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3262de3-5394-485b-a572-14d824be6c29-logs" (OuterVolumeSpecName: "logs") pod "c3262de3-5394-485b-a572-14d824be6c29" (UID: "c3262de3-5394-485b-a572-14d824be6c29"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:13:44 crc kubenswrapper[4985]: I0127 09:13:44.288128 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3262de3-5394-485b-a572-14d824be6c29-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "c3262de3-5394-485b-a572-14d824be6c29" (UID: "c3262de3-5394-485b-a572-14d824be6c29"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:13:44 crc kubenswrapper[4985]: I0127 09:13:44.292359 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3262de3-5394-485b-a572-14d824be6c29-scripts" (OuterVolumeSpecName: "scripts") pod "c3262de3-5394-485b-a572-14d824be6c29" (UID: "c3262de3-5394-485b-a572-14d824be6c29"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:13:44 crc kubenswrapper[4985]: I0127 09:13:44.292871 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "c3262de3-5394-485b-a572-14d824be6c29" (UID: "c3262de3-5394-485b-a572-14d824be6c29"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 27 09:13:44 crc kubenswrapper[4985]: I0127 09:13:44.301011 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3262de3-5394-485b-a572-14d824be6c29-kube-api-access-z5fz2" (OuterVolumeSpecName: "kube-api-access-z5fz2") pod "c3262de3-5394-485b-a572-14d824be6c29" (UID: "c3262de3-5394-485b-a572-14d824be6c29"). InnerVolumeSpecName "kube-api-access-z5fz2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:13:44 crc kubenswrapper[4985]: I0127 09:13:44.351426 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3262de3-5394-485b-a572-14d824be6c29-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c3262de3-5394-485b-a572-14d824be6c29" (UID: "c3262de3-5394-485b-a572-14d824be6c29"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:13:44 crc kubenswrapper[4985]: I0127 09:13:44.358231 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3262de3-5394-485b-a572-14d824be6c29-config-data" (OuterVolumeSpecName: "config-data") pod "c3262de3-5394-485b-a572-14d824be6c29" (UID: "c3262de3-5394-485b-a572-14d824be6c29"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:13:44 crc kubenswrapper[4985]: I0127 09:13:44.371141 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3262de3-5394-485b-a572-14d824be6c29-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c3262de3-5394-485b-a572-14d824be6c29" (UID: "c3262de3-5394-485b-a572-14d824be6c29"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:13:44 crc kubenswrapper[4985]: I0127 09:13:44.383162 4985 scope.go:117] "RemoveContainer" containerID="721c9119f0e7b20c506f0a5f43bc84367fc544cdebe32b8326df3e452dedb9f5" Jan 27 09:13:44 crc kubenswrapper[4985]: I0127 09:13:44.387387 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5fz2\" (UniqueName: \"kubernetes.io/projected/c3262de3-5394-485b-a572-14d824be6c29-kube-api-access-z5fz2\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:44 crc kubenswrapper[4985]: I0127 09:13:44.387426 4985 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3262de3-5394-485b-a572-14d824be6c29-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:44 crc kubenswrapper[4985]: I0127 09:13:44.387441 4985 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c3262de3-5394-485b-a572-14d824be6c29-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:44 crc kubenswrapper[4985]: I0127 09:13:44.387453 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3262de3-5394-485b-a572-14d824be6c29-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:44 crc kubenswrapper[4985]: I0127 09:13:44.387490 4985 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Jan 27 09:13:44 crc kubenswrapper[4985]: I0127 09:13:44.387503 4985 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3262de3-5394-485b-a572-14d824be6c29-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:44 crc kubenswrapper[4985]: I0127 09:13:44.387541 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/c3262de3-5394-485b-a572-14d824be6c29-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:44 crc kubenswrapper[4985]: I0127 09:13:44.387553 4985 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3262de3-5394-485b-a572-14d824be6c29-logs\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:44 crc kubenswrapper[4985]: I0127 09:13:44.418089 4985 scope.go:117] "RemoveContainer" containerID="2b728a020a24216d1dfca023a0f4aa95462dc5732ddf4a78adb1945907dd15a1" Jan 27 09:13:44 crc kubenswrapper[4985]: E0127 09:13:44.418803 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b728a020a24216d1dfca023a0f4aa95462dc5732ddf4a78adb1945907dd15a1\": container with ID starting with 2b728a020a24216d1dfca023a0f4aa95462dc5732ddf4a78adb1945907dd15a1 not found: ID does not exist" containerID="2b728a020a24216d1dfca023a0f4aa95462dc5732ddf4a78adb1945907dd15a1" Jan 27 09:13:44 crc kubenswrapper[4985]: I0127 09:13:44.419023 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b728a020a24216d1dfca023a0f4aa95462dc5732ddf4a78adb1945907dd15a1"} err="failed to get container status \"2b728a020a24216d1dfca023a0f4aa95462dc5732ddf4a78adb1945907dd15a1\": rpc error: code = NotFound desc = could not find container \"2b728a020a24216d1dfca023a0f4aa95462dc5732ddf4a78adb1945907dd15a1\": container with ID starting with 2b728a020a24216d1dfca023a0f4aa95462dc5732ddf4a78adb1945907dd15a1 not found: ID does not exist" Jan 27 09:13:44 crc kubenswrapper[4985]: I0127 09:13:44.419086 4985 scope.go:117] "RemoveContainer" containerID="721c9119f0e7b20c506f0a5f43bc84367fc544cdebe32b8326df3e452dedb9f5" Jan 27 09:13:44 crc kubenswrapper[4985]: E0127 09:13:44.419457 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"721c9119f0e7b20c506f0a5f43bc84367fc544cdebe32b8326df3e452dedb9f5\": container with ID starting with 721c9119f0e7b20c506f0a5f43bc84367fc544cdebe32b8326df3e452dedb9f5 not found: ID does not exist" containerID="721c9119f0e7b20c506f0a5f43bc84367fc544cdebe32b8326df3e452dedb9f5" Jan 27 09:13:44 crc kubenswrapper[4985]: I0127 09:13:44.419494 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"721c9119f0e7b20c506f0a5f43bc84367fc544cdebe32b8326df3e452dedb9f5"} err="failed to get container status \"721c9119f0e7b20c506f0a5f43bc84367fc544cdebe32b8326df3e452dedb9f5\": rpc error: code = NotFound desc = could not find container \"721c9119f0e7b20c506f0a5f43bc84367fc544cdebe32b8326df3e452dedb9f5\": container with ID starting with 721c9119f0e7b20c506f0a5f43bc84367fc544cdebe32b8326df3e452dedb9f5 not found: ID does not exist" Jan 27 09:13:44 crc kubenswrapper[4985]: I0127 09:13:44.419526 4985 scope.go:117] "RemoveContainer" containerID="2b728a020a24216d1dfca023a0f4aa95462dc5732ddf4a78adb1945907dd15a1" Jan 27 09:13:44 crc kubenswrapper[4985]: I0127 09:13:44.419889 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b728a020a24216d1dfca023a0f4aa95462dc5732ddf4a78adb1945907dd15a1"} err="failed to get container status \"2b728a020a24216d1dfca023a0f4aa95462dc5732ddf4a78adb1945907dd15a1\": rpc error: code = NotFound desc = could not find container \"2b728a020a24216d1dfca023a0f4aa95462dc5732ddf4a78adb1945907dd15a1\": container with ID starting with 2b728a020a24216d1dfca023a0f4aa95462dc5732ddf4a78adb1945907dd15a1 not found: ID does not exist" Jan 27 09:13:44 crc kubenswrapper[4985]: I0127 09:13:44.419929 4985 scope.go:117] "RemoveContainer" containerID="721c9119f0e7b20c506f0a5f43bc84367fc544cdebe32b8326df3e452dedb9f5" Jan 27 09:13:44 crc kubenswrapper[4985]: I0127 09:13:44.420172 4985 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"721c9119f0e7b20c506f0a5f43bc84367fc544cdebe32b8326df3e452dedb9f5"} err="failed to get container status \"721c9119f0e7b20c506f0a5f43bc84367fc544cdebe32b8326df3e452dedb9f5\": rpc error: code = NotFound desc = could not find container \"721c9119f0e7b20c506f0a5f43bc84367fc544cdebe32b8326df3e452dedb9f5\": container with ID starting with 721c9119f0e7b20c506f0a5f43bc84367fc544cdebe32b8326df3e452dedb9f5 not found: ID does not exist" Jan 27 09:13:44 crc kubenswrapper[4985]: I0127 09:13:44.420497 4985 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Jan 27 09:13:44 crc kubenswrapper[4985]: I0127 09:13:44.462106 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf8a06c-1e18-40f8-bcde-5996d4f80767" path="/var/lib/kubelet/pods/1bf8a06c-1e18-40f8-bcde-5996d4f80767/volumes" Jan 27 09:13:44 crc kubenswrapper[4985]: I0127 09:13:44.462796 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="564992d4-5b88-4124-9cfa-8ee67386599d" path="/var/lib/kubelet/pods/564992d4-5b88-4124-9cfa-8ee67386599d/volumes" Jan 27 09:13:44 crc kubenswrapper[4985]: I0127 09:13:44.489366 4985 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:44 crc kubenswrapper[4985]: I0127 09:13:44.577589 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 09:13:44 crc kubenswrapper[4985]: I0127 09:13:44.652703 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 09:13:44 crc kubenswrapper[4985]: I0127 09:13:44.667171 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 09:13:44 crc kubenswrapper[4985]: I0127 09:13:44.678288 4985 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 09:13:44 crc kubenswrapper[4985]: E0127 09:13:44.678712 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3262de3-5394-485b-a572-14d824be6c29" containerName="glance-httpd" Jan 27 09:13:44 crc kubenswrapper[4985]: I0127 09:13:44.678729 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3262de3-5394-485b-a572-14d824be6c29" containerName="glance-httpd" Jan 27 09:13:44 crc kubenswrapper[4985]: E0127 09:13:44.678746 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3262de3-5394-485b-a572-14d824be6c29" containerName="glance-log" Jan 27 09:13:44 crc kubenswrapper[4985]: I0127 09:13:44.678753 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3262de3-5394-485b-a572-14d824be6c29" containerName="glance-log" Jan 27 09:13:44 crc kubenswrapper[4985]: I0127 09:13:44.678949 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3262de3-5394-485b-a572-14d824be6c29" containerName="glance-httpd" Jan 27 09:13:44 crc kubenswrapper[4985]: I0127 09:13:44.678963 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3262de3-5394-485b-a572-14d824be6c29" containerName="glance-log" Jan 27 09:13:44 crc kubenswrapper[4985]: I0127 09:13:44.679924 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 27 09:13:44 crc kubenswrapper[4985]: I0127 09:13:44.683975 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Jan 27 09:13:44 crc kubenswrapper[4985]: I0127 09:13:44.684516 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Jan 27 09:13:44 crc kubenswrapper[4985]: I0127 09:13:44.720879 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 27 09:13:44 crc kubenswrapper[4985]: I0127 09:13:44.752447 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 27 09:13:44 crc kubenswrapper[4985]: I0127 09:13:44.799837 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a0474b6-bb48-4c95-8735-07917545a256-scripts\") pod \"glance-default-external-api-0\" (UID: \"8a0474b6-bb48-4c95-8735-07917545a256\") " pod="openstack/glance-default-external-api-0"
Jan 27 09:13:44 crc kubenswrapper[4985]: I0127 09:13:44.799894 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9rdc\" (UniqueName: \"kubernetes.io/projected/8a0474b6-bb48-4c95-8735-07917545a256-kube-api-access-r9rdc\") pod \"glance-default-external-api-0\" (UID: \"8a0474b6-bb48-4c95-8735-07917545a256\") " pod="openstack/glance-default-external-api-0"
Jan 27 09:13:44 crc kubenswrapper[4985]: I0127 09:13:44.799981 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"8a0474b6-bb48-4c95-8735-07917545a256\") " pod="openstack/glance-default-external-api-0"
Jan 27 09:13:44 crc kubenswrapper[4985]: I0127 09:13:44.800043 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a0474b6-bb48-4c95-8735-07917545a256-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"8a0474b6-bb48-4c95-8735-07917545a256\") " pod="openstack/glance-default-external-api-0"
Jan 27 09:13:44 crc kubenswrapper[4985]: I0127 09:13:44.800086 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a0474b6-bb48-4c95-8735-07917545a256-config-data\") pod \"glance-default-external-api-0\" (UID: \"8a0474b6-bb48-4c95-8735-07917545a256\") " pod="openstack/glance-default-external-api-0"
Jan 27 09:13:44 crc kubenswrapper[4985]: I0127 09:13:44.800108 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8a0474b6-bb48-4c95-8735-07917545a256-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8a0474b6-bb48-4c95-8735-07917545a256\") " pod="openstack/glance-default-external-api-0"
Jan 27 09:13:44 crc kubenswrapper[4985]: I0127 09:13:44.800144 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a0474b6-bb48-4c95-8735-07917545a256-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8a0474b6-bb48-4c95-8735-07917545a256\") " pod="openstack/glance-default-external-api-0"
Jan 27 09:13:44 crc kubenswrapper[4985]: I0127 09:13:44.800310 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a0474b6-bb48-4c95-8735-07917545a256-logs\") pod \"glance-default-external-api-0\" (UID: \"8a0474b6-bb48-4c95-8735-07917545a256\") " pod="openstack/glance-default-external-api-0"
Jan 27 09:13:44 crc kubenswrapper[4985]: I0127 09:13:44.901729 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"8a0474b6-bb48-4c95-8735-07917545a256\") " pod="openstack/glance-default-external-api-0"
Jan 27 09:13:44 crc kubenswrapper[4985]: I0127 09:13:44.901808 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a0474b6-bb48-4c95-8735-07917545a256-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"8a0474b6-bb48-4c95-8735-07917545a256\") " pod="openstack/glance-default-external-api-0"
Jan 27 09:13:44 crc kubenswrapper[4985]: I0127 09:13:44.901842 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a0474b6-bb48-4c95-8735-07917545a256-config-data\") pod \"glance-default-external-api-0\" (UID: \"8a0474b6-bb48-4c95-8735-07917545a256\") " pod="openstack/glance-default-external-api-0"
Jan 27 09:13:44 crc kubenswrapper[4985]: I0127 09:13:44.901863 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8a0474b6-bb48-4c95-8735-07917545a256-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8a0474b6-bb48-4c95-8735-07917545a256\") " pod="openstack/glance-default-external-api-0"
Jan 27 09:13:44 crc kubenswrapper[4985]: I0127 09:13:44.901888 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a0474b6-bb48-4c95-8735-07917545a256-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8a0474b6-bb48-4c95-8735-07917545a256\") " pod="openstack/glance-default-external-api-0"
Jan 27 09:13:44 crc kubenswrapper[4985]: I0127 09:13:44.901952 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a0474b6-bb48-4c95-8735-07917545a256-logs\") pod \"glance-default-external-api-0\" (UID: \"8a0474b6-bb48-4c95-8735-07917545a256\") " pod="openstack/glance-default-external-api-0"
Jan 27 09:13:44 crc kubenswrapper[4985]: I0127 09:13:44.902003 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a0474b6-bb48-4c95-8735-07917545a256-scripts\") pod \"glance-default-external-api-0\" (UID: \"8a0474b6-bb48-4c95-8735-07917545a256\") " pod="openstack/glance-default-external-api-0"
Jan 27 09:13:44 crc kubenswrapper[4985]: I0127 09:13:44.902024 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9rdc\" (UniqueName: \"kubernetes.io/projected/8a0474b6-bb48-4c95-8735-07917545a256-kube-api-access-r9rdc\") pod \"glance-default-external-api-0\" (UID: \"8a0474b6-bb48-4c95-8735-07917545a256\") " pod="openstack/glance-default-external-api-0"
Jan 27 09:13:44 crc kubenswrapper[4985]: I0127 09:13:44.902177 4985 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"8a0474b6-bb48-4c95-8735-07917545a256\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-external-api-0"
Jan 27 09:13:44 crc kubenswrapper[4985]: I0127 09:13:44.907151 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a0474b6-bb48-4c95-8735-07917545a256-logs\") pod \"glance-default-external-api-0\" (UID: \"8a0474b6-bb48-4c95-8735-07917545a256\") " pod="openstack/glance-default-external-api-0"
Jan 27 09:13:44 crc kubenswrapper[4985]: I0127 09:13:44.907915 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8a0474b6-bb48-4c95-8735-07917545a256-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8a0474b6-bb48-4c95-8735-07917545a256\") " pod="openstack/glance-default-external-api-0"
Jan 27 09:13:44 crc kubenswrapper[4985]: I0127 09:13:44.908301 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a0474b6-bb48-4c95-8735-07917545a256-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"8a0474b6-bb48-4c95-8735-07917545a256\") " pod="openstack/glance-default-external-api-0"
Jan 27 09:13:44 crc kubenswrapper[4985]: I0127 09:13:44.909135 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a0474b6-bb48-4c95-8735-07917545a256-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8a0474b6-bb48-4c95-8735-07917545a256\") " pod="openstack/glance-default-external-api-0"
Jan 27 09:13:44 crc kubenswrapper[4985]: I0127 09:13:44.911638 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a0474b6-bb48-4c95-8735-07917545a256-scripts\") pod \"glance-default-external-api-0\" (UID: \"8a0474b6-bb48-4c95-8735-07917545a256\") " pod="openstack/glance-default-external-api-0"
Jan 27 09:13:44 crc kubenswrapper[4985]: I0127 09:13:44.924995 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9rdc\" (UniqueName: \"kubernetes.io/projected/8a0474b6-bb48-4c95-8735-07917545a256-kube-api-access-r9rdc\") pod \"glance-default-external-api-0\" (UID: \"8a0474b6-bb48-4c95-8735-07917545a256\") " pod="openstack/glance-default-external-api-0"
Jan 27 09:13:44 crc kubenswrapper[4985]: I0127 09:13:44.935915 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a0474b6-bb48-4c95-8735-07917545a256-config-data\") pod \"glance-default-external-api-0\" (UID: \"8a0474b6-bb48-4c95-8735-07917545a256\") " pod="openstack/glance-default-external-api-0"
Jan 27 09:13:44 crc kubenswrapper[4985]: I0127 09:13:44.946908 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"8a0474b6-bb48-4c95-8735-07917545a256\") " pod="openstack/glance-default-external-api-0"
Jan 27 09:13:45 crc kubenswrapper[4985]: I0127 09:13:45.010712 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 27 09:13:45 crc kubenswrapper[4985]: I0127 09:13:45.330423 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"48bcb21b-2fe6-4822-8536-e3575d036d90","Type":"ContainerStarted","Data":"3970c8168ad6bf47a36812e72d1405175472ede5366bd092fe39b596c588fee6"}
Jan 27 09:13:45 crc kubenswrapper[4985]: I0127 09:13:45.331691 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6682197b-86c6-4c58-9fa3-4b08340e9464","Type":"ContainerStarted","Data":"e4e731ff1561612af8cff102659b24474ad6d72665f8b38e5e5c917a8338aea2"}
Jan 27 09:13:45 crc kubenswrapper[4985]: I0127 09:13:45.746224 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 27 09:13:46 crc kubenswrapper[4985]: I0127 09:13:46.348980 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6682197b-86c6-4c58-9fa3-4b08340e9464","Type":"ContainerStarted","Data":"392ceebe086d3ee53f9cf89d2b9274982d7f66c04a67a1e208dac86354db3a90"}
Jan 27 09:13:46 crc kubenswrapper[4985]: I0127 09:13:46.349621 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6682197b-86c6-4c58-9fa3-4b08340e9464","Type":"ContainerStarted","Data":"51c85b29413093eb89f81bd36ac751eeb1bd843c99c7b7b8128259374e590da9"}
Jan 27 09:13:46 crc kubenswrapper[4985]: I0127 09:13:46.353396 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8a0474b6-bb48-4c95-8735-07917545a256","Type":"ContainerStarted","Data":"d64b4c53dff24979da1f63c20be76a0fa8fa7859cbd16cc7a37fd8879a4e6a83"}
Jan 27 09:13:46 crc kubenswrapper[4985]: I0127 09:13:46.362727 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"48bcb21b-2fe6-4822-8536-e3575d036d90","Type":"ContainerStarted","Data":"5262b7532f7f7248f3cc3dc4717dbf129aecfe6fd76b06248677606be96fc886"}
Jan 27 09:13:46 crc kubenswrapper[4985]: I0127 09:13:46.362787 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"48bcb21b-2fe6-4822-8536-e3575d036d90","Type":"ContainerStarted","Data":"4001b467272b0e0f52e227d4417d32dc5158f3d6f5792dd3c43e6099b5f5ae21"}
Jan 27 09:13:46 crc kubenswrapper[4985]: I0127 09:13:46.393266 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.393243407 podStartE2EDuration="3.393243407s" podCreationTimestamp="2026-01-27 09:13:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:13:46.372244592 +0000 UTC m=+1210.663339433" watchObservedRunningTime="2026-01-27 09:13:46.393243407 +0000 UTC m=+1210.684338248"
Jan 27 09:13:46 crc kubenswrapper[4985]: I0127 09:13:46.476517 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3262de3-5394-485b-a572-14d824be6c29" path="/var/lib/kubelet/pods/c3262de3-5394-485b-a572-14d824be6c29/volumes"
Jan 27 09:13:46 crc kubenswrapper[4985]: I0127 09:13:46.667104 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 27 09:13:47 crc kubenswrapper[4985]: I0127 09:13:47.268456 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-7957b79d47-2xvr4"]
Jan 27 09:13:47 crc kubenswrapper[4985]: I0127 09:13:47.270941 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-7957b79d47-2xvr4"
Jan 27 09:13:47 crc kubenswrapper[4985]: I0127 09:13:47.306169 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7957b79d47-2xvr4"]
Jan 27 09:13:47 crc kubenswrapper[4985]: I0127 09:13:47.313579 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-7bbcb69c84-cb6bz"]
Jan 27 09:13:47 crc kubenswrapper[4985]: I0127 09:13:47.320181 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-7bbcb69c84-cb6bz"
Jan 27 09:13:47 crc kubenswrapper[4985]: I0127 09:13:47.340629 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7bbcb69c84-cb6bz"]
Jan 27 09:13:47 crc kubenswrapper[4985]: I0127 09:13:47.392816 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/73388136-0e40-4439-95bb-52ef16391821-logs\") pod \"barbican-worker-7957b79d47-2xvr4\" (UID: \"73388136-0e40-4439-95bb-52ef16391821\") " pod="openstack/barbican-worker-7957b79d47-2xvr4"
Jan 27 09:13:47 crc kubenswrapper[4985]: I0127 09:13:47.392873 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73388136-0e40-4439-95bb-52ef16391821-combined-ca-bundle\") pod \"barbican-worker-7957b79d47-2xvr4\" (UID: \"73388136-0e40-4439-95bb-52ef16391821\") " pod="openstack/barbican-worker-7957b79d47-2xvr4"
Jan 27 09:13:47 crc kubenswrapper[4985]: I0127 09:13:47.392897 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/73388136-0e40-4439-95bb-52ef16391821-config-data-custom\") pod \"barbican-worker-7957b79d47-2xvr4\" (UID: \"73388136-0e40-4439-95bb-52ef16391821\") " pod="openstack/barbican-worker-7957b79d47-2xvr4"
Jan 27 09:13:47 crc kubenswrapper[4985]: I0127 09:13:47.392935 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75790805-0e26-4dc9-9970-dd8b6a332ce7-config-data\") pod \"barbican-keystone-listener-7bbcb69c84-cb6bz\" (UID: \"75790805-0e26-4dc9-9970-dd8b6a332ce7\") " pod="openstack/barbican-keystone-listener-7bbcb69c84-cb6bz"
Jan 27 09:13:47 crc kubenswrapper[4985]: I0127 09:13:47.392969 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fn7j\" (UniqueName: \"kubernetes.io/projected/73388136-0e40-4439-95bb-52ef16391821-kube-api-access-4fn7j\") pod \"barbican-worker-7957b79d47-2xvr4\" (UID: \"73388136-0e40-4439-95bb-52ef16391821\") " pod="openstack/barbican-worker-7957b79d47-2xvr4"
Jan 27 09:13:47 crc kubenswrapper[4985]: I0127 09:13:47.393006 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73388136-0e40-4439-95bb-52ef16391821-config-data\") pod \"barbican-worker-7957b79d47-2xvr4\" (UID: \"73388136-0e40-4439-95bb-52ef16391821\") " pod="openstack/barbican-worker-7957b79d47-2xvr4"
Jan 27 09:13:47 crc kubenswrapper[4985]: I0127 09:13:47.393024 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwjrb\" (UniqueName: \"kubernetes.io/projected/75790805-0e26-4dc9-9970-dd8b6a332ce7-kube-api-access-fwjrb\") pod \"barbican-keystone-listener-7bbcb69c84-cb6bz\" (UID: \"75790805-0e26-4dc9-9970-dd8b6a332ce7\") " pod="openstack/barbican-keystone-listener-7bbcb69c84-cb6bz"
Jan 27 09:13:47 crc kubenswrapper[4985]: I0127 09:13:47.393046 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75790805-0e26-4dc9-9970-dd8b6a332ce7-logs\") pod \"barbican-keystone-listener-7bbcb69c84-cb6bz\" (UID: \"75790805-0e26-4dc9-9970-dd8b6a332ce7\") " pod="openstack/barbican-keystone-listener-7bbcb69c84-cb6bz"
Jan 27 09:13:47 crc kubenswrapper[4985]: I0127 09:13:47.393070 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75790805-0e26-4dc9-9970-dd8b6a332ce7-combined-ca-bundle\") pod \"barbican-keystone-listener-7bbcb69c84-cb6bz\" (UID: \"75790805-0e26-4dc9-9970-dd8b6a332ce7\") " pod="openstack/barbican-keystone-listener-7bbcb69c84-cb6bz"
Jan 27 09:13:47 crc kubenswrapper[4985]: I0127 09:13:47.393128 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/75790805-0e26-4dc9-9970-dd8b6a332ce7-config-data-custom\") pod \"barbican-keystone-listener-7bbcb69c84-cb6bz\" (UID: \"75790805-0e26-4dc9-9970-dd8b6a332ce7\") " pod="openstack/barbican-keystone-listener-7bbcb69c84-cb6bz"
Jan 27 09:13:47 crc kubenswrapper[4985]: I0127 09:13:47.415898 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-f4789ffd8-wmhds"]
Jan 27 09:13:47 crc kubenswrapper[4985]: I0127 09:13:47.417760 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8a0474b6-bb48-4c95-8735-07917545a256","Type":"ContainerStarted","Data":"e538d3a6bf5efa7b34dec0533888155eb0b3dd71703fc57a905ec1f83e0d425e"}
Jan 27 09:13:47 crc kubenswrapper[4985]: I0127 09:13:47.417974 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-f4789ffd8-wmhds"
Jan 27 09:13:47 crc kubenswrapper[4985]: I0127 09:13:47.424315 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-f4789ffd8-wmhds"]
Jan 27 09:13:47 crc kubenswrapper[4985]: I0127 09:13:47.432572 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"48bcb21b-2fe6-4822-8536-e3575d036d90","Type":"ContainerStarted","Data":"f4c019be5d46425e7bb45e0c7881b328d7e46b0d6b7121cff384eb85d1b59070"}
Jan 27 09:13:47 crc kubenswrapper[4985]: I0127 09:13:47.498539 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73388136-0e40-4439-95bb-52ef16391821-config-data\") pod \"barbican-worker-7957b79d47-2xvr4\" (UID: \"73388136-0e40-4439-95bb-52ef16391821\") " pod="openstack/barbican-worker-7957b79d47-2xvr4"
Jan 27 09:13:47 crc kubenswrapper[4985]: I0127 09:13:47.498637 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwjrb\" (UniqueName: \"kubernetes.io/projected/75790805-0e26-4dc9-9970-dd8b6a332ce7-kube-api-access-fwjrb\") pod \"barbican-keystone-listener-7bbcb69c84-cb6bz\" (UID: \"75790805-0e26-4dc9-9970-dd8b6a332ce7\") " pod="openstack/barbican-keystone-listener-7bbcb69c84-cb6bz"
Jan 27 09:13:47 crc kubenswrapper[4985]: I0127 09:13:47.498687 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75790805-0e26-4dc9-9970-dd8b6a332ce7-logs\") pod \"barbican-keystone-listener-7bbcb69c84-cb6bz\" (UID: \"75790805-0e26-4dc9-9970-dd8b6a332ce7\") " pod="openstack/barbican-keystone-listener-7bbcb69c84-cb6bz"
Jan 27 09:13:47 crc kubenswrapper[4985]: I0127 09:13:47.498743 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75790805-0e26-4dc9-9970-dd8b6a332ce7-combined-ca-bundle\") pod \"barbican-keystone-listener-7bbcb69c84-cb6bz\" (UID: \"75790805-0e26-4dc9-9970-dd8b6a332ce7\") " pod="openstack/barbican-keystone-listener-7bbcb69c84-cb6bz"
Jan 27 09:13:47 crc kubenswrapper[4985]: I0127 09:13:47.498871 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/75790805-0e26-4dc9-9970-dd8b6a332ce7-config-data-custom\") pod \"barbican-keystone-listener-7bbcb69c84-cb6bz\" (UID: \"75790805-0e26-4dc9-9970-dd8b6a332ce7\") " pod="openstack/barbican-keystone-listener-7bbcb69c84-cb6bz"
Jan 27 09:13:47 crc kubenswrapper[4985]: I0127 09:13:47.498919 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/73388136-0e40-4439-95bb-52ef16391821-logs\") pod \"barbican-worker-7957b79d47-2xvr4\" (UID: \"73388136-0e40-4439-95bb-52ef16391821\") " pod="openstack/barbican-worker-7957b79d47-2xvr4"
Jan 27 09:13:47 crc kubenswrapper[4985]: I0127 09:13:47.498971 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73388136-0e40-4439-95bb-52ef16391821-combined-ca-bundle\") pod \"barbican-worker-7957b79d47-2xvr4\" (UID: \"73388136-0e40-4439-95bb-52ef16391821\") " pod="openstack/barbican-worker-7957b79d47-2xvr4"
Jan 27 09:13:47 crc kubenswrapper[4985]: I0127 09:13:47.498997 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/73388136-0e40-4439-95bb-52ef16391821-config-data-custom\") pod \"barbican-worker-7957b79d47-2xvr4\" (UID: \"73388136-0e40-4439-95bb-52ef16391821\") " pod="openstack/barbican-worker-7957b79d47-2xvr4"
Jan 27 09:13:47 crc kubenswrapper[4985]: I0127 09:13:47.499055 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75790805-0e26-4dc9-9970-dd8b6a332ce7-config-data\") pod \"barbican-keystone-listener-7bbcb69c84-cb6bz\" (UID: \"75790805-0e26-4dc9-9970-dd8b6a332ce7\") " pod="openstack/barbican-keystone-listener-7bbcb69c84-cb6bz"
Jan 27 09:13:47 crc kubenswrapper[4985]: I0127 09:13:47.499140 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fn7j\" (UniqueName: \"kubernetes.io/projected/73388136-0e40-4439-95bb-52ef16391821-kube-api-access-4fn7j\") pod \"barbican-worker-7957b79d47-2xvr4\" (UID: \"73388136-0e40-4439-95bb-52ef16391821\") " pod="openstack/barbican-worker-7957b79d47-2xvr4"
Jan 27 09:13:47 crc kubenswrapper[4985]: I0127 09:13:47.507254 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75790805-0e26-4dc9-9970-dd8b6a332ce7-logs\") pod \"barbican-keystone-listener-7bbcb69c84-cb6bz\" (UID: \"75790805-0e26-4dc9-9970-dd8b6a332ce7\") " pod="openstack/barbican-keystone-listener-7bbcb69c84-cb6bz"
Jan 27 09:13:47 crc kubenswrapper[4985]: I0127 09:13:47.509981 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/73388136-0e40-4439-95bb-52ef16391821-logs\") pod \"barbican-worker-7957b79d47-2xvr4\" (UID: \"73388136-0e40-4439-95bb-52ef16391821\") " pod="openstack/barbican-worker-7957b79d47-2xvr4"
Jan 27 09:13:47 crc kubenswrapper[4985]: I0127 09:13:47.532585 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/73388136-0e40-4439-95bb-52ef16391821-config-data-custom\") pod \"barbican-worker-7957b79d47-2xvr4\" (UID: \"73388136-0e40-4439-95bb-52ef16391821\") " pod="openstack/barbican-worker-7957b79d47-2xvr4"
Jan 27 09:13:47 crc kubenswrapper[4985]: I0127 09:13:47.540328 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/75790805-0e26-4dc9-9970-dd8b6a332ce7-config-data-custom\") pod \"barbican-keystone-listener-7bbcb69c84-cb6bz\" (UID: \"75790805-0e26-4dc9-9970-dd8b6a332ce7\") " pod="openstack/barbican-keystone-listener-7bbcb69c84-cb6bz"
Jan 27 09:13:47 crc kubenswrapper[4985]: I0127 09:13:47.541449 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73388136-0e40-4439-95bb-52ef16391821-config-data\") pod \"barbican-worker-7957b79d47-2xvr4\" (UID: \"73388136-0e40-4439-95bb-52ef16391821\") " pod="openstack/barbican-worker-7957b79d47-2xvr4"
Jan 27 09:13:47 crc kubenswrapper[4985]: I0127 09:13:47.542147 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75790805-0e26-4dc9-9970-dd8b6a332ce7-config-data\") pod \"barbican-keystone-listener-7bbcb69c84-cb6bz\" (UID: \"75790805-0e26-4dc9-9970-dd8b6a332ce7\") " pod="openstack/barbican-keystone-listener-7bbcb69c84-cb6bz"
Jan 27 09:13:47 crc kubenswrapper[4985]: I0127 09:13:47.543380 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73388136-0e40-4439-95bb-52ef16391821-combined-ca-bundle\") pod \"barbican-worker-7957b79d47-2xvr4\" (UID: \"73388136-0e40-4439-95bb-52ef16391821\") " pod="openstack/barbican-worker-7957b79d47-2xvr4"
Jan 27 09:13:47 crc kubenswrapper[4985]: I0127 09:13:47.549302 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75790805-0e26-4dc9-9970-dd8b6a332ce7-combined-ca-bundle\") pod \"barbican-keystone-listener-7bbcb69c84-cb6bz\" (UID: \"75790805-0e26-4dc9-9970-dd8b6a332ce7\") " pod="openstack/barbican-keystone-listener-7bbcb69c84-cb6bz"
Jan 27 09:13:47 crc kubenswrapper[4985]: I0127 09:13:47.563266 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fn7j\" (UniqueName: \"kubernetes.io/projected/73388136-0e40-4439-95bb-52ef16391821-kube-api-access-4fn7j\") pod \"barbican-worker-7957b79d47-2xvr4\" (UID: \"73388136-0e40-4439-95bb-52ef16391821\") " pod="openstack/barbican-worker-7957b79d47-2xvr4"
Jan 27 09:13:47 crc kubenswrapper[4985]: I0127 09:13:47.581851 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwjrb\" (UniqueName: \"kubernetes.io/projected/75790805-0e26-4dc9-9970-dd8b6a332ce7-kube-api-access-fwjrb\") pod \"barbican-keystone-listener-7bbcb69c84-cb6bz\" (UID: \"75790805-0e26-4dc9-9970-dd8b6a332ce7\") " pod="openstack/barbican-keystone-listener-7bbcb69c84-cb6bz"
Jan 27 09:13:47 crc kubenswrapper[4985]: I0127 09:13:47.601286 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5892977-6d3e-49f7-91fe-fcdff05221b0-logs\") pod \"barbican-api-f4789ffd8-wmhds\" (UID: \"d5892977-6d3e-49f7-91fe-fcdff05221b0\") " pod="openstack/barbican-api-f4789ffd8-wmhds"
Jan 27 09:13:47 crc kubenswrapper[4985]: I0127 09:13:47.601403 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7t5r\" (UniqueName: \"kubernetes.io/projected/d5892977-6d3e-49f7-91fe-fcdff05221b0-kube-api-access-g7t5r\") pod \"barbican-api-f4789ffd8-wmhds\" (UID: \"d5892977-6d3e-49f7-91fe-fcdff05221b0\") " pod="openstack/barbican-api-f4789ffd8-wmhds"
Jan 27 09:13:47 crc kubenswrapper[4985]: I0127 09:13:47.601490 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5892977-6d3e-49f7-91fe-fcdff05221b0-public-tls-certs\") pod \"barbican-api-f4789ffd8-wmhds\" (UID: \"d5892977-6d3e-49f7-91fe-fcdff05221b0\") " pod="openstack/barbican-api-f4789ffd8-wmhds"
Jan 27 09:13:47 crc kubenswrapper[4985]: I0127 09:13:47.601551 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5892977-6d3e-49f7-91fe-fcdff05221b0-combined-ca-bundle\") pod \"barbican-api-f4789ffd8-wmhds\" (UID: \"d5892977-6d3e-49f7-91fe-fcdff05221b0\") " pod="openstack/barbican-api-f4789ffd8-wmhds"
Jan 27 09:13:47 crc kubenswrapper[4985]: I0127 09:13:47.601695 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d5892977-6d3e-49f7-91fe-fcdff05221b0-config-data-custom\") pod \"barbican-api-f4789ffd8-wmhds\" (UID: \"d5892977-6d3e-49f7-91fe-fcdff05221b0\") " pod="openstack/barbican-api-f4789ffd8-wmhds"
Jan 27 09:13:47 crc kubenswrapper[4985]: I0127 09:13:47.601761 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5892977-6d3e-49f7-91fe-fcdff05221b0-internal-tls-certs\") pod \"barbican-api-f4789ffd8-wmhds\" (UID: \"d5892977-6d3e-49f7-91fe-fcdff05221b0\") " pod="openstack/barbican-api-f4789ffd8-wmhds"
Jan 27 09:13:47 crc kubenswrapper[4985]: I0127 09:13:47.602791 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5892977-6d3e-49f7-91fe-fcdff05221b0-config-data\") pod \"barbican-api-f4789ffd8-wmhds\" (UID: \"d5892977-6d3e-49f7-91fe-fcdff05221b0\") " pod="openstack/barbican-api-f4789ffd8-wmhds"
Jan 27 09:13:47 crc kubenswrapper[4985]: I0127 09:13:47.656660 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-7957b79d47-2xvr4"
Jan 27 09:13:47 crc kubenswrapper[4985]: I0127 09:13:47.680171 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-7bbcb69c84-cb6bz"
Jan 27 09:13:47 crc kubenswrapper[4985]: I0127 09:13:47.706942 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5892977-6d3e-49f7-91fe-fcdff05221b0-public-tls-certs\") pod \"barbican-api-f4789ffd8-wmhds\" (UID: \"d5892977-6d3e-49f7-91fe-fcdff05221b0\") " pod="openstack/barbican-api-f4789ffd8-wmhds"
Jan 27 09:13:47 crc kubenswrapper[4985]: I0127 09:13:47.707751 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5892977-6d3e-49f7-91fe-fcdff05221b0-combined-ca-bundle\") pod \"barbican-api-f4789ffd8-wmhds\" (UID: \"d5892977-6d3e-49f7-91fe-fcdff05221b0\") " pod="openstack/barbican-api-f4789ffd8-wmhds"
Jan 27 09:13:47 crc kubenswrapper[4985]: I0127 09:13:47.707845 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d5892977-6d3e-49f7-91fe-fcdff05221b0-config-data-custom\") pod \"barbican-api-f4789ffd8-wmhds\" (UID: \"d5892977-6d3e-49f7-91fe-fcdff05221b0\") " pod="openstack/barbican-api-f4789ffd8-wmhds"
Jan 27 09:13:47 crc kubenswrapper[4985]: I0127 09:13:47.707922 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5892977-6d3e-49f7-91fe-fcdff05221b0-internal-tls-certs\") pod \"barbican-api-f4789ffd8-wmhds\" (UID: \"d5892977-6d3e-49f7-91fe-fcdff05221b0\") " pod="openstack/barbican-api-f4789ffd8-wmhds"
Jan 27 09:13:47 crc kubenswrapper[4985]: I0127 09:13:47.708051 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5892977-6d3e-49f7-91fe-fcdff05221b0-config-data\") pod \"barbican-api-f4789ffd8-wmhds\" (UID: \"d5892977-6d3e-49f7-91fe-fcdff05221b0\") " pod="openstack/barbican-api-f4789ffd8-wmhds"
Jan 27 09:13:47 crc kubenswrapper[4985]: I0127 09:13:47.708167 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5892977-6d3e-49f7-91fe-fcdff05221b0-logs\") pod \"barbican-api-f4789ffd8-wmhds\" (UID: \"d5892977-6d3e-49f7-91fe-fcdff05221b0\") " pod="openstack/barbican-api-f4789ffd8-wmhds"
Jan 27 09:13:47 crc kubenswrapper[4985]: I0127 09:13:47.708335 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7t5r\" (UniqueName: \"kubernetes.io/projected/d5892977-6d3e-49f7-91fe-fcdff05221b0-kube-api-access-g7t5r\") pod \"barbican-api-f4789ffd8-wmhds\" (UID: \"d5892977-6d3e-49f7-91fe-fcdff05221b0\") " pod="openstack/barbican-api-f4789ffd8-wmhds"
Jan 27 09:13:47 crc kubenswrapper[4985]: I0127 09:13:47.721651 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5892977-6d3e-49f7-91fe-fcdff05221b0-logs\") pod \"barbican-api-f4789ffd8-wmhds\" (UID: \"d5892977-6d3e-49f7-91fe-fcdff05221b0\") " pod="openstack/barbican-api-f4789ffd8-wmhds"
Jan 27 09:13:47 crc kubenswrapper[4985]: I0127 09:13:47.722215 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d5892977-6d3e-49f7-91fe-fcdff05221b0-config-data-custom\") pod \"barbican-api-f4789ffd8-wmhds\" (UID: \"d5892977-6d3e-49f7-91fe-fcdff05221b0\") " pod="openstack/barbican-api-f4789ffd8-wmhds"
Jan 27 09:13:47 crc kubenswrapper[4985]: I0127 09:13:47.722327 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5892977-6d3e-49f7-91fe-fcdff05221b0-combined-ca-bundle\") pod \"barbican-api-f4789ffd8-wmhds\" (UID: \"d5892977-6d3e-49f7-91fe-fcdff05221b0\") " pod="openstack/barbican-api-f4789ffd8-wmhds"
Jan 27 09:13:47 crc kubenswrapper[4985]: I0127 09:13:47.724315 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5892977-6d3e-49f7-91fe-fcdff05221b0-internal-tls-certs\") pod \"barbican-api-f4789ffd8-wmhds\" (UID: \"d5892977-6d3e-49f7-91fe-fcdff05221b0\") " pod="openstack/barbican-api-f4789ffd8-wmhds"
Jan 27 09:13:47 crc kubenswrapper[4985]: I0127 09:13:47.729135 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5892977-6d3e-49f7-91fe-fcdff05221b0-config-data\") pod \"barbican-api-f4789ffd8-wmhds\" (UID: \"d5892977-6d3e-49f7-91fe-fcdff05221b0\") " pod="openstack/barbican-api-f4789ffd8-wmhds"
Jan 27 09:13:47 crc kubenswrapper[4985]: I0127 09:13:47.748009 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5892977-6d3e-49f7-91fe-fcdff05221b0-public-tls-certs\") pod \"barbican-api-f4789ffd8-wmhds\" (UID: \"d5892977-6d3e-49f7-91fe-fcdff05221b0\") " pod="openstack/barbican-api-f4789ffd8-wmhds"
Jan 27 09:13:47 crc kubenswrapper[4985]: I0127 09:13:47.760636 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7t5r\" (UniqueName: \"kubernetes.io/projected/d5892977-6d3e-49f7-91fe-fcdff05221b0-kube-api-access-g7t5r\") pod \"barbican-api-f4789ffd8-wmhds\" (UID: \"d5892977-6d3e-49f7-91fe-fcdff05221b0\") " pod="openstack/barbican-api-f4789ffd8-wmhds"
Jan 27 09:13:47 crc kubenswrapper[4985]: I0127 09:13:47.812297 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-f4789ffd8-wmhds"
Jan 27 09:13:48 crc kubenswrapper[4985]: I0127 09:13:48.447178 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8a0474b6-bb48-4c95-8735-07917545a256","Type":"ContainerStarted","Data":"41daa5706efdb26bef45eb7579fb4af09ba361ffbd6248361ba46e65b45fc1b1"}
Jan 27 09:13:48 crc kubenswrapper[4985]: I0127 09:13:48.484967 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.484938683 podStartE2EDuration="4.484938683s" podCreationTimestamp="2026-01-27 09:13:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:13:48.476380909 +0000 UTC m=+1212.767475760" watchObservedRunningTime="2026-01-27 09:13:48.484938683 +0000 UTC m=+1212.776033524"
Jan 27 09:13:52 crc kubenswrapper[4985]: I0127 09:13:52.179345 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-68b69f7bc7-n2dvz"]
Jan 27 09:13:52 crc kubenswrapper[4985]: I0127 09:13:52.201753 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-68b69f7bc7-n2dvz" Jan 27 09:13:52 crc kubenswrapper[4985]: I0127 09:13:52.204566 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-68b69f7bc7-n2dvz"] Jan 27 09:13:52 crc kubenswrapper[4985]: I0127 09:13:52.327310 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d8c24c8-a66b-4fbd-a9fa-96863c29880e-internal-tls-certs\") pod \"swift-proxy-68b69f7bc7-n2dvz\" (UID: \"5d8c24c8-a66b-4fbd-a9fa-96863c29880e\") " pod="openstack/swift-proxy-68b69f7bc7-n2dvz" Jan 27 09:13:52 crc kubenswrapper[4985]: I0127 09:13:52.327678 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwklk\" (UniqueName: \"kubernetes.io/projected/5d8c24c8-a66b-4fbd-a9fa-96863c29880e-kube-api-access-vwklk\") pod \"swift-proxy-68b69f7bc7-n2dvz\" (UID: \"5d8c24c8-a66b-4fbd-a9fa-96863c29880e\") " pod="openstack/swift-proxy-68b69f7bc7-n2dvz" Jan 27 09:13:52 crc kubenswrapper[4985]: I0127 09:13:52.327852 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d8c24c8-a66b-4fbd-a9fa-96863c29880e-combined-ca-bundle\") pod \"swift-proxy-68b69f7bc7-n2dvz\" (UID: \"5d8c24c8-a66b-4fbd-a9fa-96863c29880e\") " pod="openstack/swift-proxy-68b69f7bc7-n2dvz" Jan 27 09:13:52 crc kubenswrapper[4985]: I0127 09:13:52.327981 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d8c24c8-a66b-4fbd-a9fa-96863c29880e-public-tls-certs\") pod \"swift-proxy-68b69f7bc7-n2dvz\" (UID: \"5d8c24c8-a66b-4fbd-a9fa-96863c29880e\") " pod="openstack/swift-proxy-68b69f7bc7-n2dvz" Jan 27 09:13:52 crc kubenswrapper[4985]: I0127 09:13:52.328151 4985 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d8c24c8-a66b-4fbd-a9fa-96863c29880e-config-data\") pod \"swift-proxy-68b69f7bc7-n2dvz\" (UID: \"5d8c24c8-a66b-4fbd-a9fa-96863c29880e\") " pod="openstack/swift-proxy-68b69f7bc7-n2dvz" Jan 27 09:13:52 crc kubenswrapper[4985]: I0127 09:13:52.328334 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/5d8c24c8-a66b-4fbd-a9fa-96863c29880e-etc-swift\") pod \"swift-proxy-68b69f7bc7-n2dvz\" (UID: \"5d8c24c8-a66b-4fbd-a9fa-96863c29880e\") " pod="openstack/swift-proxy-68b69f7bc7-n2dvz" Jan 27 09:13:52 crc kubenswrapper[4985]: I0127 09:13:52.328480 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d8c24c8-a66b-4fbd-a9fa-96863c29880e-log-httpd\") pod \"swift-proxy-68b69f7bc7-n2dvz\" (UID: \"5d8c24c8-a66b-4fbd-a9fa-96863c29880e\") " pod="openstack/swift-proxy-68b69f7bc7-n2dvz" Jan 27 09:13:52 crc kubenswrapper[4985]: I0127 09:13:52.328525 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d8c24c8-a66b-4fbd-a9fa-96863c29880e-run-httpd\") pod \"swift-proxy-68b69f7bc7-n2dvz\" (UID: \"5d8c24c8-a66b-4fbd-a9fa-96863c29880e\") " pod="openstack/swift-proxy-68b69f7bc7-n2dvz" Jan 27 09:13:52 crc kubenswrapper[4985]: I0127 09:13:52.429723 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d8c24c8-a66b-4fbd-a9fa-96863c29880e-config-data\") pod \"swift-proxy-68b69f7bc7-n2dvz\" (UID: \"5d8c24c8-a66b-4fbd-a9fa-96863c29880e\") " pod="openstack/swift-proxy-68b69f7bc7-n2dvz" Jan 27 09:13:52 crc kubenswrapper[4985]: I0127 09:13:52.429766 4985 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/5d8c24c8-a66b-4fbd-a9fa-96863c29880e-etc-swift\") pod \"swift-proxy-68b69f7bc7-n2dvz\" (UID: \"5d8c24c8-a66b-4fbd-a9fa-96863c29880e\") " pod="openstack/swift-proxy-68b69f7bc7-n2dvz" Jan 27 09:13:52 crc kubenswrapper[4985]: I0127 09:13:52.429913 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d8c24c8-a66b-4fbd-a9fa-96863c29880e-log-httpd\") pod \"swift-proxy-68b69f7bc7-n2dvz\" (UID: \"5d8c24c8-a66b-4fbd-a9fa-96863c29880e\") " pod="openstack/swift-proxy-68b69f7bc7-n2dvz" Jan 27 09:13:52 crc kubenswrapper[4985]: I0127 09:13:52.429933 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d8c24c8-a66b-4fbd-a9fa-96863c29880e-run-httpd\") pod \"swift-proxy-68b69f7bc7-n2dvz\" (UID: \"5d8c24c8-a66b-4fbd-a9fa-96863c29880e\") " pod="openstack/swift-proxy-68b69f7bc7-n2dvz" Jan 27 09:13:52 crc kubenswrapper[4985]: I0127 09:13:52.431570 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d8c24c8-a66b-4fbd-a9fa-96863c29880e-run-httpd\") pod \"swift-proxy-68b69f7bc7-n2dvz\" (UID: \"5d8c24c8-a66b-4fbd-a9fa-96863c29880e\") " pod="openstack/swift-proxy-68b69f7bc7-n2dvz" Jan 27 09:13:52 crc kubenswrapper[4985]: I0127 09:13:52.431612 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d8c24c8-a66b-4fbd-a9fa-96863c29880e-log-httpd\") pod \"swift-proxy-68b69f7bc7-n2dvz\" (UID: \"5d8c24c8-a66b-4fbd-a9fa-96863c29880e\") " pod="openstack/swift-proxy-68b69f7bc7-n2dvz" Jan 27 09:13:52 crc kubenswrapper[4985]: I0127 09:13:52.432395 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/5d8c24c8-a66b-4fbd-a9fa-96863c29880e-internal-tls-certs\") pod \"swift-proxy-68b69f7bc7-n2dvz\" (UID: \"5d8c24c8-a66b-4fbd-a9fa-96863c29880e\") " pod="openstack/swift-proxy-68b69f7bc7-n2dvz" Jan 27 09:13:52 crc kubenswrapper[4985]: I0127 09:13:52.432434 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwklk\" (UniqueName: \"kubernetes.io/projected/5d8c24c8-a66b-4fbd-a9fa-96863c29880e-kube-api-access-vwklk\") pod \"swift-proxy-68b69f7bc7-n2dvz\" (UID: \"5d8c24c8-a66b-4fbd-a9fa-96863c29880e\") " pod="openstack/swift-proxy-68b69f7bc7-n2dvz" Jan 27 09:13:52 crc kubenswrapper[4985]: I0127 09:13:52.432466 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d8c24c8-a66b-4fbd-a9fa-96863c29880e-public-tls-certs\") pod \"swift-proxy-68b69f7bc7-n2dvz\" (UID: \"5d8c24c8-a66b-4fbd-a9fa-96863c29880e\") " pod="openstack/swift-proxy-68b69f7bc7-n2dvz" Jan 27 09:13:52 crc kubenswrapper[4985]: I0127 09:13:52.432487 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d8c24c8-a66b-4fbd-a9fa-96863c29880e-combined-ca-bundle\") pod \"swift-proxy-68b69f7bc7-n2dvz\" (UID: \"5d8c24c8-a66b-4fbd-a9fa-96863c29880e\") " pod="openstack/swift-proxy-68b69f7bc7-n2dvz" Jan 27 09:13:52 crc kubenswrapper[4985]: I0127 09:13:52.442540 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d8c24c8-a66b-4fbd-a9fa-96863c29880e-config-data\") pod \"swift-proxy-68b69f7bc7-n2dvz\" (UID: \"5d8c24c8-a66b-4fbd-a9fa-96863c29880e\") " pod="openstack/swift-proxy-68b69f7bc7-n2dvz" Jan 27 09:13:52 crc kubenswrapper[4985]: I0127 09:13:52.447267 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/5d8c24c8-a66b-4fbd-a9fa-96863c29880e-public-tls-certs\") pod \"swift-proxy-68b69f7bc7-n2dvz\" (UID: \"5d8c24c8-a66b-4fbd-a9fa-96863c29880e\") " pod="openstack/swift-proxy-68b69f7bc7-n2dvz" Jan 27 09:13:52 crc kubenswrapper[4985]: I0127 09:13:52.447981 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d8c24c8-a66b-4fbd-a9fa-96863c29880e-internal-tls-certs\") pod \"swift-proxy-68b69f7bc7-n2dvz\" (UID: \"5d8c24c8-a66b-4fbd-a9fa-96863c29880e\") " pod="openstack/swift-proxy-68b69f7bc7-n2dvz" Jan 27 09:13:52 crc kubenswrapper[4985]: I0127 09:13:52.450382 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/5d8c24c8-a66b-4fbd-a9fa-96863c29880e-etc-swift\") pod \"swift-proxy-68b69f7bc7-n2dvz\" (UID: \"5d8c24c8-a66b-4fbd-a9fa-96863c29880e\") " pod="openstack/swift-proxy-68b69f7bc7-n2dvz" Jan 27 09:13:52 crc kubenswrapper[4985]: I0127 09:13:52.452069 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d8c24c8-a66b-4fbd-a9fa-96863c29880e-combined-ca-bundle\") pod \"swift-proxy-68b69f7bc7-n2dvz\" (UID: \"5d8c24c8-a66b-4fbd-a9fa-96863c29880e\") " pod="openstack/swift-proxy-68b69f7bc7-n2dvz" Jan 27 09:13:52 crc kubenswrapper[4985]: I0127 09:13:52.462840 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwklk\" (UniqueName: \"kubernetes.io/projected/5d8c24c8-a66b-4fbd-a9fa-96863c29880e-kube-api-access-vwklk\") pod \"swift-proxy-68b69f7bc7-n2dvz\" (UID: \"5d8c24c8-a66b-4fbd-a9fa-96863c29880e\") " pod="openstack/swift-proxy-68b69f7bc7-n2dvz" Jan 27 09:13:52 crc kubenswrapper[4985]: I0127 09:13:52.543538 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-68b69f7bc7-n2dvz" Jan 27 09:13:53 crc kubenswrapper[4985]: I0127 09:13:53.804596 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7957b79d47-2xvr4"] Jan 27 09:13:53 crc kubenswrapper[4985]: I0127 09:13:53.876939 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-68b69f7bc7-n2dvz"] Jan 27 09:13:53 crc kubenswrapper[4985]: I0127 09:13:53.893776 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 27 09:13:53 crc kubenswrapper[4985]: I0127 09:13:53.893817 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 27 09:13:53 crc kubenswrapper[4985]: I0127 09:13:53.950999 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-f4789ffd8-wmhds"] Jan 27 09:13:53 crc kubenswrapper[4985]: I0127 09:13:53.956587 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 27 09:13:53 crc kubenswrapper[4985]: I0127 09:13:53.965707 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 27 09:13:53 crc kubenswrapper[4985]: I0127 09:13:53.972046 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7bbcb69c84-cb6bz"] Jan 27 09:13:54 crc kubenswrapper[4985]: I0127 09:13:54.552466 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-68b69f7bc7-n2dvz" event={"ID":"5d8c24c8-a66b-4fbd-a9fa-96863c29880e","Type":"ContainerStarted","Data":"30bf9791eaf8dfd185179312c8c2d7bfa4a11f4614db74f377f2a8dc7c31eced"} Jan 27 09:13:54 crc kubenswrapper[4985]: I0127 09:13:54.553167 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-68b69f7bc7-n2dvz" 
event={"ID":"5d8c24c8-a66b-4fbd-a9fa-96863c29880e","Type":"ContainerStarted","Data":"ae8a93179d3b5aa37630ba58944ca5d77bea4db96a6a063e5a4aaf763fc500a1"} Jan 27 09:13:54 crc kubenswrapper[4985]: I0127 09:13:54.553278 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-68b69f7bc7-n2dvz" event={"ID":"5d8c24c8-a66b-4fbd-a9fa-96863c29880e","Type":"ContainerStarted","Data":"51e0558b8ef994b6938242436c1541e14318ec6be03c4f9f2ce6a64ee2db1516"} Jan 27 09:13:54 crc kubenswrapper[4985]: I0127 09:13:54.553716 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7bbcb69c84-cb6bz" event={"ID":"75790805-0e26-4dc9-9970-dd8b6a332ce7","Type":"ContainerStarted","Data":"3eca4502275b710d8814f95a628ecb09fbe70d72937d77c93b6492fe44646632"} Jan 27 09:13:54 crc kubenswrapper[4985]: I0127 09:13:54.553820 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7bbcb69c84-cb6bz" event={"ID":"75790805-0e26-4dc9-9970-dd8b6a332ce7","Type":"ContainerStarted","Data":"b994df104ceca2e8c975d05523fecf73c6a1b4a4fd01b9702ef2e5b300d74c8a"} Jan 27 09:13:54 crc kubenswrapper[4985]: I0127 09:13:54.554227 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-68b69f7bc7-n2dvz" Jan 27 09:13:54 crc kubenswrapper[4985]: I0127 09:13:54.554249 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-68b69f7bc7-n2dvz" Jan 27 09:13:54 crc kubenswrapper[4985]: I0127 09:13:54.555349 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-dbnnn" event={"ID":"488cf0d5-caf5-4a7c-966c-233b758c0dcd","Type":"ContainerStarted","Data":"799387dc31912b7f540637f30187a0048a0b6cc03a987f98cded780ca2ce066c"} Jan 27 09:13:54 crc kubenswrapper[4985]: I0127 09:13:54.559137 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-f4789ffd8-wmhds" 
event={"ID":"d5892977-6d3e-49f7-91fe-fcdff05221b0","Type":"ContainerStarted","Data":"4818dda9471f4c2a3fa941cce41c8068fecb65d478b064ee1b63335cb09e89e1"} Jan 27 09:13:54 crc kubenswrapper[4985]: I0127 09:13:54.559181 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-f4789ffd8-wmhds" event={"ID":"d5892977-6d3e-49f7-91fe-fcdff05221b0","Type":"ContainerStarted","Data":"baf439f6908bcfbc86c6f7ca7639d7ca271b2d59d3fcd38e104c7a9966c78d94"} Jan 27 09:13:54 crc kubenswrapper[4985]: I0127 09:13:54.572068 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"48bcb21b-2fe6-4822-8536-e3575d036d90","Type":"ContainerStarted","Data":"f6a0c799272a11c581bf4e05f43ad59f278d907f1deeea53991473ee84afb623"} Jan 27 09:13:54 crc kubenswrapper[4985]: I0127 09:13:54.572453 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="48bcb21b-2fe6-4822-8536-e3575d036d90" containerName="ceilometer-central-agent" containerID="cri-o://4001b467272b0e0f52e227d4417d32dc5158f3d6f5792dd3c43e6099b5f5ae21" gracePeriod=30 Jan 27 09:13:54 crc kubenswrapper[4985]: I0127 09:13:54.572608 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 09:13:54 crc kubenswrapper[4985]: I0127 09:13:54.572700 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="48bcb21b-2fe6-4822-8536-e3575d036d90" containerName="ceilometer-notification-agent" containerID="cri-o://5262b7532f7f7248f3cc3dc4717dbf129aecfe6fd76b06248677606be96fc886" gracePeriod=30 Jan 27 09:13:54 crc kubenswrapper[4985]: I0127 09:13:54.572726 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="48bcb21b-2fe6-4822-8536-e3575d036d90" containerName="proxy-httpd" containerID="cri-o://f6a0c799272a11c581bf4e05f43ad59f278d907f1deeea53991473ee84afb623" gracePeriod=30 Jan 27 
09:13:54 crc kubenswrapper[4985]: I0127 09:13:54.572854 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="48bcb21b-2fe6-4822-8536-e3575d036d90" containerName="sg-core" containerID="cri-o://f4c019be5d46425e7bb45e0c7881b328d7e46b0d6b7121cff384eb85d1b59070" gracePeriod=30 Jan 27 09:13:54 crc kubenswrapper[4985]: I0127 09:13:54.584169 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-68b69f7bc7-n2dvz" podStartSLOduration=2.584151108 podStartE2EDuration="2.584151108s" podCreationTimestamp="2026-01-27 09:13:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:13:54.574637097 +0000 UTC m=+1218.865731958" watchObservedRunningTime="2026-01-27 09:13:54.584151108 +0000 UTC m=+1218.875245949" Jan 27 09:13:54 crc kubenswrapper[4985]: I0127 09:13:54.606980 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7957b79d47-2xvr4" event={"ID":"73388136-0e40-4439-95bb-52ef16391821","Type":"ContainerStarted","Data":"c662cd8ed6b7e388dfb3ba80692b3df62fa5e0676ef29f4d78f33a9763c00260"} Jan 27 09:13:54 crc kubenswrapper[4985]: I0127 09:13:54.607017 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7957b79d47-2xvr4" event={"ID":"73388136-0e40-4439-95bb-52ef16391821","Type":"ContainerStarted","Data":"21af7c6cffef9ded7913025c64e39226900ee1c67a6041ad1a9d61af7904fc7e"} Jan 27 09:13:54 crc kubenswrapper[4985]: I0127 09:13:54.607026 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7957b79d47-2xvr4" event={"ID":"73388136-0e40-4439-95bb-52ef16391821","Type":"ContainerStarted","Data":"c8864b50367c13b7a4438a3cba73f78af733bb194c581c2fb7410b66f9cd2201"} Jan 27 09:13:54 crc kubenswrapper[4985]: I0127 09:13:54.607041 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/glance-default-internal-api-0" Jan 27 09:13:54 crc kubenswrapper[4985]: I0127 09:13:54.607147 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 27 09:13:54 crc kubenswrapper[4985]: I0127 09:13:54.618788 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-dbnnn" podStartSLOduration=1.741386725 podStartE2EDuration="16.618766577s" podCreationTimestamp="2026-01-27 09:13:38 +0000 UTC" firstStartedPulling="2026-01-27 09:13:39.016502775 +0000 UTC m=+1203.307597616" lastFinishedPulling="2026-01-27 09:13:53.893882627 +0000 UTC m=+1218.184977468" observedRunningTime="2026-01-27 09:13:54.602148561 +0000 UTC m=+1218.893243402" watchObservedRunningTime="2026-01-27 09:13:54.618766577 +0000 UTC m=+1218.909861428" Jan 27 09:13:54 crc kubenswrapper[4985]: I0127 09:13:54.631487 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.281289681 podStartE2EDuration="11.631469346s" podCreationTimestamp="2026-01-27 09:13:43 +0000 UTC" firstStartedPulling="2026-01-27 09:13:44.60231588 +0000 UTC m=+1208.893410721" lastFinishedPulling="2026-01-27 09:13:53.952495545 +0000 UTC m=+1218.243590386" observedRunningTime="2026-01-27 09:13:54.624072272 +0000 UTC m=+1218.915167113" watchObservedRunningTime="2026-01-27 09:13:54.631469346 +0000 UTC m=+1218.922564187" Jan 27 09:13:54 crc kubenswrapper[4985]: I0127 09:13:54.648145 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-7957b79d47-2xvr4" podStartSLOduration=7.648127283 podStartE2EDuration="7.648127283s" podCreationTimestamp="2026-01-27 09:13:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:13:54.64585944 +0000 UTC m=+1218.936954281" watchObservedRunningTime="2026-01-27 
09:13:54.648127283 +0000 UTC m=+1218.939222124" Jan 27 09:13:54 crc kubenswrapper[4985]: I0127 09:13:54.687470 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-5bf9c57989-7kxf6"] Jan 27 09:13:54 crc kubenswrapper[4985]: I0127 09:13:54.687747 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-5bf9c57989-7kxf6" podUID="fd7d78ce-005f-4c67-9204-5030a19420e2" containerName="barbican-worker-log" containerID="cri-o://ad45e248b765e63b3b93c4a7ad2b343fc62733757eb3b4156c46e63d7d01eb12" gracePeriod=30 Jan 27 09:13:54 crc kubenswrapper[4985]: I0127 09:13:54.687857 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-5bf9c57989-7kxf6" podUID="fd7d78ce-005f-4c67-9204-5030a19420e2" containerName="barbican-worker" containerID="cri-o://856684a913bcd437de4dc925fa7f23219e2a55a1e0d5b8265d43c52ba0a66e9b" gracePeriod=30 Jan 27 09:13:55 crc kubenswrapper[4985]: I0127 09:13:55.013039 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 27 09:13:55 crc kubenswrapper[4985]: I0127 09:13:55.013384 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 27 09:13:55 crc kubenswrapper[4985]: I0127 09:13:55.092959 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 27 09:13:55 crc kubenswrapper[4985]: I0127 09:13:55.107007 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 27 09:13:55 crc kubenswrapper[4985]: I0127 09:13:55.617024 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-f4789ffd8-wmhds" 
event={"ID":"d5892977-6d3e-49f7-91fe-fcdff05221b0","Type":"ContainerStarted","Data":"6c4911bfa93b9410be91134099997465a43f9525cfee5752c8a8d859c325aa99"} Jan 27 09:13:55 crc kubenswrapper[4985]: I0127 09:13:55.617438 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-f4789ffd8-wmhds" Jan 27 09:13:55 crc kubenswrapper[4985]: I0127 09:13:55.617457 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-f4789ffd8-wmhds" Jan 27 09:13:55 crc kubenswrapper[4985]: I0127 09:13:55.620882 4985 generic.go:334] "Generic (PLEG): container finished" podID="48bcb21b-2fe6-4822-8536-e3575d036d90" containerID="f6a0c799272a11c581bf4e05f43ad59f278d907f1deeea53991473ee84afb623" exitCode=0 Jan 27 09:13:55 crc kubenswrapper[4985]: I0127 09:13:55.620918 4985 generic.go:334] "Generic (PLEG): container finished" podID="48bcb21b-2fe6-4822-8536-e3575d036d90" containerID="f4c019be5d46425e7bb45e0c7881b328d7e46b0d6b7121cff384eb85d1b59070" exitCode=2 Jan 27 09:13:55 crc kubenswrapper[4985]: I0127 09:13:55.620930 4985 generic.go:334] "Generic (PLEG): container finished" podID="48bcb21b-2fe6-4822-8536-e3575d036d90" containerID="5262b7532f7f7248f3cc3dc4717dbf129aecfe6fd76b06248677606be96fc886" exitCode=0 Jan 27 09:13:55 crc kubenswrapper[4985]: I0127 09:13:55.620949 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"48bcb21b-2fe6-4822-8536-e3575d036d90","Type":"ContainerDied","Data":"f6a0c799272a11c581bf4e05f43ad59f278d907f1deeea53991473ee84afb623"} Jan 27 09:13:55 crc kubenswrapper[4985]: I0127 09:13:55.620994 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"48bcb21b-2fe6-4822-8536-e3575d036d90","Type":"ContainerDied","Data":"f4c019be5d46425e7bb45e0c7881b328d7e46b0d6b7121cff384eb85d1b59070"} Jan 27 09:13:55 crc kubenswrapper[4985]: I0127 09:13:55.621006 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"48bcb21b-2fe6-4822-8536-e3575d036d90","Type":"ContainerDied","Data":"5262b7532f7f7248f3cc3dc4717dbf129aecfe6fd76b06248677606be96fc886"} Jan 27 09:13:55 crc kubenswrapper[4985]: I0127 09:13:55.623615 4985 generic.go:334] "Generic (PLEG): container finished" podID="fd7d78ce-005f-4c67-9204-5030a19420e2" containerID="ad45e248b765e63b3b93c4a7ad2b343fc62733757eb3b4156c46e63d7d01eb12" exitCode=143 Jan 27 09:13:55 crc kubenswrapper[4985]: I0127 09:13:55.623757 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5bf9c57989-7kxf6" event={"ID":"fd7d78ce-005f-4c67-9204-5030a19420e2","Type":"ContainerDied","Data":"ad45e248b765e63b3b93c4a7ad2b343fc62733757eb3b4156c46e63d7d01eb12"} Jan 27 09:13:55 crc kubenswrapper[4985]: I0127 09:13:55.626183 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7bbcb69c84-cb6bz" event={"ID":"75790805-0e26-4dc9-9970-dd8b6a332ce7","Type":"ContainerStarted","Data":"d29ba2494130a485ea61bb03cca41907efe494bbb8850882f0bc1df613574d53"} Jan 27 09:13:55 crc kubenswrapper[4985]: I0127 09:13:55.627320 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 27 09:13:55 crc kubenswrapper[4985]: I0127 09:13:55.627345 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 27 09:13:55 crc kubenswrapper[4985]: I0127 09:13:55.643190 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-f4789ffd8-wmhds" podStartSLOduration=8.643170552 podStartE2EDuration="8.643170552s" podCreationTimestamp="2026-01-27 09:13:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:13:55.63835218 +0000 UTC m=+1219.929447021" watchObservedRunningTime="2026-01-27 09:13:55.643170552 +0000 UTC 
m=+1219.934265393" Jan 27 09:13:55 crc kubenswrapper[4985]: I0127 09:13:55.666814 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-7bbcb69c84-cb6bz" podStartSLOduration=8.66678329 podStartE2EDuration="8.66678329s" podCreationTimestamp="2026-01-27 09:13:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:13:55.663194261 +0000 UTC m=+1219.954289102" watchObservedRunningTime="2026-01-27 09:13:55.66678329 +0000 UTC m=+1219.957878131" Jan 27 09:13:55 crc kubenswrapper[4985]: I0127 09:13:55.713841 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-7b565c4-5dhmn"] Jan 27 09:13:55 crc kubenswrapper[4985]: I0127 09:13:55.714186 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-7b565c4-5dhmn" podUID="7c55baf3-752e-40a7-acdd-d26df561bf9c" containerName="barbican-keystone-listener-log" containerID="cri-o://9061df571495be79876a29751e086f7d0288618c89245c3a4e8941cfa84b1253" gracePeriod=30 Jan 27 09:13:55 crc kubenswrapper[4985]: I0127 09:13:55.714931 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-7b565c4-5dhmn" podUID="7c55baf3-752e-40a7-acdd-d26df561bf9c" containerName="barbican-keystone-listener" containerID="cri-o://6ae396bbef9791a483b51f2954792792029ca9aac9d07fc90781cf0d3b35165e" gracePeriod=30 Jan 27 09:13:56 crc kubenswrapper[4985]: I0127 09:13:56.636843 4985 generic.go:334] "Generic (PLEG): container finished" podID="7c55baf3-752e-40a7-acdd-d26df561bf9c" containerID="9061df571495be79876a29751e086f7d0288618c89245c3a4e8941cfa84b1253" exitCode=143 Jan 27 09:13:56 crc kubenswrapper[4985]: I0127 09:13:56.637014 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7b565c4-5dhmn" 
event={"ID":"7c55baf3-752e-40a7-acdd-d26df561bf9c","Type":"ContainerDied","Data":"9061df571495be79876a29751e086f7d0288618c89245c3a4e8941cfa84b1253"} Jan 27 09:13:56 crc kubenswrapper[4985]: I0127 09:13:56.637037 4985 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 09:13:56 crc kubenswrapper[4985]: I0127 09:13:56.637109 4985 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 09:13:57 crc kubenswrapper[4985]: I0127 09:13:57.341687 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 27 09:13:57 crc kubenswrapper[4985]: I0127 09:13:57.441437 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-7b565c4-5dhmn" Jan 27 09:13:57 crc kubenswrapper[4985]: I0127 09:13:57.493398 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c55baf3-752e-40a7-acdd-d26df561bf9c-combined-ca-bundle\") pod \"7c55baf3-752e-40a7-acdd-d26df561bf9c\" (UID: \"7c55baf3-752e-40a7-acdd-d26df561bf9c\") " Jan 27 09:13:57 crc kubenswrapper[4985]: I0127 09:13:57.493504 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7xtvn\" (UniqueName: \"kubernetes.io/projected/7c55baf3-752e-40a7-acdd-d26df561bf9c-kube-api-access-7xtvn\") pod \"7c55baf3-752e-40a7-acdd-d26df561bf9c\" (UID: \"7c55baf3-752e-40a7-acdd-d26df561bf9c\") " Jan 27 09:13:57 crc kubenswrapper[4985]: I0127 09:13:57.493689 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7c55baf3-752e-40a7-acdd-d26df561bf9c-config-data-custom\") pod \"7c55baf3-752e-40a7-acdd-d26df561bf9c\" (UID: \"7c55baf3-752e-40a7-acdd-d26df561bf9c\") " Jan 27 09:13:57 crc kubenswrapper[4985]: I0127 09:13:57.493762 4985 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c55baf3-752e-40a7-acdd-d26df561bf9c-config-data\") pod \"7c55baf3-752e-40a7-acdd-d26df561bf9c\" (UID: \"7c55baf3-752e-40a7-acdd-d26df561bf9c\") " Jan 27 09:13:57 crc kubenswrapper[4985]: I0127 09:13:57.493787 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7c55baf3-752e-40a7-acdd-d26df561bf9c-logs\") pod \"7c55baf3-752e-40a7-acdd-d26df561bf9c\" (UID: \"7c55baf3-752e-40a7-acdd-d26df561bf9c\") " Jan 27 09:13:57 crc kubenswrapper[4985]: I0127 09:13:57.495139 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c55baf3-752e-40a7-acdd-d26df561bf9c-logs" (OuterVolumeSpecName: "logs") pod "7c55baf3-752e-40a7-acdd-d26df561bf9c" (UID: "7c55baf3-752e-40a7-acdd-d26df561bf9c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:13:57 crc kubenswrapper[4985]: I0127 09:13:57.502100 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c55baf3-752e-40a7-acdd-d26df561bf9c-kube-api-access-7xtvn" (OuterVolumeSpecName: "kube-api-access-7xtvn") pod "7c55baf3-752e-40a7-acdd-d26df561bf9c" (UID: "7c55baf3-752e-40a7-acdd-d26df561bf9c"). InnerVolumeSpecName "kube-api-access-7xtvn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:13:57 crc kubenswrapper[4985]: I0127 09:13:57.502667 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c55baf3-752e-40a7-acdd-d26df561bf9c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "7c55baf3-752e-40a7-acdd-d26df561bf9c" (UID: "7c55baf3-752e-40a7-acdd-d26df561bf9c"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:13:57 crc kubenswrapper[4985]: I0127 09:13:57.526367 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c55baf3-752e-40a7-acdd-d26df561bf9c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7c55baf3-752e-40a7-acdd-d26df561bf9c" (UID: "7c55baf3-752e-40a7-acdd-d26df561bf9c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:13:57 crc kubenswrapper[4985]: I0127 09:13:57.579289 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c55baf3-752e-40a7-acdd-d26df561bf9c-config-data" (OuterVolumeSpecName: "config-data") pod "7c55baf3-752e-40a7-acdd-d26df561bf9c" (UID: "7c55baf3-752e-40a7-acdd-d26df561bf9c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:13:57 crc kubenswrapper[4985]: I0127 09:13:57.597458 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7xtvn\" (UniqueName: \"kubernetes.io/projected/7c55baf3-752e-40a7-acdd-d26df561bf9c-kube-api-access-7xtvn\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:57 crc kubenswrapper[4985]: I0127 09:13:57.597538 4985 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7c55baf3-752e-40a7-acdd-d26df561bf9c-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:57 crc kubenswrapper[4985]: I0127 09:13:57.597554 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c55baf3-752e-40a7-acdd-d26df561bf9c-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:57 crc kubenswrapper[4985]: I0127 09:13:57.597567 4985 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7c55baf3-752e-40a7-acdd-d26df561bf9c-logs\") on node \"crc\" DevicePath \"\"" Jan 27 
09:13:57 crc kubenswrapper[4985]: I0127 09:13:57.597582 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c55baf3-752e-40a7-acdd-d26df561bf9c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:57 crc kubenswrapper[4985]: I0127 09:13:57.657372 4985 generic.go:334] "Generic (PLEG): container finished" podID="7c55baf3-752e-40a7-acdd-d26df561bf9c" containerID="6ae396bbef9791a483b51f2954792792029ca9aac9d07fc90781cf0d3b35165e" exitCode=0 Jan 27 09:13:57 crc kubenswrapper[4985]: I0127 09:13:57.657471 4985 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 09:13:57 crc kubenswrapper[4985]: I0127 09:13:57.658358 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-7b565c4-5dhmn" Jan 27 09:13:57 crc kubenswrapper[4985]: I0127 09:13:57.659732 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7b565c4-5dhmn" event={"ID":"7c55baf3-752e-40a7-acdd-d26df561bf9c","Type":"ContainerDied","Data":"6ae396bbef9791a483b51f2954792792029ca9aac9d07fc90781cf0d3b35165e"} Jan 27 09:13:57 crc kubenswrapper[4985]: I0127 09:13:57.659827 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7b565c4-5dhmn" event={"ID":"7c55baf3-752e-40a7-acdd-d26df561bf9c","Type":"ContainerDied","Data":"982ff01c68f332ecc4d1ae9e811b55f90dc92f635e1ae47abe6119e93f41e269"} Jan 27 09:13:57 crc kubenswrapper[4985]: I0127 09:13:57.659858 4985 scope.go:117] "RemoveContainer" containerID="6ae396bbef9791a483b51f2954792792029ca9aac9d07fc90781cf0d3b35165e" Jan 27 09:13:57 crc kubenswrapper[4985]: I0127 09:13:57.764921 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-7b565c4-5dhmn"] Jan 27 09:13:57 crc kubenswrapper[4985]: I0127 09:13:57.780084 4985 scope.go:117] "RemoveContainer" 
containerID="9061df571495be79876a29751e086f7d0288618c89245c3a4e8941cfa84b1253" Jan 27 09:13:57 crc kubenswrapper[4985]: I0127 09:13:57.787188 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-7b565c4-5dhmn"] Jan 27 09:13:57 crc kubenswrapper[4985]: I0127 09:13:57.789796 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 27 09:13:57 crc kubenswrapper[4985]: I0127 09:13:57.830296 4985 scope.go:117] "RemoveContainer" containerID="6ae396bbef9791a483b51f2954792792029ca9aac9d07fc90781cf0d3b35165e" Jan 27 09:13:57 crc kubenswrapper[4985]: E0127 09:13:57.851240 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ae396bbef9791a483b51f2954792792029ca9aac9d07fc90781cf0d3b35165e\": container with ID starting with 6ae396bbef9791a483b51f2954792792029ca9aac9d07fc90781cf0d3b35165e not found: ID does not exist" containerID="6ae396bbef9791a483b51f2954792792029ca9aac9d07fc90781cf0d3b35165e" Jan 27 09:13:57 crc kubenswrapper[4985]: I0127 09:13:57.851293 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ae396bbef9791a483b51f2954792792029ca9aac9d07fc90781cf0d3b35165e"} err="failed to get container status \"6ae396bbef9791a483b51f2954792792029ca9aac9d07fc90781cf0d3b35165e\": rpc error: code = NotFound desc = could not find container \"6ae396bbef9791a483b51f2954792792029ca9aac9d07fc90781cf0d3b35165e\": container with ID starting with 6ae396bbef9791a483b51f2954792792029ca9aac9d07fc90781cf0d3b35165e not found: ID does not exist" Jan 27 09:13:57 crc kubenswrapper[4985]: I0127 09:13:57.851322 4985 scope.go:117] "RemoveContainer" containerID="9061df571495be79876a29751e086f7d0288618c89245c3a4e8941cfa84b1253" Jan 27 09:13:57 crc kubenswrapper[4985]: E0127 09:13:57.853397 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code 
= NotFound desc = could not find container \"9061df571495be79876a29751e086f7d0288618c89245c3a4e8941cfa84b1253\": container with ID starting with 9061df571495be79876a29751e086f7d0288618c89245c3a4e8941cfa84b1253 not found: ID does not exist" containerID="9061df571495be79876a29751e086f7d0288618c89245c3a4e8941cfa84b1253" Jan 27 09:13:57 crc kubenswrapper[4985]: I0127 09:13:57.853461 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9061df571495be79876a29751e086f7d0288618c89245c3a4e8941cfa84b1253"} err="failed to get container status \"9061df571495be79876a29751e086f7d0288618c89245c3a4e8941cfa84b1253\": rpc error: code = NotFound desc = could not find container \"9061df571495be79876a29751e086f7d0288618c89245c3a4e8941cfa84b1253\": container with ID starting with 9061df571495be79876a29751e086f7d0288618c89245c3a4e8941cfa84b1253 not found: ID does not exist" Jan 27 09:13:58 crc kubenswrapper[4985]: I0127 09:13:58.291793 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-5bf9c57989-7kxf6" Jan 27 09:13:58 crc kubenswrapper[4985]: I0127 09:13:58.323218 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd7d78ce-005f-4c67-9204-5030a19420e2-config-data\") pod \"fd7d78ce-005f-4c67-9204-5030a19420e2\" (UID: \"fd7d78ce-005f-4c67-9204-5030a19420e2\") " Jan 27 09:13:58 crc kubenswrapper[4985]: I0127 09:13:58.323263 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd7d78ce-005f-4c67-9204-5030a19420e2-logs\") pod \"fd7d78ce-005f-4c67-9204-5030a19420e2\" (UID: \"fd7d78ce-005f-4c67-9204-5030a19420e2\") " Jan 27 09:13:58 crc kubenswrapper[4985]: I0127 09:13:58.323342 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fd7d78ce-005f-4c67-9204-5030a19420e2-config-data-custom\") pod \"fd7d78ce-005f-4c67-9204-5030a19420e2\" (UID: \"fd7d78ce-005f-4c67-9204-5030a19420e2\") " Jan 27 09:13:58 crc kubenswrapper[4985]: I0127 09:13:58.323391 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd7d78ce-005f-4c67-9204-5030a19420e2-combined-ca-bundle\") pod \"fd7d78ce-005f-4c67-9204-5030a19420e2\" (UID: \"fd7d78ce-005f-4c67-9204-5030a19420e2\") " Jan 27 09:13:58 crc kubenswrapper[4985]: I0127 09:13:58.323425 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnppx\" (UniqueName: \"kubernetes.io/projected/fd7d78ce-005f-4c67-9204-5030a19420e2-kube-api-access-mnppx\") pod \"fd7d78ce-005f-4c67-9204-5030a19420e2\" (UID: \"fd7d78ce-005f-4c67-9204-5030a19420e2\") " Jan 27 09:13:58 crc kubenswrapper[4985]: I0127 09:13:58.340461 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/fd7d78ce-005f-4c67-9204-5030a19420e2-logs" (OuterVolumeSpecName: "logs") pod "fd7d78ce-005f-4c67-9204-5030a19420e2" (UID: "fd7d78ce-005f-4c67-9204-5030a19420e2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:13:58 crc kubenswrapper[4985]: I0127 09:13:58.344522 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd7d78ce-005f-4c67-9204-5030a19420e2-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "fd7d78ce-005f-4c67-9204-5030a19420e2" (UID: "fd7d78ce-005f-4c67-9204-5030a19420e2"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:13:58 crc kubenswrapper[4985]: I0127 09:13:58.347888 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd7d78ce-005f-4c67-9204-5030a19420e2-kube-api-access-mnppx" (OuterVolumeSpecName: "kube-api-access-mnppx") pod "fd7d78ce-005f-4c67-9204-5030a19420e2" (UID: "fd7d78ce-005f-4c67-9204-5030a19420e2"). InnerVolumeSpecName "kube-api-access-mnppx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:13:58 crc kubenswrapper[4985]: I0127 09:13:58.385799 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd7d78ce-005f-4c67-9204-5030a19420e2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fd7d78ce-005f-4c67-9204-5030a19420e2" (UID: "fd7d78ce-005f-4c67-9204-5030a19420e2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:13:58 crc kubenswrapper[4985]: I0127 09:13:58.426766 4985 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fd7d78ce-005f-4c67-9204-5030a19420e2-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:58 crc kubenswrapper[4985]: I0127 09:13:58.426805 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd7d78ce-005f-4c67-9204-5030a19420e2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:58 crc kubenswrapper[4985]: I0127 09:13:58.426814 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnppx\" (UniqueName: \"kubernetes.io/projected/fd7d78ce-005f-4c67-9204-5030a19420e2-kube-api-access-mnppx\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:58 crc kubenswrapper[4985]: I0127 09:13:58.426832 4985 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd7d78ce-005f-4c67-9204-5030a19420e2-logs\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:58 crc kubenswrapper[4985]: I0127 09:13:58.427507 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 27 09:13:58 crc kubenswrapper[4985]: I0127 09:13:58.427629 4985 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 09:13:58 crc kubenswrapper[4985]: I0127 09:13:58.464460 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd7d78ce-005f-4c67-9204-5030a19420e2-config-data" (OuterVolumeSpecName: "config-data") pod "fd7d78ce-005f-4c67-9204-5030a19420e2" (UID: "fd7d78ce-005f-4c67-9204-5030a19420e2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:13:58 crc kubenswrapper[4985]: I0127 09:13:58.494187 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c55baf3-752e-40a7-acdd-d26df561bf9c" path="/var/lib/kubelet/pods/7c55baf3-752e-40a7-acdd-d26df561bf9c/volumes" Jan 27 09:13:58 crc kubenswrapper[4985]: I0127 09:13:58.529469 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd7d78ce-005f-4c67-9204-5030a19420e2-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:58 crc kubenswrapper[4985]: I0127 09:13:58.533882 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 27 09:13:58 crc kubenswrapper[4985]: I0127 09:13:58.674184 4985 generic.go:334] "Generic (PLEG): container finished" podID="fd7d78ce-005f-4c67-9204-5030a19420e2" containerID="856684a913bcd437de4dc925fa7f23219e2a55a1e0d5b8265d43c52ba0a66e9b" exitCode=0 Jan 27 09:13:58 crc kubenswrapper[4985]: I0127 09:13:58.675785 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-5bf9c57989-7kxf6" Jan 27 09:13:58 crc kubenswrapper[4985]: I0127 09:13:58.676499 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5bf9c57989-7kxf6" event={"ID":"fd7d78ce-005f-4c67-9204-5030a19420e2","Type":"ContainerDied","Data":"856684a913bcd437de4dc925fa7f23219e2a55a1e0d5b8265d43c52ba0a66e9b"} Jan 27 09:13:58 crc kubenswrapper[4985]: I0127 09:13:58.676569 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5bf9c57989-7kxf6" event={"ID":"fd7d78ce-005f-4c67-9204-5030a19420e2","Type":"ContainerDied","Data":"9cde7cfa6efaa8703f76824b483ce813cbebf258f3881c79c3ad1d2ed4098215"} Jan 27 09:13:58 crc kubenswrapper[4985]: I0127 09:13:58.676601 4985 scope.go:117] "RemoveContainer" containerID="856684a913bcd437de4dc925fa7f23219e2a55a1e0d5b8265d43c52ba0a66e9b" Jan 27 09:13:58 crc kubenswrapper[4985]: I0127 09:13:58.706662 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-5bf9c57989-7kxf6"] Jan 27 09:13:58 crc kubenswrapper[4985]: I0127 09:13:58.716397 4985 scope.go:117] "RemoveContainer" containerID="ad45e248b765e63b3b93c4a7ad2b343fc62733757eb3b4156c46e63d7d01eb12" Jan 27 09:13:58 crc kubenswrapper[4985]: I0127 09:13:58.722759 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-5bf9c57989-7kxf6"] Jan 27 09:13:58 crc kubenswrapper[4985]: I0127 09:13:58.748919 4985 scope.go:117] "RemoveContainer" containerID="856684a913bcd437de4dc925fa7f23219e2a55a1e0d5b8265d43c52ba0a66e9b" Jan 27 09:13:58 crc kubenswrapper[4985]: E0127 09:13:58.749692 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"856684a913bcd437de4dc925fa7f23219e2a55a1e0d5b8265d43c52ba0a66e9b\": container with ID starting with 856684a913bcd437de4dc925fa7f23219e2a55a1e0d5b8265d43c52ba0a66e9b not found: ID does not exist" 
containerID="856684a913bcd437de4dc925fa7f23219e2a55a1e0d5b8265d43c52ba0a66e9b" Jan 27 09:13:58 crc kubenswrapper[4985]: I0127 09:13:58.749746 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"856684a913bcd437de4dc925fa7f23219e2a55a1e0d5b8265d43c52ba0a66e9b"} err="failed to get container status \"856684a913bcd437de4dc925fa7f23219e2a55a1e0d5b8265d43c52ba0a66e9b\": rpc error: code = NotFound desc = could not find container \"856684a913bcd437de4dc925fa7f23219e2a55a1e0d5b8265d43c52ba0a66e9b\": container with ID starting with 856684a913bcd437de4dc925fa7f23219e2a55a1e0d5b8265d43c52ba0a66e9b not found: ID does not exist" Jan 27 09:13:58 crc kubenswrapper[4985]: I0127 09:13:58.749791 4985 scope.go:117] "RemoveContainer" containerID="ad45e248b765e63b3b93c4a7ad2b343fc62733757eb3b4156c46e63d7d01eb12" Jan 27 09:13:58 crc kubenswrapper[4985]: E0127 09:13:58.750307 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad45e248b765e63b3b93c4a7ad2b343fc62733757eb3b4156c46e63d7d01eb12\": container with ID starting with ad45e248b765e63b3b93c4a7ad2b343fc62733757eb3b4156c46e63d7d01eb12 not found: ID does not exist" containerID="ad45e248b765e63b3b93c4a7ad2b343fc62733757eb3b4156c46e63d7d01eb12" Jan 27 09:13:58 crc kubenswrapper[4985]: I0127 09:13:58.750399 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad45e248b765e63b3b93c4a7ad2b343fc62733757eb3b4156c46e63d7d01eb12"} err="failed to get container status \"ad45e248b765e63b3b93c4a7ad2b343fc62733757eb3b4156c46e63d7d01eb12\": rpc error: code = NotFound desc = could not find container \"ad45e248b765e63b3b93c4a7ad2b343fc62733757eb3b4156c46e63d7d01eb12\": container with ID starting with ad45e248b765e63b3b93c4a7ad2b343fc62733757eb3b4156c46e63d7d01eb12 not found: ID does not exist" Jan 27 09:13:59 crc kubenswrapper[4985]: E0127 09:13:59.564046 4985 fsHandler.go:119] 
failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/6b21575c74f4b35c283de1687452c475017b594fa34c069c82fa474c6797fe75/diff" to get inode usage: stat /var/lib/containers/storage/overlay/6b21575c74f4b35c283de1687452c475017b594fa34c069c82fa474c6797fe75/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/openstack_barbican-worker-5bf9c57989-7kxf6_fd7d78ce-005f-4c67-9204-5030a19420e2/barbican-worker-log/0.log" to get inode usage: stat /var/log/pods/openstack_barbican-worker-5bf9c57989-7kxf6_fd7d78ce-005f-4c67-9204-5030a19420e2/barbican-worker-log/0.log: no such file or directory Jan 27 09:13:59 crc kubenswrapper[4985]: E0127 09:13:59.567299 4985 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/b57e295b56200ce9e04ab0da71a923748162326a70be8647bbd5e60317f55a7f/diff" to get inode usage: stat /var/lib/containers/storage/overlay/b57e295b56200ce9e04ab0da71a923748162326a70be8647bbd5e60317f55a7f/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/openstack_barbican-keystone-listener-7b565c4-5dhmn_7c55baf3-752e-40a7-acdd-d26df561bf9c/barbican-keystone-listener-log/0.log" to get inode usage: stat /var/log/pods/openstack_barbican-keystone-listener-7b565c4-5dhmn_7c55baf3-752e-40a7-acdd-d26df561bf9c/barbican-keystone-listener-log/0.log: no such file or directory Jan 27 09:13:59 crc kubenswrapper[4985]: I0127 09:13:59.694785 4985 generic.go:334] "Generic (PLEG): container finished" podID="48bcb21b-2fe6-4822-8536-e3575d036d90" containerID="4001b467272b0e0f52e227d4417d32dc5158f3d6f5792dd3c43e6099b5f5ae21" exitCode=0 Jan 27 09:13:59 crc kubenswrapper[4985]: I0127 09:13:59.694869 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"48bcb21b-2fe6-4822-8536-e3575d036d90","Type":"ContainerDied","Data":"4001b467272b0e0f52e227d4417d32dc5158f3d6f5792dd3c43e6099b5f5ae21"} Jan 27 
09:14:00 crc kubenswrapper[4985]: I0127 09:14:00.138644 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 09:14:00 crc kubenswrapper[4985]: I0127 09:14:00.165712 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/48bcb21b-2fe6-4822-8536-e3575d036d90-log-httpd\") pod \"48bcb21b-2fe6-4822-8536-e3575d036d90\" (UID: \"48bcb21b-2fe6-4822-8536-e3575d036d90\") " Jan 27 09:14:00 crc kubenswrapper[4985]: I0127 09:14:00.165779 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48bcb21b-2fe6-4822-8536-e3575d036d90-combined-ca-bundle\") pod \"48bcb21b-2fe6-4822-8536-e3575d036d90\" (UID: \"48bcb21b-2fe6-4822-8536-e3575d036d90\") " Jan 27 09:14:00 crc kubenswrapper[4985]: I0127 09:14:00.165876 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48bcb21b-2fe6-4822-8536-e3575d036d90-scripts\") pod \"48bcb21b-2fe6-4822-8536-e3575d036d90\" (UID: \"48bcb21b-2fe6-4822-8536-e3575d036d90\") " Jan 27 09:14:00 crc kubenswrapper[4985]: I0127 09:14:00.165939 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/48bcb21b-2fe6-4822-8536-e3575d036d90-sg-core-conf-yaml\") pod \"48bcb21b-2fe6-4822-8536-e3575d036d90\" (UID: \"48bcb21b-2fe6-4822-8536-e3575d036d90\") " Jan 27 09:14:00 crc kubenswrapper[4985]: I0127 09:14:00.165966 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/48bcb21b-2fe6-4822-8536-e3575d036d90-run-httpd\") pod \"48bcb21b-2fe6-4822-8536-e3575d036d90\" (UID: \"48bcb21b-2fe6-4822-8536-e3575d036d90\") " Jan 27 09:14:00 crc kubenswrapper[4985]: I0127 09:14:00.166006 4985 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48bcb21b-2fe6-4822-8536-e3575d036d90-config-data\") pod \"48bcb21b-2fe6-4822-8536-e3575d036d90\" (UID: \"48bcb21b-2fe6-4822-8536-e3575d036d90\") " Jan 27 09:14:00 crc kubenswrapper[4985]: I0127 09:14:00.166037 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xkzq8\" (UniqueName: \"kubernetes.io/projected/48bcb21b-2fe6-4822-8536-e3575d036d90-kube-api-access-xkzq8\") pod \"48bcb21b-2fe6-4822-8536-e3575d036d90\" (UID: \"48bcb21b-2fe6-4822-8536-e3575d036d90\") " Jan 27 09:14:00 crc kubenswrapper[4985]: I0127 09:14:00.166259 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48bcb21b-2fe6-4822-8536-e3575d036d90-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "48bcb21b-2fe6-4822-8536-e3575d036d90" (UID: "48bcb21b-2fe6-4822-8536-e3575d036d90"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:14:00 crc kubenswrapper[4985]: I0127 09:14:00.166338 4985 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/48bcb21b-2fe6-4822-8536-e3575d036d90-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:00 crc kubenswrapper[4985]: I0127 09:14:00.166864 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48bcb21b-2fe6-4822-8536-e3575d036d90-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "48bcb21b-2fe6-4822-8536-e3575d036d90" (UID: "48bcb21b-2fe6-4822-8536-e3575d036d90"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:14:00 crc kubenswrapper[4985]: I0127 09:14:00.176815 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48bcb21b-2fe6-4822-8536-e3575d036d90-kube-api-access-xkzq8" (OuterVolumeSpecName: "kube-api-access-xkzq8") pod "48bcb21b-2fe6-4822-8536-e3575d036d90" (UID: "48bcb21b-2fe6-4822-8536-e3575d036d90"). InnerVolumeSpecName "kube-api-access-xkzq8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:14:00 crc kubenswrapper[4985]: I0127 09:14:00.179783 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48bcb21b-2fe6-4822-8536-e3575d036d90-scripts" (OuterVolumeSpecName: "scripts") pod "48bcb21b-2fe6-4822-8536-e3575d036d90" (UID: "48bcb21b-2fe6-4822-8536-e3575d036d90"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:14:00 crc kubenswrapper[4985]: I0127 09:14:00.223495 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48bcb21b-2fe6-4822-8536-e3575d036d90-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "48bcb21b-2fe6-4822-8536-e3575d036d90" (UID: "48bcb21b-2fe6-4822-8536-e3575d036d90"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:14:00 crc kubenswrapper[4985]: I0127 09:14:00.269933 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xkzq8\" (UniqueName: \"kubernetes.io/projected/48bcb21b-2fe6-4822-8536-e3575d036d90-kube-api-access-xkzq8\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:00 crc kubenswrapper[4985]: I0127 09:14:00.269991 4985 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48bcb21b-2fe6-4822-8536-e3575d036d90-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:00 crc kubenswrapper[4985]: I0127 09:14:00.270010 4985 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/48bcb21b-2fe6-4822-8536-e3575d036d90-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:00 crc kubenswrapper[4985]: I0127 09:14:00.270024 4985 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/48bcb21b-2fe6-4822-8536-e3575d036d90-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:00 crc kubenswrapper[4985]: I0127 09:14:00.302586 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48bcb21b-2fe6-4822-8536-e3575d036d90-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "48bcb21b-2fe6-4822-8536-e3575d036d90" (UID: "48bcb21b-2fe6-4822-8536-e3575d036d90"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:14:00 crc kubenswrapper[4985]: I0127 09:14:00.306101 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48bcb21b-2fe6-4822-8536-e3575d036d90-config-data" (OuterVolumeSpecName: "config-data") pod "48bcb21b-2fe6-4822-8536-e3575d036d90" (UID: "48bcb21b-2fe6-4822-8536-e3575d036d90"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:14:00 crc kubenswrapper[4985]: I0127 09:14:00.372924 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48bcb21b-2fe6-4822-8536-e3575d036d90-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:00 crc kubenswrapper[4985]: I0127 09:14:00.372968 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48bcb21b-2fe6-4822-8536-e3575d036d90-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:00 crc kubenswrapper[4985]: E0127 09:14:00.375139 4985 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/d40357bfa3161a5014af47010fd1a738f1c98684f22807918c31c37f2594ecd2/diff" to get inode usage: stat /var/lib/containers/storage/overlay/d40357bfa3161a5014af47010fd1a738f1c98684f22807918c31c37f2594ecd2/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/openstack_barbican-keystone-listener-7b565c4-5dhmn_7c55baf3-752e-40a7-acdd-d26df561bf9c/barbican-keystone-listener/0.log" to get inode usage: stat /var/log/pods/openstack_barbican-keystone-listener-7b565c4-5dhmn_7c55baf3-752e-40a7-acdd-d26df561bf9c/barbican-keystone-listener/0.log: no such file or directory Jan 27 09:14:00 crc kubenswrapper[4985]: E0127 09:14:00.394355 4985 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/e78d522bb349d33671e3cc15c59f55b63e32df78baa9250147392dc5ea1a15fa/diff" to get inode usage: stat /var/lib/containers/storage/overlay/e78d522bb349d33671e3cc15c59f55b63e32df78baa9250147392dc5ea1a15fa/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/openstack_barbican-worker-5bf9c57989-7kxf6_fd7d78ce-005f-4c67-9204-5030a19420e2/barbican-worker/0.log" to get inode usage: stat 
/var/log/pods/openstack_barbican-worker-5bf9c57989-7kxf6_fd7d78ce-005f-4c67-9204-5030a19420e2/barbican-worker/0.log: no such file or directory Jan 27 09:14:00 crc kubenswrapper[4985]: I0127 09:14:00.464025 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd7d78ce-005f-4c67-9204-5030a19420e2" path="/var/lib/kubelet/pods/fd7d78ce-005f-4c67-9204-5030a19420e2/volumes" Jan 27 09:14:00 crc kubenswrapper[4985]: I0127 09:14:00.709577 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"48bcb21b-2fe6-4822-8536-e3575d036d90","Type":"ContainerDied","Data":"3970c8168ad6bf47a36812e72d1405175472ede5366bd092fe39b596c588fee6"} Jan 27 09:14:00 crc kubenswrapper[4985]: I0127 09:14:00.710545 4985 scope.go:117] "RemoveContainer" containerID="f6a0c799272a11c581bf4e05f43ad59f278d907f1deeea53991473ee84afb623" Jan 27 09:14:00 crc kubenswrapper[4985]: I0127 09:14:00.710891 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 09:14:00 crc kubenswrapper[4985]: I0127 09:14:00.744040 4985 scope.go:117] "RemoveContainer" containerID="f4c019be5d46425e7bb45e0c7881b328d7e46b0d6b7121cff384eb85d1b59070" Jan 27 09:14:00 crc kubenswrapper[4985]: I0127 09:14:00.786388 4985 scope.go:117] "RemoveContainer" containerID="5262b7532f7f7248f3cc3dc4717dbf129aecfe6fd76b06248677606be96fc886" Jan 27 09:14:00 crc kubenswrapper[4985]: I0127 09:14:00.786579 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 09:14:00 crc kubenswrapper[4985]: I0127 09:14:00.809143 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 09:14:00 crc kubenswrapper[4985]: I0127 09:14:00.830299 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 09:14:00 crc kubenswrapper[4985]: E0127 09:14:00.830752 4985 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="48bcb21b-2fe6-4822-8536-e3575d036d90" containerName="ceilometer-notification-agent" Jan 27 09:14:00 crc kubenswrapper[4985]: I0127 09:14:00.830771 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="48bcb21b-2fe6-4822-8536-e3575d036d90" containerName="ceilometer-notification-agent" Jan 27 09:14:00 crc kubenswrapper[4985]: E0127 09:14:00.830781 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48bcb21b-2fe6-4822-8536-e3575d036d90" containerName="ceilometer-central-agent" Jan 27 09:14:00 crc kubenswrapper[4985]: I0127 09:14:00.830788 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="48bcb21b-2fe6-4822-8536-e3575d036d90" containerName="ceilometer-central-agent" Jan 27 09:14:00 crc kubenswrapper[4985]: E0127 09:14:00.830811 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48bcb21b-2fe6-4822-8536-e3575d036d90" containerName="sg-core" Jan 27 09:14:00 crc kubenswrapper[4985]: I0127 09:14:00.830817 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="48bcb21b-2fe6-4822-8536-e3575d036d90" containerName="sg-core" Jan 27 09:14:00 crc kubenswrapper[4985]: E0127 09:14:00.830828 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48bcb21b-2fe6-4822-8536-e3575d036d90" containerName="proxy-httpd" Jan 27 09:14:00 crc kubenswrapper[4985]: I0127 09:14:00.830834 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="48bcb21b-2fe6-4822-8536-e3575d036d90" containerName="proxy-httpd" Jan 27 09:14:00 crc kubenswrapper[4985]: E0127 09:14:00.830846 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c55baf3-752e-40a7-acdd-d26df561bf9c" containerName="barbican-keystone-listener-log" Jan 27 09:14:00 crc kubenswrapper[4985]: I0127 09:14:00.830852 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c55baf3-752e-40a7-acdd-d26df561bf9c" containerName="barbican-keystone-listener-log" Jan 27 09:14:00 crc kubenswrapper[4985]: E0127 09:14:00.830865 4985 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="fd7d78ce-005f-4c67-9204-5030a19420e2" containerName="barbican-worker-log" Jan 27 09:14:00 crc kubenswrapper[4985]: I0127 09:14:00.830872 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd7d78ce-005f-4c67-9204-5030a19420e2" containerName="barbican-worker-log" Jan 27 09:14:00 crc kubenswrapper[4985]: E0127 09:14:00.830881 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c55baf3-752e-40a7-acdd-d26df561bf9c" containerName="barbican-keystone-listener" Jan 27 09:14:00 crc kubenswrapper[4985]: I0127 09:14:00.830886 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c55baf3-752e-40a7-acdd-d26df561bf9c" containerName="barbican-keystone-listener" Jan 27 09:14:00 crc kubenswrapper[4985]: E0127 09:14:00.830895 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd7d78ce-005f-4c67-9204-5030a19420e2" containerName="barbican-worker" Jan 27 09:14:00 crc kubenswrapper[4985]: I0127 09:14:00.830900 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd7d78ce-005f-4c67-9204-5030a19420e2" containerName="barbican-worker" Jan 27 09:14:00 crc kubenswrapper[4985]: I0127 09:14:00.831056 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd7d78ce-005f-4c67-9204-5030a19420e2" containerName="barbican-worker-log" Jan 27 09:14:00 crc kubenswrapper[4985]: I0127 09:14:00.831066 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd7d78ce-005f-4c67-9204-5030a19420e2" containerName="barbican-worker" Jan 27 09:14:00 crc kubenswrapper[4985]: I0127 09:14:00.831075 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c55baf3-752e-40a7-acdd-d26df561bf9c" containerName="barbican-keystone-listener-log" Jan 27 09:14:00 crc kubenswrapper[4985]: I0127 09:14:00.831089 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="48bcb21b-2fe6-4822-8536-e3575d036d90" containerName="proxy-httpd" Jan 27 09:14:00 crc 
kubenswrapper[4985]: I0127 09:14:00.831099 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="48bcb21b-2fe6-4822-8536-e3575d036d90" containerName="ceilometer-notification-agent" Jan 27 09:14:00 crc kubenswrapper[4985]: I0127 09:14:00.831108 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c55baf3-752e-40a7-acdd-d26df561bf9c" containerName="barbican-keystone-listener" Jan 27 09:14:00 crc kubenswrapper[4985]: I0127 09:14:00.831118 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="48bcb21b-2fe6-4822-8536-e3575d036d90" containerName="ceilometer-central-agent" Jan 27 09:14:00 crc kubenswrapper[4985]: I0127 09:14:00.831131 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="48bcb21b-2fe6-4822-8536-e3575d036d90" containerName="sg-core" Jan 27 09:14:00 crc kubenswrapper[4985]: I0127 09:14:00.832820 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 09:14:00 crc kubenswrapper[4985]: I0127 09:14:00.836683 4985 scope.go:117] "RemoveContainer" containerID="4001b467272b0e0f52e227d4417d32dc5158f3d6f5792dd3c43e6099b5f5ae21" Jan 27 09:14:00 crc kubenswrapper[4985]: I0127 09:14:00.837155 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 09:14:00 crc kubenswrapper[4985]: I0127 09:14:00.839873 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 09:14:00 crc kubenswrapper[4985]: I0127 09:14:00.854839 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 09:14:00 crc kubenswrapper[4985]: I0127 09:14:00.895373 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2548757-fd02-4c5a-9623-0b1148405dc9-config-data\") pod \"ceilometer-0\" (UID: \"f2548757-fd02-4c5a-9623-0b1148405dc9\") " 
pod="openstack/ceilometer-0" Jan 27 09:14:00 crc kubenswrapper[4985]: I0127 09:14:00.895442 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2548757-fd02-4c5a-9623-0b1148405dc9-scripts\") pod \"ceilometer-0\" (UID: \"f2548757-fd02-4c5a-9623-0b1148405dc9\") " pod="openstack/ceilometer-0" Jan 27 09:14:00 crc kubenswrapper[4985]: I0127 09:14:00.895466 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2548757-fd02-4c5a-9623-0b1148405dc9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f2548757-fd02-4c5a-9623-0b1148405dc9\") " pod="openstack/ceilometer-0" Jan 27 09:14:00 crc kubenswrapper[4985]: I0127 09:14:00.895655 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2548757-fd02-4c5a-9623-0b1148405dc9-run-httpd\") pod \"ceilometer-0\" (UID: \"f2548757-fd02-4c5a-9623-0b1148405dc9\") " pod="openstack/ceilometer-0" Jan 27 09:14:00 crc kubenswrapper[4985]: I0127 09:14:00.895691 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2548757-fd02-4c5a-9623-0b1148405dc9-log-httpd\") pod \"ceilometer-0\" (UID: \"f2548757-fd02-4c5a-9623-0b1148405dc9\") " pod="openstack/ceilometer-0" Jan 27 09:14:00 crc kubenswrapper[4985]: I0127 09:14:00.895763 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f2548757-fd02-4c5a-9623-0b1148405dc9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f2548757-fd02-4c5a-9623-0b1148405dc9\") " pod="openstack/ceilometer-0" Jan 27 09:14:00 crc kubenswrapper[4985]: I0127 09:14:00.895796 4985 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h49b8\" (UniqueName: \"kubernetes.io/projected/f2548757-fd02-4c5a-9623-0b1148405dc9-kube-api-access-h49b8\") pod \"ceilometer-0\" (UID: \"f2548757-fd02-4c5a-9623-0b1148405dc9\") " pod="openstack/ceilometer-0" Jan 27 09:14:00 crc kubenswrapper[4985]: I0127 09:14:00.996500 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2548757-fd02-4c5a-9623-0b1148405dc9-config-data\") pod \"ceilometer-0\" (UID: \"f2548757-fd02-4c5a-9623-0b1148405dc9\") " pod="openstack/ceilometer-0" Jan 27 09:14:00 crc kubenswrapper[4985]: I0127 09:14:00.996613 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2548757-fd02-4c5a-9623-0b1148405dc9-scripts\") pod \"ceilometer-0\" (UID: \"f2548757-fd02-4c5a-9623-0b1148405dc9\") " pod="openstack/ceilometer-0" Jan 27 09:14:00 crc kubenswrapper[4985]: I0127 09:14:00.996644 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2548757-fd02-4c5a-9623-0b1148405dc9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f2548757-fd02-4c5a-9623-0b1148405dc9\") " pod="openstack/ceilometer-0" Jan 27 09:14:00 crc kubenswrapper[4985]: I0127 09:14:00.996743 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2548757-fd02-4c5a-9623-0b1148405dc9-run-httpd\") pod \"ceilometer-0\" (UID: \"f2548757-fd02-4c5a-9623-0b1148405dc9\") " pod="openstack/ceilometer-0" Jan 27 09:14:00 crc kubenswrapper[4985]: I0127 09:14:00.996778 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2548757-fd02-4c5a-9623-0b1148405dc9-log-httpd\") pod \"ceilometer-0\" (UID: 
\"f2548757-fd02-4c5a-9623-0b1148405dc9\") " pod="openstack/ceilometer-0" Jan 27 09:14:00 crc kubenswrapper[4985]: I0127 09:14:00.996823 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f2548757-fd02-4c5a-9623-0b1148405dc9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f2548757-fd02-4c5a-9623-0b1148405dc9\") " pod="openstack/ceilometer-0" Jan 27 09:14:00 crc kubenswrapper[4985]: I0127 09:14:00.996854 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h49b8\" (UniqueName: \"kubernetes.io/projected/f2548757-fd02-4c5a-9623-0b1148405dc9-kube-api-access-h49b8\") pod \"ceilometer-0\" (UID: \"f2548757-fd02-4c5a-9623-0b1148405dc9\") " pod="openstack/ceilometer-0" Jan 27 09:14:00 crc kubenswrapper[4985]: I0127 09:14:00.999063 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2548757-fd02-4c5a-9623-0b1148405dc9-log-httpd\") pod \"ceilometer-0\" (UID: \"f2548757-fd02-4c5a-9623-0b1148405dc9\") " pod="openstack/ceilometer-0" Jan 27 09:14:00 crc kubenswrapper[4985]: I0127 09:14:00.999141 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2548757-fd02-4c5a-9623-0b1148405dc9-run-httpd\") pod \"ceilometer-0\" (UID: \"f2548757-fd02-4c5a-9623-0b1148405dc9\") " pod="openstack/ceilometer-0" Jan 27 09:14:01 crc kubenswrapper[4985]: I0127 09:14:01.003796 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f2548757-fd02-4c5a-9623-0b1148405dc9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f2548757-fd02-4c5a-9623-0b1148405dc9\") " pod="openstack/ceilometer-0" Jan 27 09:14:01 crc kubenswrapper[4985]: I0127 09:14:01.004502 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/f2548757-fd02-4c5a-9623-0b1148405dc9-scripts\") pod \"ceilometer-0\" (UID: \"f2548757-fd02-4c5a-9623-0b1148405dc9\") " pod="openstack/ceilometer-0" Jan 27 09:14:01 crc kubenswrapper[4985]: I0127 09:14:01.004860 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2548757-fd02-4c5a-9623-0b1148405dc9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f2548757-fd02-4c5a-9623-0b1148405dc9\") " pod="openstack/ceilometer-0" Jan 27 09:14:01 crc kubenswrapper[4985]: I0127 09:14:01.006663 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2548757-fd02-4c5a-9623-0b1148405dc9-config-data\") pod \"ceilometer-0\" (UID: \"f2548757-fd02-4c5a-9623-0b1148405dc9\") " pod="openstack/ceilometer-0" Jan 27 09:14:01 crc kubenswrapper[4985]: I0127 09:14:01.020575 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h49b8\" (UniqueName: \"kubernetes.io/projected/f2548757-fd02-4c5a-9623-0b1148405dc9-kube-api-access-h49b8\") pod \"ceilometer-0\" (UID: \"f2548757-fd02-4c5a-9623-0b1148405dc9\") " pod="openstack/ceilometer-0" Jan 27 09:14:01 crc kubenswrapper[4985]: I0127 09:14:01.184145 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 09:14:01 crc kubenswrapper[4985]: I0127 09:14:01.705334 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 09:14:02 crc kubenswrapper[4985]: I0127 09:14:02.462994 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48bcb21b-2fe6-4822-8536-e3575d036d90" path="/var/lib/kubelet/pods/48bcb21b-2fe6-4822-8536-e3575d036d90/volumes" Jan 27 09:14:02 crc kubenswrapper[4985]: I0127 09:14:02.554846 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-68b69f7bc7-n2dvz" Jan 27 09:14:02 crc kubenswrapper[4985]: I0127 09:14:02.559692 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-68b69f7bc7-n2dvz" Jan 27 09:14:02 crc kubenswrapper[4985]: I0127 09:14:02.687126 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-689489568f-6ggjw"] Jan 27 09:14:02 crc kubenswrapper[4985]: I0127 09:14:02.687591 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-689489568f-6ggjw" podUID="3193865d-81a4-4cb6-baee-7f44246f4caa" containerName="proxy-httpd" containerID="cri-o://342718c9c057fc90aa6c429aff3b351bd0d359aa618f143a3adf6e3f1842c3f9" gracePeriod=30 Jan 27 09:14:02 crc kubenswrapper[4985]: I0127 09:14:02.687922 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-689489568f-6ggjw" podUID="3193865d-81a4-4cb6-baee-7f44246f4caa" containerName="proxy-server" containerID="cri-o://a67e828b20cd21bdb6367204032201ce9c5a8ade30c74546a82181c8f88b063f" gracePeriod=30 Jan 27 09:14:02 crc kubenswrapper[4985]: I0127 09:14:02.763063 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2548757-fd02-4c5a-9623-0b1148405dc9","Type":"ContainerStarted","Data":"3df23e6d595b9fe13d34edf998e02c12278353678a0c975beedcd710e25bcf0a"} 
Jan 27 09:14:02 crc kubenswrapper[4985]: I0127 09:14:02.763107 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2548757-fd02-4c5a-9623-0b1148405dc9","Type":"ContainerStarted","Data":"bff79c0552c944d46f48f254532bb01f0f90751b03fed3e8828cdf836253af0e"} Jan 27 09:14:02 crc kubenswrapper[4985]: I0127 09:14:02.889692 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-689489568f-6ggjw" podUID="3193865d-81a4-4cb6-baee-7f44246f4caa" containerName="proxy-server" probeResult="failure" output="Get \"https://10.217.0.171:8080/healthcheck\": dial tcp 10.217.0.171:8080: connect: connection refused" Jan 27 09:14:02 crc kubenswrapper[4985]: I0127 09:14:02.889736 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-689489568f-6ggjw" podUID="3193865d-81a4-4cb6-baee-7f44246f4caa" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.171:8080/healthcheck\": dial tcp 10.217.0.171:8080: connect: connection refused" Jan 27 09:14:03 crc kubenswrapper[4985]: I0127 09:14:03.232671 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-98b5966d-qnrcr"] Jan 27 09:14:03 crc kubenswrapper[4985]: I0127 09:14:03.234176 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-98b5966d-qnrcr" Jan 27 09:14:03 crc kubenswrapper[4985]: I0127 09:14:03.250566 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d33ce189-793b-4b46-b8af-f37059e4eacf-combined-ca-bundle\") pod \"placement-98b5966d-qnrcr\" (UID: \"d33ce189-793b-4b46-b8af-f37059e4eacf\") " pod="openstack/placement-98b5966d-qnrcr" Jan 27 09:14:03 crc kubenswrapper[4985]: I0127 09:14:03.250659 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d33ce189-793b-4b46-b8af-f37059e4eacf-internal-tls-certs\") pod \"placement-98b5966d-qnrcr\" (UID: \"d33ce189-793b-4b46-b8af-f37059e4eacf\") " pod="openstack/placement-98b5966d-qnrcr" Jan 27 09:14:03 crc kubenswrapper[4985]: I0127 09:14:03.250677 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d33ce189-793b-4b46-b8af-f37059e4eacf-public-tls-certs\") pod \"placement-98b5966d-qnrcr\" (UID: \"d33ce189-793b-4b46-b8af-f37059e4eacf\") " pod="openstack/placement-98b5966d-qnrcr" Jan 27 09:14:03 crc kubenswrapper[4985]: I0127 09:14:03.250703 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d33ce189-793b-4b46-b8af-f37059e4eacf-config-data\") pod \"placement-98b5966d-qnrcr\" (UID: \"d33ce189-793b-4b46-b8af-f37059e4eacf\") " pod="openstack/placement-98b5966d-qnrcr" Jan 27 09:14:03 crc kubenswrapper[4985]: I0127 09:14:03.250842 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d33ce189-793b-4b46-b8af-f37059e4eacf-logs\") pod \"placement-98b5966d-qnrcr\" (UID: 
\"d33ce189-793b-4b46-b8af-f37059e4eacf\") " pod="openstack/placement-98b5966d-qnrcr" Jan 27 09:14:03 crc kubenswrapper[4985]: I0127 09:14:03.250863 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d33ce189-793b-4b46-b8af-f37059e4eacf-scripts\") pod \"placement-98b5966d-qnrcr\" (UID: \"d33ce189-793b-4b46-b8af-f37059e4eacf\") " pod="openstack/placement-98b5966d-qnrcr" Jan 27 09:14:03 crc kubenswrapper[4985]: I0127 09:14:03.250943 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdvdn\" (UniqueName: \"kubernetes.io/projected/d33ce189-793b-4b46-b8af-f37059e4eacf-kube-api-access-cdvdn\") pod \"placement-98b5966d-qnrcr\" (UID: \"d33ce189-793b-4b46-b8af-f37059e4eacf\") " pod="openstack/placement-98b5966d-qnrcr" Jan 27 09:14:03 crc kubenswrapper[4985]: I0127 09:14:03.255352 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-98b5966d-qnrcr"] Jan 27 09:14:03 crc kubenswrapper[4985]: I0127 09:14:03.352952 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d33ce189-793b-4b46-b8af-f37059e4eacf-combined-ca-bundle\") pod \"placement-98b5966d-qnrcr\" (UID: \"d33ce189-793b-4b46-b8af-f37059e4eacf\") " pod="openstack/placement-98b5966d-qnrcr" Jan 27 09:14:03 crc kubenswrapper[4985]: I0127 09:14:03.353014 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d33ce189-793b-4b46-b8af-f37059e4eacf-public-tls-certs\") pod \"placement-98b5966d-qnrcr\" (UID: \"d33ce189-793b-4b46-b8af-f37059e4eacf\") " pod="openstack/placement-98b5966d-qnrcr" Jan 27 09:14:03 crc kubenswrapper[4985]: I0127 09:14:03.353031 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/d33ce189-793b-4b46-b8af-f37059e4eacf-internal-tls-certs\") pod \"placement-98b5966d-qnrcr\" (UID: \"d33ce189-793b-4b46-b8af-f37059e4eacf\") " pod="openstack/placement-98b5966d-qnrcr" Jan 27 09:14:03 crc kubenswrapper[4985]: I0127 09:14:03.353053 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d33ce189-793b-4b46-b8af-f37059e4eacf-config-data\") pod \"placement-98b5966d-qnrcr\" (UID: \"d33ce189-793b-4b46-b8af-f37059e4eacf\") " pod="openstack/placement-98b5966d-qnrcr" Jan 27 09:14:03 crc kubenswrapper[4985]: I0127 09:14:03.353113 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d33ce189-793b-4b46-b8af-f37059e4eacf-logs\") pod \"placement-98b5966d-qnrcr\" (UID: \"d33ce189-793b-4b46-b8af-f37059e4eacf\") " pod="openstack/placement-98b5966d-qnrcr" Jan 27 09:14:03 crc kubenswrapper[4985]: I0127 09:14:03.353134 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d33ce189-793b-4b46-b8af-f37059e4eacf-scripts\") pod \"placement-98b5966d-qnrcr\" (UID: \"d33ce189-793b-4b46-b8af-f37059e4eacf\") " pod="openstack/placement-98b5966d-qnrcr" Jan 27 09:14:03 crc kubenswrapper[4985]: I0127 09:14:03.353176 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdvdn\" (UniqueName: \"kubernetes.io/projected/d33ce189-793b-4b46-b8af-f37059e4eacf-kube-api-access-cdvdn\") pod \"placement-98b5966d-qnrcr\" (UID: \"d33ce189-793b-4b46-b8af-f37059e4eacf\") " pod="openstack/placement-98b5966d-qnrcr" Jan 27 09:14:03 crc kubenswrapper[4985]: I0127 09:14:03.354912 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d33ce189-793b-4b46-b8af-f37059e4eacf-logs\") pod \"placement-98b5966d-qnrcr\" (UID: 
\"d33ce189-793b-4b46-b8af-f37059e4eacf\") " pod="openstack/placement-98b5966d-qnrcr" Jan 27 09:14:03 crc kubenswrapper[4985]: I0127 09:14:03.359232 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d33ce189-793b-4b46-b8af-f37059e4eacf-scripts\") pod \"placement-98b5966d-qnrcr\" (UID: \"d33ce189-793b-4b46-b8af-f37059e4eacf\") " pod="openstack/placement-98b5966d-qnrcr" Jan 27 09:14:03 crc kubenswrapper[4985]: I0127 09:14:03.363304 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d33ce189-793b-4b46-b8af-f37059e4eacf-config-data\") pod \"placement-98b5966d-qnrcr\" (UID: \"d33ce189-793b-4b46-b8af-f37059e4eacf\") " pod="openstack/placement-98b5966d-qnrcr" Jan 27 09:14:03 crc kubenswrapper[4985]: I0127 09:14:03.363376 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d33ce189-793b-4b46-b8af-f37059e4eacf-combined-ca-bundle\") pod \"placement-98b5966d-qnrcr\" (UID: \"d33ce189-793b-4b46-b8af-f37059e4eacf\") " pod="openstack/placement-98b5966d-qnrcr" Jan 27 09:14:03 crc kubenswrapper[4985]: I0127 09:14:03.363828 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d33ce189-793b-4b46-b8af-f37059e4eacf-internal-tls-certs\") pod \"placement-98b5966d-qnrcr\" (UID: \"d33ce189-793b-4b46-b8af-f37059e4eacf\") " pod="openstack/placement-98b5966d-qnrcr" Jan 27 09:14:03 crc kubenswrapper[4985]: I0127 09:14:03.364113 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d33ce189-793b-4b46-b8af-f37059e4eacf-public-tls-certs\") pod \"placement-98b5966d-qnrcr\" (UID: \"d33ce189-793b-4b46-b8af-f37059e4eacf\") " pod="openstack/placement-98b5966d-qnrcr" Jan 27 09:14:03 crc kubenswrapper[4985]: I0127 09:14:03.379288 
4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdvdn\" (UniqueName: \"kubernetes.io/projected/d33ce189-793b-4b46-b8af-f37059e4eacf-kube-api-access-cdvdn\") pod \"placement-98b5966d-qnrcr\" (UID: \"d33ce189-793b-4b46-b8af-f37059e4eacf\") " pod="openstack/placement-98b5966d-qnrcr" Jan 27 09:14:03 crc kubenswrapper[4985]: I0127 09:14:03.421148 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-6798f6b777-jp82x" podUID="6aca7d18-9f0b-4c2e-aaef-39fb4d810616" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.181:9696/\": dial tcp 10.217.0.181:9696: connect: connection refused" Jan 27 09:14:03 crc kubenswrapper[4985]: I0127 09:14:03.609350 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-98b5966d-qnrcr" Jan 27 09:14:03 crc kubenswrapper[4985]: I0127 09:14:03.780307 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2548757-fd02-4c5a-9623-0b1148405dc9","Type":"ContainerStarted","Data":"42e35bff8fdccceaadc732a11fa0acf1c045ab3e220397def9ab3cab466646fa"} Jan 27 09:14:03 crc kubenswrapper[4985]: I0127 09:14:03.784812 4985 generic.go:334] "Generic (PLEG): container finished" podID="3193865d-81a4-4cb6-baee-7f44246f4caa" containerID="a67e828b20cd21bdb6367204032201ce9c5a8ade30c74546a82181c8f88b063f" exitCode=0 Jan 27 09:14:03 crc kubenswrapper[4985]: I0127 09:14:03.785143 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-689489568f-6ggjw" event={"ID":"3193865d-81a4-4cb6-baee-7f44246f4caa","Type":"ContainerDied","Data":"a67e828b20cd21bdb6367204032201ce9c5a8ade30c74546a82181c8f88b063f"} Jan 27 09:14:03 crc kubenswrapper[4985]: I0127 09:14:03.785244 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-689489568f-6ggjw" 
event={"ID":"3193865d-81a4-4cb6-baee-7f44246f4caa","Type":"ContainerDied","Data":"342718c9c057fc90aa6c429aff3b351bd0d359aa618f143a3adf6e3f1842c3f9"} Jan 27 09:14:03 crc kubenswrapper[4985]: I0127 09:14:03.785184 4985 generic.go:334] "Generic (PLEG): container finished" podID="3193865d-81a4-4cb6-baee-7f44246f4caa" containerID="342718c9c057fc90aa6c429aff3b351bd0d359aa618f143a3adf6e3f1842c3f9" exitCode=0 Jan 27 09:14:03 crc kubenswrapper[4985]: I0127 09:14:03.885534 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 09:14:04 crc kubenswrapper[4985]: I0127 09:14:04.104684 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-689489568f-6ggjw" Jan 27 09:14:04 crc kubenswrapper[4985]: I0127 09:14:04.248057 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-98b5966d-qnrcr"] Jan 27 09:14:04 crc kubenswrapper[4985]: I0127 09:14:04.283125 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3193865d-81a4-4cb6-baee-7f44246f4caa-config-data\") pod \"3193865d-81a4-4cb6-baee-7f44246f4caa\" (UID: \"3193865d-81a4-4cb6-baee-7f44246f4caa\") " Jan 27 09:14:04 crc kubenswrapper[4985]: I0127 09:14:04.283224 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3193865d-81a4-4cb6-baee-7f44246f4caa-public-tls-certs\") pod \"3193865d-81a4-4cb6-baee-7f44246f4caa\" (UID: \"3193865d-81a4-4cb6-baee-7f44246f4caa\") " Jan 27 09:14:04 crc kubenswrapper[4985]: I0127 09:14:04.283254 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3193865d-81a4-4cb6-baee-7f44246f4caa-internal-tls-certs\") pod \"3193865d-81a4-4cb6-baee-7f44246f4caa\" (UID: \"3193865d-81a4-4cb6-baee-7f44246f4caa\") " Jan 27 09:14:04 crc 
kubenswrapper[4985]: I0127 09:14:04.283332 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3193865d-81a4-4cb6-baee-7f44246f4caa-etc-swift\") pod \"3193865d-81a4-4cb6-baee-7f44246f4caa\" (UID: \"3193865d-81a4-4cb6-baee-7f44246f4caa\") " Jan 27 09:14:04 crc kubenswrapper[4985]: I0127 09:14:04.283388 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3193865d-81a4-4cb6-baee-7f44246f4caa-log-httpd\") pod \"3193865d-81a4-4cb6-baee-7f44246f4caa\" (UID: \"3193865d-81a4-4cb6-baee-7f44246f4caa\") " Jan 27 09:14:04 crc kubenswrapper[4985]: I0127 09:14:04.283419 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6czct\" (UniqueName: \"kubernetes.io/projected/3193865d-81a4-4cb6-baee-7f44246f4caa-kube-api-access-6czct\") pod \"3193865d-81a4-4cb6-baee-7f44246f4caa\" (UID: \"3193865d-81a4-4cb6-baee-7f44246f4caa\") " Jan 27 09:14:04 crc kubenswrapper[4985]: I0127 09:14:04.283448 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3193865d-81a4-4cb6-baee-7f44246f4caa-run-httpd\") pod \"3193865d-81a4-4cb6-baee-7f44246f4caa\" (UID: \"3193865d-81a4-4cb6-baee-7f44246f4caa\") " Jan 27 09:14:04 crc kubenswrapper[4985]: I0127 09:14:04.283494 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3193865d-81a4-4cb6-baee-7f44246f4caa-combined-ca-bundle\") pod \"3193865d-81a4-4cb6-baee-7f44246f4caa\" (UID: \"3193865d-81a4-4cb6-baee-7f44246f4caa\") " Jan 27 09:14:04 crc kubenswrapper[4985]: I0127 09:14:04.285157 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3193865d-81a4-4cb6-baee-7f44246f4caa-log-httpd" (OuterVolumeSpecName: "log-httpd") pod 
"3193865d-81a4-4cb6-baee-7f44246f4caa" (UID: "3193865d-81a4-4cb6-baee-7f44246f4caa"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:14:04 crc kubenswrapper[4985]: I0127 09:14:04.290797 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3193865d-81a4-4cb6-baee-7f44246f4caa-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "3193865d-81a4-4cb6-baee-7f44246f4caa" (UID: "3193865d-81a4-4cb6-baee-7f44246f4caa"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:14:04 crc kubenswrapper[4985]: I0127 09:14:04.291145 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3193865d-81a4-4cb6-baee-7f44246f4caa-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "3193865d-81a4-4cb6-baee-7f44246f4caa" (UID: "3193865d-81a4-4cb6-baee-7f44246f4caa"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:14:04 crc kubenswrapper[4985]: I0127 09:14:04.297702 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3193865d-81a4-4cb6-baee-7f44246f4caa-kube-api-access-6czct" (OuterVolumeSpecName: "kube-api-access-6czct") pod "3193865d-81a4-4cb6-baee-7f44246f4caa" (UID: "3193865d-81a4-4cb6-baee-7f44246f4caa"). InnerVolumeSpecName "kube-api-access-6czct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:14:04 crc kubenswrapper[4985]: I0127 09:14:04.385137 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3193865d-81a4-4cb6-baee-7f44246f4caa-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "3193865d-81a4-4cb6-baee-7f44246f4caa" (UID: "3193865d-81a4-4cb6-baee-7f44246f4caa"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:14:04 crc kubenswrapper[4985]: I0127 09:14:04.402432 4985 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3193865d-81a4-4cb6-baee-7f44246f4caa-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:04 crc kubenswrapper[4985]: I0127 09:14:04.402486 4985 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3193865d-81a4-4cb6-baee-7f44246f4caa-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:04 crc kubenswrapper[4985]: I0127 09:14:04.402498 4985 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3193865d-81a4-4cb6-baee-7f44246f4caa-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:04 crc kubenswrapper[4985]: I0127 09:14:04.402526 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6czct\" (UniqueName: \"kubernetes.io/projected/3193865d-81a4-4cb6-baee-7f44246f4caa-kube-api-access-6czct\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:04 crc kubenswrapper[4985]: I0127 09:14:04.402541 4985 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3193865d-81a4-4cb6-baee-7f44246f4caa-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:04 crc kubenswrapper[4985]: I0127 09:14:04.411879 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3193865d-81a4-4cb6-baee-7f44246f4caa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3193865d-81a4-4cb6-baee-7f44246f4caa" (UID: "3193865d-81a4-4cb6-baee-7f44246f4caa"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:14:04 crc kubenswrapper[4985]: I0127 09:14:04.419319 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3193865d-81a4-4cb6-baee-7f44246f4caa-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "3193865d-81a4-4cb6-baee-7f44246f4caa" (UID: "3193865d-81a4-4cb6-baee-7f44246f4caa"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:14:04 crc kubenswrapper[4985]: I0127 09:14:04.449669 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3193865d-81a4-4cb6-baee-7f44246f4caa-config-data" (OuterVolumeSpecName: "config-data") pod "3193865d-81a4-4cb6-baee-7f44246f4caa" (UID: "3193865d-81a4-4cb6-baee-7f44246f4caa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:14:04 crc kubenswrapper[4985]: I0127 09:14:04.505060 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3193865d-81a4-4cb6-baee-7f44246f4caa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:04 crc kubenswrapper[4985]: I0127 09:14:04.505096 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3193865d-81a4-4cb6-baee-7f44246f4caa-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:04 crc kubenswrapper[4985]: I0127 09:14:04.505106 4985 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3193865d-81a4-4cb6-baee-7f44246f4caa-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:04 crc kubenswrapper[4985]: I0127 09:14:04.609152 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-98b5966d-qnrcr"] Jan 27 09:14:04 crc kubenswrapper[4985]: I0127 09:14:04.645558 4985 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/placement-84f67698b-shkcs"] Jan 27 09:14:04 crc kubenswrapper[4985]: E0127 09:14:04.646908 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3193865d-81a4-4cb6-baee-7f44246f4caa" containerName="proxy-httpd" Jan 27 09:14:04 crc kubenswrapper[4985]: I0127 09:14:04.647027 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="3193865d-81a4-4cb6-baee-7f44246f4caa" containerName="proxy-httpd" Jan 27 09:14:04 crc kubenswrapper[4985]: E0127 09:14:04.647100 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3193865d-81a4-4cb6-baee-7f44246f4caa" containerName="proxy-server" Jan 27 09:14:04 crc kubenswrapper[4985]: I0127 09:14:04.648585 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="3193865d-81a4-4cb6-baee-7f44246f4caa" containerName="proxy-server" Jan 27 09:14:04 crc kubenswrapper[4985]: I0127 09:14:04.648953 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="3193865d-81a4-4cb6-baee-7f44246f4caa" containerName="proxy-httpd" Jan 27 09:14:04 crc kubenswrapper[4985]: I0127 09:14:04.649045 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="3193865d-81a4-4cb6-baee-7f44246f4caa" containerName="proxy-server" Jan 27 09:14:04 crc kubenswrapper[4985]: I0127 09:14:04.650347 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-84f67698b-shkcs" Jan 27 09:14:04 crc kubenswrapper[4985]: I0127 09:14:04.660322 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-84f67698b-shkcs"] Jan 27 09:14:04 crc kubenswrapper[4985]: I0127 09:14:04.708708 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09da2957-d13e-44db-b153-3fcbbbfeaad8-combined-ca-bundle\") pod \"placement-84f67698b-shkcs\" (UID: \"09da2957-d13e-44db-b153-3fcbbbfeaad8\") " pod="openstack/placement-84f67698b-shkcs" Jan 27 09:14:04 crc kubenswrapper[4985]: I0127 09:14:04.708758 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-888rk\" (UniqueName: \"kubernetes.io/projected/09da2957-d13e-44db-b153-3fcbbbfeaad8-kube-api-access-888rk\") pod \"placement-84f67698b-shkcs\" (UID: \"09da2957-d13e-44db-b153-3fcbbbfeaad8\") " pod="openstack/placement-84f67698b-shkcs" Jan 27 09:14:04 crc kubenswrapper[4985]: I0127 09:14:04.708797 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/09da2957-d13e-44db-b153-3fcbbbfeaad8-public-tls-certs\") pod \"placement-84f67698b-shkcs\" (UID: \"09da2957-d13e-44db-b153-3fcbbbfeaad8\") " pod="openstack/placement-84f67698b-shkcs" Jan 27 09:14:04 crc kubenswrapper[4985]: I0127 09:14:04.708814 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/09da2957-d13e-44db-b153-3fcbbbfeaad8-internal-tls-certs\") pod \"placement-84f67698b-shkcs\" (UID: \"09da2957-d13e-44db-b153-3fcbbbfeaad8\") " pod="openstack/placement-84f67698b-shkcs" Jan 27 09:14:04 crc kubenswrapper[4985]: I0127 09:14:04.708864 4985 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09da2957-d13e-44db-b153-3fcbbbfeaad8-config-data\") pod \"placement-84f67698b-shkcs\" (UID: \"09da2957-d13e-44db-b153-3fcbbbfeaad8\") " pod="openstack/placement-84f67698b-shkcs" Jan 27 09:14:04 crc kubenswrapper[4985]: I0127 09:14:04.708941 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09da2957-d13e-44db-b153-3fcbbbfeaad8-scripts\") pod \"placement-84f67698b-shkcs\" (UID: \"09da2957-d13e-44db-b153-3fcbbbfeaad8\") " pod="openstack/placement-84f67698b-shkcs" Jan 27 09:14:04 crc kubenswrapper[4985]: I0127 09:14:04.708982 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09da2957-d13e-44db-b153-3fcbbbfeaad8-logs\") pod \"placement-84f67698b-shkcs\" (UID: \"09da2957-d13e-44db-b153-3fcbbbfeaad8\") " pod="openstack/placement-84f67698b-shkcs" Jan 27 09:14:04 crc kubenswrapper[4985]: I0127 09:14:04.795445 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2548757-fd02-4c5a-9623-0b1148405dc9","Type":"ContainerStarted","Data":"0c35d836a38ab3c2a3d5010a228c5e71358bb0b7fa3001c8e7960fd7cce6534c"} Jan 27 09:14:04 crc kubenswrapper[4985]: I0127 09:14:04.798715 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-689489568f-6ggjw" event={"ID":"3193865d-81a4-4cb6-baee-7f44246f4caa","Type":"ContainerDied","Data":"64fcec0b6ec066d51fdddff72452a8eb5d2f8cd7bc267c5d7d3eb4e111e2c265"} Jan 27 09:14:04 crc kubenswrapper[4985]: I0127 09:14:04.798752 4985 scope.go:117] "RemoveContainer" containerID="a67e828b20cd21bdb6367204032201ce9c5a8ade30c74546a82181c8f88b063f" Jan 27 09:14:04 crc kubenswrapper[4985]: I0127 09:14:04.798753 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-689489568f-6ggjw" Jan 27 09:14:04 crc kubenswrapper[4985]: I0127 09:14:04.803713 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-98b5966d-qnrcr" event={"ID":"d33ce189-793b-4b46-b8af-f37059e4eacf","Type":"ContainerStarted","Data":"f389f7f9f6286389f7b17f945c2455c21aca22a02a9cc688c4f3619ef21921f5"} Jan 27 09:14:04 crc kubenswrapper[4985]: I0127 09:14:04.803752 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-98b5966d-qnrcr" event={"ID":"d33ce189-793b-4b46-b8af-f37059e4eacf","Type":"ContainerStarted","Data":"f86a187d2cc0677e8af1ffe448a59b7ec47568ef3798f542e9a5cb445e0feabb"} Jan 27 09:14:04 crc kubenswrapper[4985]: I0127 09:14:04.810958 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/09da2957-d13e-44db-b153-3fcbbbfeaad8-public-tls-certs\") pod \"placement-84f67698b-shkcs\" (UID: \"09da2957-d13e-44db-b153-3fcbbbfeaad8\") " pod="openstack/placement-84f67698b-shkcs" Jan 27 09:14:04 crc kubenswrapper[4985]: I0127 09:14:04.811002 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/09da2957-d13e-44db-b153-3fcbbbfeaad8-internal-tls-certs\") pod \"placement-84f67698b-shkcs\" (UID: \"09da2957-d13e-44db-b153-3fcbbbfeaad8\") " pod="openstack/placement-84f67698b-shkcs" Jan 27 09:14:04 crc kubenswrapper[4985]: I0127 09:14:04.811057 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09da2957-d13e-44db-b153-3fcbbbfeaad8-config-data\") pod \"placement-84f67698b-shkcs\" (UID: \"09da2957-d13e-44db-b153-3fcbbbfeaad8\") " pod="openstack/placement-84f67698b-shkcs" Jan 27 09:14:04 crc kubenswrapper[4985]: I0127 09:14:04.811083 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/09da2957-d13e-44db-b153-3fcbbbfeaad8-scripts\") pod \"placement-84f67698b-shkcs\" (UID: \"09da2957-d13e-44db-b153-3fcbbbfeaad8\") " pod="openstack/placement-84f67698b-shkcs" Jan 27 09:14:04 crc kubenswrapper[4985]: I0127 09:14:04.811126 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09da2957-d13e-44db-b153-3fcbbbfeaad8-logs\") pod \"placement-84f67698b-shkcs\" (UID: \"09da2957-d13e-44db-b153-3fcbbbfeaad8\") " pod="openstack/placement-84f67698b-shkcs" Jan 27 09:14:04 crc kubenswrapper[4985]: I0127 09:14:04.812359 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09da2957-d13e-44db-b153-3fcbbbfeaad8-combined-ca-bundle\") pod \"placement-84f67698b-shkcs\" (UID: \"09da2957-d13e-44db-b153-3fcbbbfeaad8\") " pod="openstack/placement-84f67698b-shkcs" Jan 27 09:14:04 crc kubenswrapper[4985]: I0127 09:14:04.812385 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-888rk\" (UniqueName: \"kubernetes.io/projected/09da2957-d13e-44db-b153-3fcbbbfeaad8-kube-api-access-888rk\") pod \"placement-84f67698b-shkcs\" (UID: \"09da2957-d13e-44db-b153-3fcbbbfeaad8\") " pod="openstack/placement-84f67698b-shkcs" Jan 27 09:14:04 crc kubenswrapper[4985]: I0127 09:14:04.813026 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09da2957-d13e-44db-b153-3fcbbbfeaad8-logs\") pod \"placement-84f67698b-shkcs\" (UID: \"09da2957-d13e-44db-b153-3fcbbbfeaad8\") " pod="openstack/placement-84f67698b-shkcs" Jan 27 09:14:04 crc kubenswrapper[4985]: I0127 09:14:04.820154 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/09da2957-d13e-44db-b153-3fcbbbfeaad8-public-tls-certs\") pod \"placement-84f67698b-shkcs\" 
(UID: \"09da2957-d13e-44db-b153-3fcbbbfeaad8\") " pod="openstack/placement-84f67698b-shkcs" Jan 27 09:14:04 crc kubenswrapper[4985]: I0127 09:14:04.821285 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09da2957-d13e-44db-b153-3fcbbbfeaad8-combined-ca-bundle\") pod \"placement-84f67698b-shkcs\" (UID: \"09da2957-d13e-44db-b153-3fcbbbfeaad8\") " pod="openstack/placement-84f67698b-shkcs" Jan 27 09:14:04 crc kubenswrapper[4985]: I0127 09:14:04.826934 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09da2957-d13e-44db-b153-3fcbbbfeaad8-scripts\") pod \"placement-84f67698b-shkcs\" (UID: \"09da2957-d13e-44db-b153-3fcbbbfeaad8\") " pod="openstack/placement-84f67698b-shkcs" Jan 27 09:14:04 crc kubenswrapper[4985]: I0127 09:14:04.827405 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/09da2957-d13e-44db-b153-3fcbbbfeaad8-internal-tls-certs\") pod \"placement-84f67698b-shkcs\" (UID: \"09da2957-d13e-44db-b153-3fcbbbfeaad8\") " pod="openstack/placement-84f67698b-shkcs" Jan 27 09:14:04 crc kubenswrapper[4985]: I0127 09:14:04.836948 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09da2957-d13e-44db-b153-3fcbbbfeaad8-config-data\") pod \"placement-84f67698b-shkcs\" (UID: \"09da2957-d13e-44db-b153-3fcbbbfeaad8\") " pod="openstack/placement-84f67698b-shkcs" Jan 27 09:14:04 crc kubenswrapper[4985]: I0127 09:14:04.839831 4985 scope.go:117] "RemoveContainer" containerID="342718c9c057fc90aa6c429aff3b351bd0d359aa618f143a3adf6e3f1842c3f9" Jan 27 09:14:04 crc kubenswrapper[4985]: I0127 09:14:04.842440 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-689489568f-6ggjw"] Jan 27 09:14:04 crc kubenswrapper[4985]: I0127 09:14:04.848217 4985 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-888rk\" (UniqueName: \"kubernetes.io/projected/09da2957-d13e-44db-b153-3fcbbbfeaad8-kube-api-access-888rk\") pod \"placement-84f67698b-shkcs\" (UID: \"09da2957-d13e-44db-b153-3fcbbbfeaad8\") " pod="openstack/placement-84f67698b-shkcs" Jan 27 09:14:04 crc kubenswrapper[4985]: I0127 09:14:04.855554 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-proxy-689489568f-6ggjw"] Jan 27 09:14:05 crc kubenswrapper[4985]: I0127 09:14:05.068896 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-84f67698b-shkcs" Jan 27 09:14:05 crc kubenswrapper[4985]: I0127 09:14:05.085075 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-f4789ffd8-wmhds" Jan 27 09:14:05 crc kubenswrapper[4985]: I0127 09:14:05.114419 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-8569774db7-5qrp6" Jan 27 09:14:05 crc kubenswrapper[4985]: I0127 09:14:05.210954 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7645cd55cc-6b9mt"] Jan 27 09:14:05 crc kubenswrapper[4985]: I0127 09:14:05.211390 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7645cd55cc-6b9mt" podUID="b0982e77-fbf8-4db6-a5b4-359ec47691b4" containerName="neutron-httpd" containerID="cri-o://8c3dcd1d0d5d358abcefbba501099238fc14669e39da1c4c65fba92cc89f8fbb" gracePeriod=30 Jan 27 09:14:05 crc kubenswrapper[4985]: I0127 09:14:05.211418 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7645cd55cc-6b9mt" podUID="b0982e77-fbf8-4db6-a5b4-359ec47691b4" containerName="neutron-api" containerID="cri-o://a316fbc33ca53f7064ef55db8e11b28a8f8c1807656d0b022ceca73db35dd0cd" gracePeriod=30 Jan 27 09:14:05 crc kubenswrapper[4985]: I0127 09:14:05.492351 4985 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack/barbican-api-f4789ffd8-wmhds" Jan 27 09:14:05 crc kubenswrapper[4985]: I0127 09:14:05.602236 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-d4789b966-88v9q"] Jan 27 09:14:05 crc kubenswrapper[4985]: I0127 09:14:05.602569 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-d4789b966-88v9q" podUID="cce884fa-873f-4a46-9caa-b8f88720db78" containerName="barbican-api-log" containerID="cri-o://98b4020986d2865733874c979c1de64770b25c9e9ac453f22ec06ab8b8c43c9e" gracePeriod=30 Jan 27 09:14:05 crc kubenswrapper[4985]: I0127 09:14:05.603505 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-d4789b966-88v9q" podUID="cce884fa-873f-4a46-9caa-b8f88720db78" containerName="barbican-api" containerID="cri-o://856ba4aeedb6e76a2132db42cdbb6a15e54145f06dd72ef137282857e68f28d3" gracePeriod=30 Jan 27 09:14:05 crc kubenswrapper[4985]: I0127 09:14:05.746803 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-84f67698b-shkcs"] Jan 27 09:14:05 crc kubenswrapper[4985]: I0127 09:14:05.871990 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-98b5966d-qnrcr" podUID="d33ce189-793b-4b46-b8af-f37059e4eacf" containerName="placement-log" containerID="cri-o://f389f7f9f6286389f7b17f945c2455c21aca22a02a9cc688c4f3619ef21921f5" gracePeriod=30 Jan 27 09:14:05 crc kubenswrapper[4985]: I0127 09:14:05.872330 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-98b5966d-qnrcr" podUID="d33ce189-793b-4b46-b8af-f37059e4eacf" containerName="placement-api" containerID="cri-o://f663dfd0a443534534530d87f63b3a11a42fd62834941e58ff8f00b46a9f8716" gracePeriod=30 Jan 27 09:14:05 crc kubenswrapper[4985]: I0127 09:14:05.872441 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/placement-98b5966d-qnrcr" event={"ID":"d33ce189-793b-4b46-b8af-f37059e4eacf","Type":"ContainerStarted","Data":"f663dfd0a443534534530d87f63b3a11a42fd62834941e58ff8f00b46a9f8716"} Jan 27 09:14:05 crc kubenswrapper[4985]: I0127 09:14:05.872474 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-98b5966d-qnrcr" Jan 27 09:14:05 crc kubenswrapper[4985]: I0127 09:14:05.872501 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-98b5966d-qnrcr" Jan 27 09:14:05 crc kubenswrapper[4985]: I0127 09:14:05.883606 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 09:14:05 crc kubenswrapper[4985]: I0127 09:14:05.883646 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-84f67698b-shkcs" event={"ID":"09da2957-d13e-44db-b153-3fcbbbfeaad8","Type":"ContainerStarted","Data":"1f246dd449b660fd799678459dae3b805a74f603b04961397b8f89e88d907869"} Jan 27 09:14:05 crc kubenswrapper[4985]: I0127 09:14:05.883671 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2548757-fd02-4c5a-9623-0b1148405dc9","Type":"ContainerStarted","Data":"018ff50df1a8da325a2f321cd3f0eb9ed962cc8efff0d1c891f883279e57dee8"} Jan 27 09:14:05 crc kubenswrapper[4985]: I0127 09:14:05.881372 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f2548757-fd02-4c5a-9623-0b1148405dc9" containerName="ceilometer-notification-agent" containerID="cri-o://42e35bff8fdccceaadc732a11fa0acf1c045ab3e220397def9ab3cab466646fa" gracePeriod=30 Jan 27 09:14:05 crc kubenswrapper[4985]: I0127 09:14:05.881282 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f2548757-fd02-4c5a-9623-0b1148405dc9" containerName="ceilometer-central-agent" 
containerID="cri-o://3df23e6d595b9fe13d34edf998e02c12278353678a0c975beedcd710e25bcf0a" gracePeriod=30 Jan 27 09:14:05 crc kubenswrapper[4985]: I0127 09:14:05.881343 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f2548757-fd02-4c5a-9623-0b1148405dc9" containerName="proxy-httpd" containerID="cri-o://018ff50df1a8da325a2f321cd3f0eb9ed962cc8efff0d1c891f883279e57dee8" gracePeriod=30 Jan 27 09:14:05 crc kubenswrapper[4985]: I0127 09:14:05.881361 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f2548757-fd02-4c5a-9623-0b1148405dc9" containerName="sg-core" containerID="cri-o://0c35d836a38ab3c2a3d5010a228c5e71358bb0b7fa3001c8e7960fd7cce6534c" gracePeriod=30 Jan 27 09:14:05 crc kubenswrapper[4985]: I0127 09:14:05.911338 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-98b5966d-qnrcr" podStartSLOduration=2.9113154420000003 podStartE2EDuration="2.911315442s" podCreationTimestamp="2026-01-27 09:14:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:14:05.900495176 +0000 UTC m=+1230.191590017" watchObservedRunningTime="2026-01-27 09:14:05.911315442 +0000 UTC m=+1230.202410303" Jan 27 09:14:05 crc kubenswrapper[4985]: I0127 09:14:05.915895 4985 generic.go:334] "Generic (PLEG): container finished" podID="cce884fa-873f-4a46-9caa-b8f88720db78" containerID="98b4020986d2865733874c979c1de64770b25c9e9ac453f22ec06ab8b8c43c9e" exitCode=143 Jan 27 09:14:05 crc kubenswrapper[4985]: I0127 09:14:05.915985 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-d4789b966-88v9q" event={"ID":"cce884fa-873f-4a46-9caa-b8f88720db78","Type":"ContainerDied","Data":"98b4020986d2865733874c979c1de64770b25c9e9ac453f22ec06ab8b8c43c9e"} Jan 27 09:14:05 crc kubenswrapper[4985]: I0127 09:14:05.944221 4985 
generic.go:334] "Generic (PLEG): container finished" podID="b0982e77-fbf8-4db6-a5b4-359ec47691b4" containerID="8c3dcd1d0d5d358abcefbba501099238fc14669e39da1c4c65fba92cc89f8fbb" exitCode=0 Jan 27 09:14:05 crc kubenswrapper[4985]: I0127 09:14:05.945081 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7645cd55cc-6b9mt" event={"ID":"b0982e77-fbf8-4db6-a5b4-359ec47691b4","Type":"ContainerDied","Data":"8c3dcd1d0d5d358abcefbba501099238fc14669e39da1c4c65fba92cc89f8fbb"} Jan 27 09:14:05 crc kubenswrapper[4985]: I0127 09:14:05.958977 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.328001997 podStartE2EDuration="5.958951158s" podCreationTimestamp="2026-01-27 09:14:00 +0000 UTC" firstStartedPulling="2026-01-27 09:14:01.726488191 +0000 UTC m=+1226.017583032" lastFinishedPulling="2026-01-27 09:14:05.357437352 +0000 UTC m=+1229.648532193" observedRunningTime="2026-01-27 09:14:05.940079981 +0000 UTC m=+1230.231174812" watchObservedRunningTime="2026-01-27 09:14:05.958951158 +0000 UTC m=+1230.250045999" Jan 27 09:14:06 crc kubenswrapper[4985]: W0127 09:14:06.105923 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod564992d4_5b88_4124_9cfa_8ee67386599d.slice/crio-1bbef3545834d79a0d9a02f05b68baab4f4fffee4eeaf5de5a1efc94a8f7b752.scope WatchSource:0}: Error finding container 1bbef3545834d79a0d9a02f05b68baab4f4fffee4eeaf5de5a1efc94a8f7b752: Status 404 returned error can't find the container with id 1bbef3545834d79a0d9a02f05b68baab4f4fffee4eeaf5de5a1efc94a8f7b752 Jan 27 09:14:06 crc kubenswrapper[4985]: W0127 09:14:06.114615 4985 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc3262de3_5394_485b_a572_14d824be6c29.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch 
/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc3262de3_5394_485b_a572_14d824be6c29.slice: no such file or directory Jan 27 09:14:06 crc kubenswrapper[4985]: W0127 09:14:06.119713 4985 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod48bcb21b_2fe6_4822_8536_e3575d036d90.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod48bcb21b_2fe6_4822_8536_e3575d036d90.slice: no such file or directory Jan 27 09:14:06 crc kubenswrapper[4985]: I0127 09:14:06.472139 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3193865d-81a4-4cb6-baee-7f44246f4caa" path="/var/lib/kubelet/pods/3193865d-81a4-4cb6-baee-7f44246f4caa/volumes" Jan 27 09:14:06 crc kubenswrapper[4985]: I0127 09:14:06.519494 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-98b5966d-qnrcr" Jan 27 09:14:06 crc kubenswrapper[4985]: I0127 09:14:06.568560 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d33ce189-793b-4b46-b8af-f37059e4eacf-combined-ca-bundle\") pod \"d33ce189-793b-4b46-b8af-f37059e4eacf\" (UID: \"d33ce189-793b-4b46-b8af-f37059e4eacf\") " Jan 27 09:14:06 crc kubenswrapper[4985]: I0127 09:14:06.568665 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d33ce189-793b-4b46-b8af-f37059e4eacf-logs\") pod \"d33ce189-793b-4b46-b8af-f37059e4eacf\" (UID: \"d33ce189-793b-4b46-b8af-f37059e4eacf\") " Jan 27 09:14:06 crc kubenswrapper[4985]: I0127 09:14:06.568700 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d33ce189-793b-4b46-b8af-f37059e4eacf-scripts\") pod 
\"d33ce189-793b-4b46-b8af-f37059e4eacf\" (UID: \"d33ce189-793b-4b46-b8af-f37059e4eacf\") " Jan 27 09:14:06 crc kubenswrapper[4985]: I0127 09:14:06.568794 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d33ce189-793b-4b46-b8af-f37059e4eacf-config-data\") pod \"d33ce189-793b-4b46-b8af-f37059e4eacf\" (UID: \"d33ce189-793b-4b46-b8af-f37059e4eacf\") " Jan 27 09:14:06 crc kubenswrapper[4985]: I0127 09:14:06.568838 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d33ce189-793b-4b46-b8af-f37059e4eacf-internal-tls-certs\") pod \"d33ce189-793b-4b46-b8af-f37059e4eacf\" (UID: \"d33ce189-793b-4b46-b8af-f37059e4eacf\") " Jan 27 09:14:06 crc kubenswrapper[4985]: I0127 09:14:06.568884 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d33ce189-793b-4b46-b8af-f37059e4eacf-public-tls-certs\") pod \"d33ce189-793b-4b46-b8af-f37059e4eacf\" (UID: \"d33ce189-793b-4b46-b8af-f37059e4eacf\") " Jan 27 09:14:06 crc kubenswrapper[4985]: I0127 09:14:06.568937 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdvdn\" (UniqueName: \"kubernetes.io/projected/d33ce189-793b-4b46-b8af-f37059e4eacf-kube-api-access-cdvdn\") pod \"d33ce189-793b-4b46-b8af-f37059e4eacf\" (UID: \"d33ce189-793b-4b46-b8af-f37059e4eacf\") " Jan 27 09:14:06 crc kubenswrapper[4985]: I0127 09:14:06.571461 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d33ce189-793b-4b46-b8af-f37059e4eacf-logs" (OuterVolumeSpecName: "logs") pod "d33ce189-793b-4b46-b8af-f37059e4eacf" (UID: "d33ce189-793b-4b46-b8af-f37059e4eacf"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:14:06 crc kubenswrapper[4985]: I0127 09:14:06.580273 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d33ce189-793b-4b46-b8af-f37059e4eacf-scripts" (OuterVolumeSpecName: "scripts") pod "d33ce189-793b-4b46-b8af-f37059e4eacf" (UID: "d33ce189-793b-4b46-b8af-f37059e4eacf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:14:06 crc kubenswrapper[4985]: I0127 09:14:06.587856 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d33ce189-793b-4b46-b8af-f37059e4eacf-kube-api-access-cdvdn" (OuterVolumeSpecName: "kube-api-access-cdvdn") pod "d33ce189-793b-4b46-b8af-f37059e4eacf" (UID: "d33ce189-793b-4b46-b8af-f37059e4eacf"). InnerVolumeSpecName "kube-api-access-cdvdn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:14:06 crc kubenswrapper[4985]: I0127 09:14:06.674874 4985 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d33ce189-793b-4b46-b8af-f37059e4eacf-logs\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:06 crc kubenswrapper[4985]: I0127 09:14:06.674909 4985 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d33ce189-793b-4b46-b8af-f37059e4eacf-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:06 crc kubenswrapper[4985]: I0127 09:14:06.674919 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cdvdn\" (UniqueName: \"kubernetes.io/projected/d33ce189-793b-4b46-b8af-f37059e4eacf-kube-api-access-cdvdn\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:06 crc kubenswrapper[4985]: I0127 09:14:06.733611 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d33ce189-793b-4b46-b8af-f37059e4eacf-config-data" (OuterVolumeSpecName: "config-data") pod 
"d33ce189-793b-4b46-b8af-f37059e4eacf" (UID: "d33ce189-793b-4b46-b8af-f37059e4eacf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:14:06 crc kubenswrapper[4985]: I0127 09:14:06.766530 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6798f6b777-jp82x_6aca7d18-9f0b-4c2e-aaef-39fb4d810616/neutron-api/0.log" Jan 27 09:14:06 crc kubenswrapper[4985]: I0127 09:14:06.766640 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6798f6b777-jp82x" Jan 27 09:14:06 crc kubenswrapper[4985]: I0127 09:14:06.776385 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6aca7d18-9f0b-4c2e-aaef-39fb4d810616-config\") pod \"6aca7d18-9f0b-4c2e-aaef-39fb4d810616\" (UID: \"6aca7d18-9f0b-4c2e-aaef-39fb4d810616\") " Jan 27 09:14:06 crc kubenswrapper[4985]: I0127 09:14:06.776458 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5kp58\" (UniqueName: \"kubernetes.io/projected/6aca7d18-9f0b-4c2e-aaef-39fb4d810616-kube-api-access-5kp58\") pod \"6aca7d18-9f0b-4c2e-aaef-39fb4d810616\" (UID: \"6aca7d18-9f0b-4c2e-aaef-39fb4d810616\") " Jan 27 09:14:06 crc kubenswrapper[4985]: I0127 09:14:06.776611 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6aca7d18-9f0b-4c2e-aaef-39fb4d810616-internal-tls-certs\") pod \"6aca7d18-9f0b-4c2e-aaef-39fb4d810616\" (UID: \"6aca7d18-9f0b-4c2e-aaef-39fb4d810616\") " Jan 27 09:14:06 crc kubenswrapper[4985]: I0127 09:14:06.776703 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6aca7d18-9f0b-4c2e-aaef-39fb4d810616-httpd-config\") pod \"6aca7d18-9f0b-4c2e-aaef-39fb4d810616\" (UID: \"6aca7d18-9f0b-4c2e-aaef-39fb4d810616\") " Jan 27 
09:14:06 crc kubenswrapper[4985]: I0127 09:14:06.776827 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6aca7d18-9f0b-4c2e-aaef-39fb4d810616-public-tls-certs\") pod \"6aca7d18-9f0b-4c2e-aaef-39fb4d810616\" (UID: \"6aca7d18-9f0b-4c2e-aaef-39fb4d810616\") " Jan 27 09:14:06 crc kubenswrapper[4985]: I0127 09:14:06.776861 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6aca7d18-9f0b-4c2e-aaef-39fb4d810616-ovndb-tls-certs\") pod \"6aca7d18-9f0b-4c2e-aaef-39fb4d810616\" (UID: \"6aca7d18-9f0b-4c2e-aaef-39fb4d810616\") " Jan 27 09:14:06 crc kubenswrapper[4985]: I0127 09:14:06.776894 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6aca7d18-9f0b-4c2e-aaef-39fb4d810616-combined-ca-bundle\") pod \"6aca7d18-9f0b-4c2e-aaef-39fb4d810616\" (UID: \"6aca7d18-9f0b-4c2e-aaef-39fb4d810616\") " Jan 27 09:14:06 crc kubenswrapper[4985]: I0127 09:14:06.777543 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d33ce189-793b-4b46-b8af-f37059e4eacf-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:06 crc kubenswrapper[4985]: I0127 09:14:06.777674 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d33ce189-793b-4b46-b8af-f37059e4eacf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d33ce189-793b-4b46-b8af-f37059e4eacf" (UID: "d33ce189-793b-4b46-b8af-f37059e4eacf"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:14:06 crc kubenswrapper[4985]: I0127 09:14:06.791739 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6aca7d18-9f0b-4c2e-aaef-39fb4d810616-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "6aca7d18-9f0b-4c2e-aaef-39fb4d810616" (UID: "6aca7d18-9f0b-4c2e-aaef-39fb4d810616"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:14:06 crc kubenswrapper[4985]: I0127 09:14:06.800438 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6aca7d18-9f0b-4c2e-aaef-39fb4d810616-kube-api-access-5kp58" (OuterVolumeSpecName: "kube-api-access-5kp58") pod "6aca7d18-9f0b-4c2e-aaef-39fb4d810616" (UID: "6aca7d18-9f0b-4c2e-aaef-39fb4d810616"). InnerVolumeSpecName "kube-api-access-5kp58". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:14:06 crc kubenswrapper[4985]: I0127 09:14:06.806313 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d33ce189-793b-4b46-b8af-f37059e4eacf-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "d33ce189-793b-4b46-b8af-f37059e4eacf" (UID: "d33ce189-793b-4b46-b8af-f37059e4eacf"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:14:06 crc kubenswrapper[4985]: I0127 09:14:06.858415 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d33ce189-793b-4b46-b8af-f37059e4eacf-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "d33ce189-793b-4b46-b8af-f37059e4eacf" (UID: "d33ce189-793b-4b46-b8af-f37059e4eacf"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:14:06 crc kubenswrapper[4985]: I0127 09:14:06.877627 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6aca7d18-9f0b-4c2e-aaef-39fb4d810616-config" (OuterVolumeSpecName: "config") pod "6aca7d18-9f0b-4c2e-aaef-39fb4d810616" (UID: "6aca7d18-9f0b-4c2e-aaef-39fb4d810616"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:14:06 crc kubenswrapper[4985]: I0127 09:14:06.878935 4985 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6aca7d18-9f0b-4c2e-aaef-39fb4d810616-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:06 crc kubenswrapper[4985]: I0127 09:14:06.878957 4985 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d33ce189-793b-4b46-b8af-f37059e4eacf-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:06 crc kubenswrapper[4985]: I0127 09:14:06.878966 4985 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d33ce189-793b-4b46-b8af-f37059e4eacf-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:06 crc kubenswrapper[4985]: I0127 09:14:06.878975 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/6aca7d18-9f0b-4c2e-aaef-39fb4d810616-config\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:06 crc kubenswrapper[4985]: I0127 09:14:06.878985 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5kp58\" (UniqueName: \"kubernetes.io/projected/6aca7d18-9f0b-4c2e-aaef-39fb4d810616-kube-api-access-5kp58\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:06 crc kubenswrapper[4985]: I0127 09:14:06.878993 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d33ce189-793b-4b46-b8af-f37059e4eacf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:06 crc kubenswrapper[4985]: I0127 09:14:06.907543 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6aca7d18-9f0b-4c2e-aaef-39fb4d810616-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "6aca7d18-9f0b-4c2e-aaef-39fb4d810616" (UID: "6aca7d18-9f0b-4c2e-aaef-39fb4d810616"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:14:06 crc kubenswrapper[4985]: I0127 09:14:06.923581 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6aca7d18-9f0b-4c2e-aaef-39fb4d810616-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "6aca7d18-9f0b-4c2e-aaef-39fb4d810616" (UID: "6aca7d18-9f0b-4c2e-aaef-39fb4d810616"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:14:06 crc kubenswrapper[4985]: I0127 09:14:06.939616 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6aca7d18-9f0b-4c2e-aaef-39fb4d810616-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6aca7d18-9f0b-4c2e-aaef-39fb4d810616" (UID: "6aca7d18-9f0b-4c2e-aaef-39fb4d810616"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:14:06 crc kubenswrapper[4985]: I0127 09:14:06.958410 4985 generic.go:334] "Generic (PLEG): container finished" podID="d33ce189-793b-4b46-b8af-f37059e4eacf" containerID="f663dfd0a443534534530d87f63b3a11a42fd62834941e58ff8f00b46a9f8716" exitCode=0 Jan 27 09:14:06 crc kubenswrapper[4985]: I0127 09:14:06.958443 4985 generic.go:334] "Generic (PLEG): container finished" podID="d33ce189-793b-4b46-b8af-f37059e4eacf" containerID="f389f7f9f6286389f7b17f945c2455c21aca22a02a9cc688c4f3619ef21921f5" exitCode=143 Jan 27 09:14:06 crc kubenswrapper[4985]: I0127 09:14:06.958487 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-98b5966d-qnrcr" event={"ID":"d33ce189-793b-4b46-b8af-f37059e4eacf","Type":"ContainerDied","Data":"f663dfd0a443534534530d87f63b3a11a42fd62834941e58ff8f00b46a9f8716"} Jan 27 09:14:06 crc kubenswrapper[4985]: I0127 09:14:06.958532 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-98b5966d-qnrcr" event={"ID":"d33ce189-793b-4b46-b8af-f37059e4eacf","Type":"ContainerDied","Data":"f389f7f9f6286389f7b17f945c2455c21aca22a02a9cc688c4f3619ef21921f5"} Jan 27 09:14:06 crc kubenswrapper[4985]: I0127 09:14:06.958542 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-98b5966d-qnrcr" Jan 27 09:14:06 crc kubenswrapper[4985]: I0127 09:14:06.958559 4985 scope.go:117] "RemoveContainer" containerID="f663dfd0a443534534530d87f63b3a11a42fd62834941e58ff8f00b46a9f8716" Jan 27 09:14:06 crc kubenswrapper[4985]: I0127 09:14:06.958547 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-98b5966d-qnrcr" event={"ID":"d33ce189-793b-4b46-b8af-f37059e4eacf","Type":"ContainerDied","Data":"f86a187d2cc0677e8af1ffe448a59b7ec47568ef3798f542e9a5cb445e0feabb"} Jan 27 09:14:06 crc kubenswrapper[4985]: I0127 09:14:06.960823 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-84f67698b-shkcs" event={"ID":"09da2957-d13e-44db-b153-3fcbbbfeaad8","Type":"ContainerStarted","Data":"6a7f4095cf0285aab2ad96291e2c68070525aa9c10b5c80beb72054b5c692457"} Jan 27 09:14:06 crc kubenswrapper[4985]: I0127 09:14:06.960857 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-84f67698b-shkcs" event={"ID":"09da2957-d13e-44db-b153-3fcbbbfeaad8","Type":"ContainerStarted","Data":"f2dd50d32d00f5bf87db171fd5750d85d8576d8ed6994ad4d03f3f0db446cc84"} Jan 27 09:14:06 crc kubenswrapper[4985]: I0127 09:14:06.960905 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-84f67698b-shkcs" Jan 27 09:14:06 crc kubenswrapper[4985]: I0127 09:14:06.960928 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-84f67698b-shkcs" Jan 27 09:14:06 crc kubenswrapper[4985]: I0127 09:14:06.972011 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6aca7d18-9f0b-4c2e-aaef-39fb4d810616-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "6aca7d18-9f0b-4c2e-aaef-39fb4d810616" (UID: "6aca7d18-9f0b-4c2e-aaef-39fb4d810616"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:14:06 crc kubenswrapper[4985]: I0127 09:14:06.974502 4985 generic.go:334] "Generic (PLEG): container finished" podID="f2548757-fd02-4c5a-9623-0b1148405dc9" containerID="0c35d836a38ab3c2a3d5010a228c5e71358bb0b7fa3001c8e7960fd7cce6534c" exitCode=2 Jan 27 09:14:06 crc kubenswrapper[4985]: I0127 09:14:06.975125 4985 generic.go:334] "Generic (PLEG): container finished" podID="f2548757-fd02-4c5a-9623-0b1148405dc9" containerID="42e35bff8fdccceaadc732a11fa0acf1c045ab3e220397def9ab3cab466646fa" exitCode=0 Jan 27 09:14:06 crc kubenswrapper[4985]: I0127 09:14:06.975298 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2548757-fd02-4c5a-9623-0b1148405dc9","Type":"ContainerDied","Data":"0c35d836a38ab3c2a3d5010a228c5e71358bb0b7fa3001c8e7960fd7cce6534c"} Jan 27 09:14:06 crc kubenswrapper[4985]: I0127 09:14:06.975410 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2548757-fd02-4c5a-9623-0b1148405dc9","Type":"ContainerDied","Data":"42e35bff8fdccceaadc732a11fa0acf1c045ab3e220397def9ab3cab466646fa"} Jan 27 09:14:06 crc kubenswrapper[4985]: I0127 09:14:06.982443 4985 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6aca7d18-9f0b-4c2e-aaef-39fb4d810616-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:06 crc kubenswrapper[4985]: I0127 09:14:06.982880 4985 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6aca7d18-9f0b-4c2e-aaef-39fb4d810616-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:06 crc kubenswrapper[4985]: I0127 09:14:06.982982 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-84f67698b-shkcs" podStartSLOduration=2.982970183 podStartE2EDuration="2.982970183s" podCreationTimestamp="2026-01-27 09:14:04 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:14:06.982229973 +0000 UTC m=+1231.273324834" watchObservedRunningTime="2026-01-27 09:14:06.982970183 +0000 UTC m=+1231.274065024" Jan 27 09:14:06 crc kubenswrapper[4985]: I0127 09:14:06.983010 4985 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6aca7d18-9f0b-4c2e-aaef-39fb4d810616-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:06 crc kubenswrapper[4985]: I0127 09:14:06.983249 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6aca7d18-9f0b-4c2e-aaef-39fb4d810616-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:06 crc kubenswrapper[4985]: I0127 09:14:06.993315 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6798f6b777-jp82x_6aca7d18-9f0b-4c2e-aaef-39fb4d810616/neutron-api/0.log" Jan 27 09:14:06 crc kubenswrapper[4985]: I0127 09:14:06.993402 4985 generic.go:334] "Generic (PLEG): container finished" podID="6aca7d18-9f0b-4c2e-aaef-39fb4d810616" containerID="e38232413fcc780697c66509c6733bffa1731aeb08ce4af28b66aeadf7fac0a3" exitCode=137 Jan 27 09:14:06 crc kubenswrapper[4985]: I0127 09:14:06.993461 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6798f6b777-jp82x" event={"ID":"6aca7d18-9f0b-4c2e-aaef-39fb4d810616","Type":"ContainerDied","Data":"e38232413fcc780697c66509c6733bffa1731aeb08ce4af28b66aeadf7fac0a3"} Jan 27 09:14:06 crc kubenswrapper[4985]: I0127 09:14:06.993540 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6798f6b777-jp82x" event={"ID":"6aca7d18-9f0b-4c2e-aaef-39fb4d810616","Type":"ContainerDied","Data":"398afcc8404ab8d60022d42429b67bbbd486def4f72390c4cd38172e262adfb8"} Jan 27 09:14:06 crc kubenswrapper[4985]: I0127 09:14:06.993654 4985 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openstack/neutron-6798f6b777-jp82x" Jan 27 09:14:07 crc kubenswrapper[4985]: I0127 09:14:07.015083 4985 scope.go:117] "RemoveContainer" containerID="f389f7f9f6286389f7b17f945c2455c21aca22a02a9cc688c4f3619ef21921f5" Jan 27 09:14:07 crc kubenswrapper[4985]: I0127 09:14:07.019592 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-98b5966d-qnrcr"] Jan 27 09:14:07 crc kubenswrapper[4985]: I0127 09:14:07.048575 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-98b5966d-qnrcr"] Jan 27 09:14:07 crc kubenswrapper[4985]: I0127 09:14:07.060607 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6798f6b777-jp82x"] Jan 27 09:14:07 crc kubenswrapper[4985]: I0127 09:14:07.077786 4985 scope.go:117] "RemoveContainer" containerID="f663dfd0a443534534530d87f63b3a11a42fd62834941e58ff8f00b46a9f8716" Jan 27 09:14:07 crc kubenswrapper[4985]: I0127 09:14:07.078010 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6798f6b777-jp82x"] Jan 27 09:14:07 crc kubenswrapper[4985]: E0127 09:14:07.082688 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f663dfd0a443534534530d87f63b3a11a42fd62834941e58ff8f00b46a9f8716\": container with ID starting with f663dfd0a443534534530d87f63b3a11a42fd62834941e58ff8f00b46a9f8716 not found: ID does not exist" containerID="f663dfd0a443534534530d87f63b3a11a42fd62834941e58ff8f00b46a9f8716" Jan 27 09:14:07 crc kubenswrapper[4985]: I0127 09:14:07.082722 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f663dfd0a443534534530d87f63b3a11a42fd62834941e58ff8f00b46a9f8716"} err="failed to get container status \"f663dfd0a443534534530d87f63b3a11a42fd62834941e58ff8f00b46a9f8716\": rpc error: code = NotFound desc = could not find container 
\"f663dfd0a443534534530d87f63b3a11a42fd62834941e58ff8f00b46a9f8716\": container with ID starting with f663dfd0a443534534530d87f63b3a11a42fd62834941e58ff8f00b46a9f8716 not found: ID does not exist" Jan 27 09:14:07 crc kubenswrapper[4985]: I0127 09:14:07.082753 4985 scope.go:117] "RemoveContainer" containerID="f389f7f9f6286389f7b17f945c2455c21aca22a02a9cc688c4f3619ef21921f5" Jan 27 09:14:07 crc kubenswrapper[4985]: E0127 09:14:07.086935 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f389f7f9f6286389f7b17f945c2455c21aca22a02a9cc688c4f3619ef21921f5\": container with ID starting with f389f7f9f6286389f7b17f945c2455c21aca22a02a9cc688c4f3619ef21921f5 not found: ID does not exist" containerID="f389f7f9f6286389f7b17f945c2455c21aca22a02a9cc688c4f3619ef21921f5" Jan 27 09:14:07 crc kubenswrapper[4985]: I0127 09:14:07.086970 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f389f7f9f6286389f7b17f945c2455c21aca22a02a9cc688c4f3619ef21921f5"} err="failed to get container status \"f389f7f9f6286389f7b17f945c2455c21aca22a02a9cc688c4f3619ef21921f5\": rpc error: code = NotFound desc = could not find container \"f389f7f9f6286389f7b17f945c2455c21aca22a02a9cc688c4f3619ef21921f5\": container with ID starting with f389f7f9f6286389f7b17f945c2455c21aca22a02a9cc688c4f3619ef21921f5 not found: ID does not exist" Jan 27 09:14:07 crc kubenswrapper[4985]: I0127 09:14:07.086992 4985 scope.go:117] "RemoveContainer" containerID="f663dfd0a443534534530d87f63b3a11a42fd62834941e58ff8f00b46a9f8716" Jan 27 09:14:07 crc kubenswrapper[4985]: I0127 09:14:07.087374 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f663dfd0a443534534530d87f63b3a11a42fd62834941e58ff8f00b46a9f8716"} err="failed to get container status \"f663dfd0a443534534530d87f63b3a11a42fd62834941e58ff8f00b46a9f8716\": rpc error: code = NotFound desc = could not find 
container \"f663dfd0a443534534530d87f63b3a11a42fd62834941e58ff8f00b46a9f8716\": container with ID starting with f663dfd0a443534534530d87f63b3a11a42fd62834941e58ff8f00b46a9f8716 not found: ID does not exist" Jan 27 09:14:07 crc kubenswrapper[4985]: I0127 09:14:07.087389 4985 scope.go:117] "RemoveContainer" containerID="f389f7f9f6286389f7b17f945c2455c21aca22a02a9cc688c4f3619ef21921f5" Jan 27 09:14:07 crc kubenswrapper[4985]: I0127 09:14:07.087815 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f389f7f9f6286389f7b17f945c2455c21aca22a02a9cc688c4f3619ef21921f5"} err="failed to get container status \"f389f7f9f6286389f7b17f945c2455c21aca22a02a9cc688c4f3619ef21921f5\": rpc error: code = NotFound desc = could not find container \"f389f7f9f6286389f7b17f945c2455c21aca22a02a9cc688c4f3619ef21921f5\": container with ID starting with f389f7f9f6286389f7b17f945c2455c21aca22a02a9cc688c4f3619ef21921f5 not found: ID does not exist" Jan 27 09:14:07 crc kubenswrapper[4985]: I0127 09:14:07.087835 4985 scope.go:117] "RemoveContainer" containerID="70a9fdad84f1f00183990ebacd4f548beefe632a4bbf29b9306ce472a2cc0453" Jan 27 09:14:07 crc kubenswrapper[4985]: I0127 09:14:07.109370 4985 scope.go:117] "RemoveContainer" containerID="e38232413fcc780697c66509c6733bffa1731aeb08ce4af28b66aeadf7fac0a3" Jan 27 09:14:07 crc kubenswrapper[4985]: I0127 09:14:07.143274 4985 scope.go:117] "RemoveContainer" containerID="70a9fdad84f1f00183990ebacd4f548beefe632a4bbf29b9306ce472a2cc0453" Jan 27 09:14:07 crc kubenswrapper[4985]: E0127 09:14:07.144182 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70a9fdad84f1f00183990ebacd4f548beefe632a4bbf29b9306ce472a2cc0453\": container with ID starting with 70a9fdad84f1f00183990ebacd4f548beefe632a4bbf29b9306ce472a2cc0453 not found: ID does not exist" containerID="70a9fdad84f1f00183990ebacd4f548beefe632a4bbf29b9306ce472a2cc0453" Jan 27 09:14:07 
crc kubenswrapper[4985]: I0127 09:14:07.144214 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70a9fdad84f1f00183990ebacd4f548beefe632a4bbf29b9306ce472a2cc0453"} err="failed to get container status \"70a9fdad84f1f00183990ebacd4f548beefe632a4bbf29b9306ce472a2cc0453\": rpc error: code = NotFound desc = could not find container \"70a9fdad84f1f00183990ebacd4f548beefe632a4bbf29b9306ce472a2cc0453\": container with ID starting with 70a9fdad84f1f00183990ebacd4f548beefe632a4bbf29b9306ce472a2cc0453 not found: ID does not exist" Jan 27 09:14:07 crc kubenswrapper[4985]: I0127 09:14:07.144235 4985 scope.go:117] "RemoveContainer" containerID="e38232413fcc780697c66509c6733bffa1731aeb08ce4af28b66aeadf7fac0a3" Jan 27 09:14:07 crc kubenswrapper[4985]: E0127 09:14:07.148071 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e38232413fcc780697c66509c6733bffa1731aeb08ce4af28b66aeadf7fac0a3\": container with ID starting with e38232413fcc780697c66509c6733bffa1731aeb08ce4af28b66aeadf7fac0a3 not found: ID does not exist" containerID="e38232413fcc780697c66509c6733bffa1731aeb08ce4af28b66aeadf7fac0a3" Jan 27 09:14:07 crc kubenswrapper[4985]: I0127 09:14:07.148100 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e38232413fcc780697c66509c6733bffa1731aeb08ce4af28b66aeadf7fac0a3"} err="failed to get container status \"e38232413fcc780697c66509c6733bffa1731aeb08ce4af28b66aeadf7fac0a3\": rpc error: code = NotFound desc = could not find container \"e38232413fcc780697c66509c6733bffa1731aeb08ce4af28b66aeadf7fac0a3\": container with ID starting with e38232413fcc780697c66509c6733bffa1731aeb08ce4af28b66aeadf7fac0a3 not found: ID does not exist" Jan 27 09:14:08 crc kubenswrapper[4985]: I0127 09:14:08.462767 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6aca7d18-9f0b-4c2e-aaef-39fb4d810616" 
path="/var/lib/kubelet/pods/6aca7d18-9f0b-4c2e-aaef-39fb4d810616/volumes" Jan 27 09:14:08 crc kubenswrapper[4985]: I0127 09:14:08.463741 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d33ce189-793b-4b46-b8af-f37059e4eacf" path="/var/lib/kubelet/pods/d33ce189-793b-4b46-b8af-f37059e4eacf/volumes" Jan 27 09:14:08 crc kubenswrapper[4985]: I0127 09:14:08.787920 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-d4789b966-88v9q" podUID="cce884fa-873f-4a46-9caa-b8f88720db78" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.166:9311/healthcheck\": read tcp 10.217.0.2:51586->10.217.0.166:9311: read: connection reset by peer" Jan 27 09:14:08 crc kubenswrapper[4985]: I0127 09:14:08.787925 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-d4789b966-88v9q" podUID="cce884fa-873f-4a46-9caa-b8f88720db78" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.166:9311/healthcheck\": read tcp 10.217.0.2:51582->10.217.0.166:9311: read: connection reset by peer" Jan 27 09:14:09 crc kubenswrapper[4985]: I0127 09:14:09.017575 4985 generic.go:334] "Generic (PLEG): container finished" podID="488cf0d5-caf5-4a7c-966c-233b758c0dcd" containerID="799387dc31912b7f540637f30187a0048a0b6cc03a987f98cded780ca2ce066c" exitCode=0 Jan 27 09:14:09 crc kubenswrapper[4985]: I0127 09:14:09.017612 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-dbnnn" event={"ID":"488cf0d5-caf5-4a7c-966c-233b758c0dcd","Type":"ContainerDied","Data":"799387dc31912b7f540637f30187a0048a0b6cc03a987f98cded780ca2ce066c"} Jan 27 09:14:09 crc kubenswrapper[4985]: I0127 09:14:09.024334 4985 generic.go:334] "Generic (PLEG): container finished" podID="cce884fa-873f-4a46-9caa-b8f88720db78" containerID="856ba4aeedb6e76a2132db42cdbb6a15e54145f06dd72ef137282857e68f28d3" exitCode=0 Jan 27 09:14:09 crc kubenswrapper[4985]: I0127 
09:14:09.024384 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-d4789b966-88v9q" event={"ID":"cce884fa-873f-4a46-9caa-b8f88720db78","Type":"ContainerDied","Data":"856ba4aeedb6e76a2132db42cdbb6a15e54145f06dd72ef137282857e68f28d3"} Jan 27 09:14:09 crc kubenswrapper[4985]: I0127 09:14:09.278035 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-d4789b966-88v9q" Jan 27 09:14:09 crc kubenswrapper[4985]: I0127 09:14:09.456987 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cce884fa-873f-4a46-9caa-b8f88720db78-public-tls-certs\") pod \"cce884fa-873f-4a46-9caa-b8f88720db78\" (UID: \"cce884fa-873f-4a46-9caa-b8f88720db78\") " Jan 27 09:14:09 crc kubenswrapper[4985]: I0127 09:14:09.457681 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cce884fa-873f-4a46-9caa-b8f88720db78-combined-ca-bundle\") pod \"cce884fa-873f-4a46-9caa-b8f88720db78\" (UID: \"cce884fa-873f-4a46-9caa-b8f88720db78\") " Jan 27 09:14:09 crc kubenswrapper[4985]: I0127 09:14:09.458182 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cce884fa-873f-4a46-9caa-b8f88720db78-internal-tls-certs\") pod \"cce884fa-873f-4a46-9caa-b8f88720db78\" (UID: \"cce884fa-873f-4a46-9caa-b8f88720db78\") " Jan 27 09:14:09 crc kubenswrapper[4985]: I0127 09:14:09.458282 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cce884fa-873f-4a46-9caa-b8f88720db78-config-data-custom\") pod \"cce884fa-873f-4a46-9caa-b8f88720db78\" (UID: \"cce884fa-873f-4a46-9caa-b8f88720db78\") " Jan 27 09:14:09 crc kubenswrapper[4985]: I0127 09:14:09.458455 4985 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cce884fa-873f-4a46-9caa-b8f88720db78-logs\") pod \"cce884fa-873f-4a46-9caa-b8f88720db78\" (UID: \"cce884fa-873f-4a46-9caa-b8f88720db78\") " Jan 27 09:14:09 crc kubenswrapper[4985]: I0127 09:14:09.458557 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cce884fa-873f-4a46-9caa-b8f88720db78-config-data\") pod \"cce884fa-873f-4a46-9caa-b8f88720db78\" (UID: \"cce884fa-873f-4a46-9caa-b8f88720db78\") " Jan 27 09:14:09 crc kubenswrapper[4985]: I0127 09:14:09.458616 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hgv9v\" (UniqueName: \"kubernetes.io/projected/cce884fa-873f-4a46-9caa-b8f88720db78-kube-api-access-hgv9v\") pod \"cce884fa-873f-4a46-9caa-b8f88720db78\" (UID: \"cce884fa-873f-4a46-9caa-b8f88720db78\") " Jan 27 09:14:09 crc kubenswrapper[4985]: I0127 09:14:09.459263 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cce884fa-873f-4a46-9caa-b8f88720db78-logs" (OuterVolumeSpecName: "logs") pod "cce884fa-873f-4a46-9caa-b8f88720db78" (UID: "cce884fa-873f-4a46-9caa-b8f88720db78"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:14:09 crc kubenswrapper[4985]: I0127 09:14:09.459469 4985 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cce884fa-873f-4a46-9caa-b8f88720db78-logs\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:09 crc kubenswrapper[4985]: I0127 09:14:09.465310 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cce884fa-873f-4a46-9caa-b8f88720db78-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "cce884fa-873f-4a46-9caa-b8f88720db78" (UID: "cce884fa-873f-4a46-9caa-b8f88720db78"). 
InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:14:09 crc kubenswrapper[4985]: I0127 09:14:09.467535 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cce884fa-873f-4a46-9caa-b8f88720db78-kube-api-access-hgv9v" (OuterVolumeSpecName: "kube-api-access-hgv9v") pod "cce884fa-873f-4a46-9caa-b8f88720db78" (UID: "cce884fa-873f-4a46-9caa-b8f88720db78"). InnerVolumeSpecName "kube-api-access-hgv9v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:14:09 crc kubenswrapper[4985]: I0127 09:14:09.492086 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cce884fa-873f-4a46-9caa-b8f88720db78-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cce884fa-873f-4a46-9caa-b8f88720db78" (UID: "cce884fa-873f-4a46-9caa-b8f88720db78"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:14:09 crc kubenswrapper[4985]: I0127 09:14:09.517609 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cce884fa-873f-4a46-9caa-b8f88720db78-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "cce884fa-873f-4a46-9caa-b8f88720db78" (UID: "cce884fa-873f-4a46-9caa-b8f88720db78"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:14:09 crc kubenswrapper[4985]: I0127 09:14:09.521694 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cce884fa-873f-4a46-9caa-b8f88720db78-config-data" (OuterVolumeSpecName: "config-data") pod "cce884fa-873f-4a46-9caa-b8f88720db78" (UID: "cce884fa-873f-4a46-9caa-b8f88720db78"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:14:09 crc kubenswrapper[4985]: I0127 09:14:09.525734 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cce884fa-873f-4a46-9caa-b8f88720db78-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "cce884fa-873f-4a46-9caa-b8f88720db78" (UID: "cce884fa-873f-4a46-9caa-b8f88720db78"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:14:09 crc kubenswrapper[4985]: I0127 09:14:09.561824 4985 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cce884fa-873f-4a46-9caa-b8f88720db78-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:09 crc kubenswrapper[4985]: I0127 09:14:09.561860 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cce884fa-873f-4a46-9caa-b8f88720db78-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:09 crc kubenswrapper[4985]: I0127 09:14:09.561876 4985 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cce884fa-873f-4a46-9caa-b8f88720db78-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:09 crc kubenswrapper[4985]: I0127 09:14:09.561888 4985 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cce884fa-873f-4a46-9caa-b8f88720db78-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:09 crc kubenswrapper[4985]: I0127 09:14:09.561903 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cce884fa-873f-4a46-9caa-b8f88720db78-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:09 crc kubenswrapper[4985]: I0127 09:14:09.561918 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hgv9v\" 
(UniqueName: \"kubernetes.io/projected/cce884fa-873f-4a46-9caa-b8f88720db78-kube-api-access-hgv9v\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:10 crc kubenswrapper[4985]: I0127 09:14:10.042382 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-d4789b966-88v9q" event={"ID":"cce884fa-873f-4a46-9caa-b8f88720db78","Type":"ContainerDied","Data":"6d1d1e0016dacb548c92a954f746ccc28b374a4a3ed34e3fa3ce1e58c3230d52"} Jan 27 09:14:10 crc kubenswrapper[4985]: I0127 09:14:10.042446 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-d4789b966-88v9q" Jan 27 09:14:10 crc kubenswrapper[4985]: I0127 09:14:10.042491 4985 scope.go:117] "RemoveContainer" containerID="856ba4aeedb6e76a2132db42cdbb6a15e54145f06dd72ef137282857e68f28d3" Jan 27 09:14:10 crc kubenswrapper[4985]: I0127 09:14:10.089425 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-d4789b966-88v9q"] Jan 27 09:14:10 crc kubenswrapper[4985]: I0127 09:14:10.098650 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-d4789b966-88v9q"] Jan 27 09:14:10 crc kubenswrapper[4985]: I0127 09:14:10.101789 4985 scope.go:117] "RemoveContainer" containerID="98b4020986d2865733874c979c1de64770b25c9e9ac453f22ec06ab8b8c43c9e" Jan 27 09:14:10 crc kubenswrapper[4985]: I0127 09:14:10.424691 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-dbnnn" Jan 27 09:14:10 crc kubenswrapper[4985]: I0127 09:14:10.469909 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cce884fa-873f-4a46-9caa-b8f88720db78" path="/var/lib/kubelet/pods/cce884fa-873f-4a46-9caa-b8f88720db78/volumes" Jan 27 09:14:10 crc kubenswrapper[4985]: I0127 09:14:10.486385 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/488cf0d5-caf5-4a7c-966c-233b758c0dcd-combined-ca-bundle\") pod \"488cf0d5-caf5-4a7c-966c-233b758c0dcd\" (UID: \"488cf0d5-caf5-4a7c-966c-233b758c0dcd\") " Jan 27 09:14:10 crc kubenswrapper[4985]: I0127 09:14:10.486648 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/488cf0d5-caf5-4a7c-966c-233b758c0dcd-config-data\") pod \"488cf0d5-caf5-4a7c-966c-233b758c0dcd\" (UID: \"488cf0d5-caf5-4a7c-966c-233b758c0dcd\") " Jan 27 09:14:10 crc kubenswrapper[4985]: I0127 09:14:10.486800 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cf54l\" (UniqueName: \"kubernetes.io/projected/488cf0d5-caf5-4a7c-966c-233b758c0dcd-kube-api-access-cf54l\") pod \"488cf0d5-caf5-4a7c-966c-233b758c0dcd\" (UID: \"488cf0d5-caf5-4a7c-966c-233b758c0dcd\") " Jan 27 09:14:10 crc kubenswrapper[4985]: I0127 09:14:10.487710 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/488cf0d5-caf5-4a7c-966c-233b758c0dcd-scripts\") pod \"488cf0d5-caf5-4a7c-966c-233b758c0dcd\" (UID: \"488cf0d5-caf5-4a7c-966c-233b758c0dcd\") " Jan 27 09:14:10 crc kubenswrapper[4985]: I0127 09:14:10.491498 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/488cf0d5-caf5-4a7c-966c-233b758c0dcd-scripts" (OuterVolumeSpecName: "scripts") pod 
"488cf0d5-caf5-4a7c-966c-233b758c0dcd" (UID: "488cf0d5-caf5-4a7c-966c-233b758c0dcd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:14:10 crc kubenswrapper[4985]: I0127 09:14:10.492162 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/488cf0d5-caf5-4a7c-966c-233b758c0dcd-kube-api-access-cf54l" (OuterVolumeSpecName: "kube-api-access-cf54l") pod "488cf0d5-caf5-4a7c-966c-233b758c0dcd" (UID: "488cf0d5-caf5-4a7c-966c-233b758c0dcd"). InnerVolumeSpecName "kube-api-access-cf54l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:14:10 crc kubenswrapper[4985]: I0127 09:14:10.518304 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/488cf0d5-caf5-4a7c-966c-233b758c0dcd-config-data" (OuterVolumeSpecName: "config-data") pod "488cf0d5-caf5-4a7c-966c-233b758c0dcd" (UID: "488cf0d5-caf5-4a7c-966c-233b758c0dcd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:14:10 crc kubenswrapper[4985]: I0127 09:14:10.521000 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/488cf0d5-caf5-4a7c-966c-233b758c0dcd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "488cf0d5-caf5-4a7c-966c-233b758c0dcd" (UID: "488cf0d5-caf5-4a7c-966c-233b758c0dcd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:14:10 crc kubenswrapper[4985]: I0127 09:14:10.589417 4985 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/488cf0d5-caf5-4a7c-966c-233b758c0dcd-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:10 crc kubenswrapper[4985]: I0127 09:14:10.589777 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/488cf0d5-caf5-4a7c-966c-233b758c0dcd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:10 crc kubenswrapper[4985]: I0127 09:14:10.589911 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/488cf0d5-caf5-4a7c-966c-233b758c0dcd-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:10 crc kubenswrapper[4985]: I0127 09:14:10.589999 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cf54l\" (UniqueName: \"kubernetes.io/projected/488cf0d5-caf5-4a7c-966c-233b758c0dcd-kube-api-access-cf54l\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:11 crc kubenswrapper[4985]: I0127 09:14:11.054999 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-dbnnn" event={"ID":"488cf0d5-caf5-4a7c-966c-233b758c0dcd","Type":"ContainerDied","Data":"7ead77a08a069bd538aa4fa04b6108aebbff021d06386d6d778dc43e5b8ef56f"} Jan 27 09:14:11 crc kubenswrapper[4985]: I0127 09:14:11.056531 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ead77a08a069bd538aa4fa04b6108aebbff021d06386d6d778dc43e5b8ef56f" Jan 27 09:14:11 crc kubenswrapper[4985]: I0127 09:14:11.055075 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-dbnnn" Jan 27 09:14:11 crc kubenswrapper[4985]: I0127 09:14:11.146056 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 27 09:14:11 crc kubenswrapper[4985]: E0127 09:14:11.146802 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d33ce189-793b-4b46-b8af-f37059e4eacf" containerName="placement-log" Jan 27 09:14:11 crc kubenswrapper[4985]: I0127 09:14:11.146901 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="d33ce189-793b-4b46-b8af-f37059e4eacf" containerName="placement-log" Jan 27 09:14:11 crc kubenswrapper[4985]: E0127 09:14:11.146984 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6aca7d18-9f0b-4c2e-aaef-39fb4d810616" containerName="neutron-httpd" Jan 27 09:14:11 crc kubenswrapper[4985]: I0127 09:14:11.147067 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="6aca7d18-9f0b-4c2e-aaef-39fb4d810616" containerName="neutron-httpd" Jan 27 09:14:11 crc kubenswrapper[4985]: E0127 09:14:11.147132 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cce884fa-873f-4a46-9caa-b8f88720db78" containerName="barbican-api-log" Jan 27 09:14:11 crc kubenswrapper[4985]: I0127 09:14:11.147191 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="cce884fa-873f-4a46-9caa-b8f88720db78" containerName="barbican-api-log" Jan 27 09:14:11 crc kubenswrapper[4985]: E0127 09:14:11.147270 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6aca7d18-9f0b-4c2e-aaef-39fb4d810616" containerName="neutron-api" Jan 27 09:14:11 crc kubenswrapper[4985]: I0127 09:14:11.147326 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="6aca7d18-9f0b-4c2e-aaef-39fb4d810616" containerName="neutron-api" Jan 27 09:14:11 crc kubenswrapper[4985]: E0127 09:14:11.147391 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d33ce189-793b-4b46-b8af-f37059e4eacf" 
containerName="placement-api" Jan 27 09:14:11 crc kubenswrapper[4985]: I0127 09:14:11.147444 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="d33ce189-793b-4b46-b8af-f37059e4eacf" containerName="placement-api" Jan 27 09:14:11 crc kubenswrapper[4985]: E0127 09:14:11.147536 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cce884fa-873f-4a46-9caa-b8f88720db78" containerName="barbican-api" Jan 27 09:14:11 crc kubenswrapper[4985]: I0127 09:14:11.147592 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="cce884fa-873f-4a46-9caa-b8f88720db78" containerName="barbican-api" Jan 27 09:14:11 crc kubenswrapper[4985]: E0127 09:14:11.147651 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="488cf0d5-caf5-4a7c-966c-233b758c0dcd" containerName="nova-cell0-conductor-db-sync" Jan 27 09:14:11 crc kubenswrapper[4985]: I0127 09:14:11.147707 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="488cf0d5-caf5-4a7c-966c-233b758c0dcd" containerName="nova-cell0-conductor-db-sync" Jan 27 09:14:11 crc kubenswrapper[4985]: I0127 09:14:11.148004 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="488cf0d5-caf5-4a7c-966c-233b758c0dcd" containerName="nova-cell0-conductor-db-sync" Jan 27 09:14:11 crc kubenswrapper[4985]: I0127 09:14:11.148085 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="6aca7d18-9f0b-4c2e-aaef-39fb4d810616" containerName="neutron-api" Jan 27 09:14:11 crc kubenswrapper[4985]: I0127 09:14:11.148169 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="d33ce189-793b-4b46-b8af-f37059e4eacf" containerName="placement-api" Jan 27 09:14:11 crc kubenswrapper[4985]: I0127 09:14:11.148237 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="d33ce189-793b-4b46-b8af-f37059e4eacf" containerName="placement-log" Jan 27 09:14:11 crc kubenswrapper[4985]: I0127 09:14:11.148316 4985 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="cce884fa-873f-4a46-9caa-b8f88720db78" containerName="barbican-api-log" Jan 27 09:14:11 crc kubenswrapper[4985]: I0127 09:14:11.148384 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="6aca7d18-9f0b-4c2e-aaef-39fb4d810616" containerName="neutron-httpd" Jan 27 09:14:11 crc kubenswrapper[4985]: I0127 09:14:11.148452 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="cce884fa-873f-4a46-9caa-b8f88720db78" containerName="barbican-api" Jan 27 09:14:11 crc kubenswrapper[4985]: I0127 09:14:11.149224 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 27 09:14:11 crc kubenswrapper[4985]: I0127 09:14:11.151655 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 27 09:14:11 crc kubenswrapper[4985]: I0127 09:14:11.155536 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-bvf4t" Jan 27 09:14:11 crc kubenswrapper[4985]: I0127 09:14:11.172563 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 27 09:14:11 crc kubenswrapper[4985]: I0127 09:14:11.306839 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e004edc1-a270-47e3-a299-3f798588eb34-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"e004edc1-a270-47e3-a299-3f798588eb34\") " pod="openstack/nova-cell0-conductor-0" Jan 27 09:14:11 crc kubenswrapper[4985]: I0127 09:14:11.306909 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e004edc1-a270-47e3-a299-3f798588eb34-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"e004edc1-a270-47e3-a299-3f798588eb34\") " pod="openstack/nova-cell0-conductor-0" Jan 27 09:14:11 crc kubenswrapper[4985]: I0127 
09:14:11.307963 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zc4q\" (UniqueName: \"kubernetes.io/projected/e004edc1-a270-47e3-a299-3f798588eb34-kube-api-access-7zc4q\") pod \"nova-cell0-conductor-0\" (UID: \"e004edc1-a270-47e3-a299-3f798588eb34\") " pod="openstack/nova-cell0-conductor-0" Jan 27 09:14:11 crc kubenswrapper[4985]: I0127 09:14:11.410044 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zc4q\" (UniqueName: \"kubernetes.io/projected/e004edc1-a270-47e3-a299-3f798588eb34-kube-api-access-7zc4q\") pod \"nova-cell0-conductor-0\" (UID: \"e004edc1-a270-47e3-a299-3f798588eb34\") " pod="openstack/nova-cell0-conductor-0" Jan 27 09:14:11 crc kubenswrapper[4985]: I0127 09:14:11.410349 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e004edc1-a270-47e3-a299-3f798588eb34-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"e004edc1-a270-47e3-a299-3f798588eb34\") " pod="openstack/nova-cell0-conductor-0" Jan 27 09:14:11 crc kubenswrapper[4985]: I0127 09:14:11.410478 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e004edc1-a270-47e3-a299-3f798588eb34-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"e004edc1-a270-47e3-a299-3f798588eb34\") " pod="openstack/nova-cell0-conductor-0" Jan 27 09:14:11 crc kubenswrapper[4985]: I0127 09:14:11.415266 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e004edc1-a270-47e3-a299-3f798588eb34-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"e004edc1-a270-47e3-a299-3f798588eb34\") " pod="openstack/nova-cell0-conductor-0" Jan 27 09:14:11 crc kubenswrapper[4985]: I0127 09:14:11.423820 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e004edc1-a270-47e3-a299-3f798588eb34-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"e004edc1-a270-47e3-a299-3f798588eb34\") " pod="openstack/nova-cell0-conductor-0" Jan 27 09:14:11 crc kubenswrapper[4985]: I0127 09:14:11.427739 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zc4q\" (UniqueName: \"kubernetes.io/projected/e004edc1-a270-47e3-a299-3f798588eb34-kube-api-access-7zc4q\") pod \"nova-cell0-conductor-0\" (UID: \"e004edc1-a270-47e3-a299-3f798588eb34\") " pod="openstack/nova-cell0-conductor-0" Jan 27 09:14:11 crc kubenswrapper[4985]: I0127 09:14:11.468441 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 27 09:14:11 crc kubenswrapper[4985]: I0127 09:14:11.828342 4985 patch_prober.go:28] interesting pod/machine-config-daemon-lp9n5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 09:14:11 crc kubenswrapper[4985]: I0127 09:14:11.828680 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 09:14:11 crc kubenswrapper[4985]: I0127 09:14:11.914403 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 27 09:14:12 crc kubenswrapper[4985]: I0127 09:14:12.067227 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" 
event={"ID":"e004edc1-a270-47e3-a299-3f798588eb34","Type":"ContainerStarted","Data":"6a7ac416c60d96c5f43f6693f7b21dad275061881b7118b03cbbbb6de5653ca8"} Jan 27 09:14:13 crc kubenswrapper[4985]: I0127 09:14:13.078448 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"e004edc1-a270-47e3-a299-3f798588eb34","Type":"ContainerStarted","Data":"f76d4cab022328560c37b13e931f450bb87fd57d67e2ba446fb15ea0f2b44e6f"} Jan 27 09:14:13 crc kubenswrapper[4985]: I0127 09:14:13.078813 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 27 09:14:13 crc kubenswrapper[4985]: I0127 09:14:13.107958 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.107913103 podStartE2EDuration="2.107913103s" podCreationTimestamp="2026-01-27 09:14:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:14:13.0997728 +0000 UTC m=+1237.390867661" watchObservedRunningTime="2026-01-27 09:14:13.107913103 +0000 UTC m=+1237.399007954" Jan 27 09:14:14 crc kubenswrapper[4985]: W0127 09:14:14.154838 4985 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod488cf0d5_caf5_4a7c_966c_233b758c0dcd.slice/crio-conmon-799387dc31912b7f540637f30187a0048a0b6cc03a987f98cded780ca2ce066c.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod488cf0d5_caf5_4a7c_966c_233b758c0dcd.slice/crio-conmon-799387dc31912b7f540637f30187a0048a0b6cc03a987f98cded780ca2ce066c.scope: no such file or directory Jan 27 09:14:14 crc kubenswrapper[4985]: W0127 09:14:14.155461 4985 watcher.go:93] Error while processing event 
("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod488cf0d5_caf5_4a7c_966c_233b758c0dcd.slice/crio-799387dc31912b7f540637f30187a0048a0b6cc03a987f98cded780ca2ce066c.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod488cf0d5_caf5_4a7c_966c_233b758c0dcd.slice/crio-799387dc31912b7f540637f30187a0048a0b6cc03a987f98cded780ca2ce066c.scope: no such file or directory Jan 27 09:14:14 crc kubenswrapper[4985]: W0127 09:14:14.179613 4985 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd33ce189_793b_4b46_b8af_f37059e4eacf.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd33ce189_793b_4b46_b8af_f37059e4eacf.slice: no such file or directory Jan 27 09:14:14 crc kubenswrapper[4985]: E0127 09:14:14.362420 4985 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd7d78ce_005f_4c67_9204_5030a19420e2.slice/crio-9cde7cfa6efaa8703f76824b483ce813cbebf258f3881c79c3ad1d2ed4098215\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7c55baf3_752e_40a7_acdd_d26df561bf9c.slice/crio-6ae396bbef9791a483b51f2954792792029ca9aac9d07fc90781cf0d3b35165e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd7d78ce_005f_4c67_9204_5030a19420e2.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd7d78ce_005f_4c67_9204_5030a19420e2.slice/crio-ad45e248b765e63b3b93c4a7ad2b343fc62733757eb3b4156c46e63d7d01eb12.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7c55baf3_752e_40a7_acdd_d26df561bf9c.slice/crio-conmon-6ae396bbef9791a483b51f2954792792029ca9aac9d07fc90781cf0d3b35165e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7c55baf3_752e_40a7_acdd_d26df561bf9c.slice/crio-conmon-9061df571495be79876a29751e086f7d0288618c89245c3a4e8941cfa84b1253.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7c55baf3_752e_40a7_acdd_d26df561bf9c.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd7d78ce_005f_4c67_9204_5030a19420e2.slice/crio-conmon-856684a913bcd437de4dc925fa7f23219e2a55a1e0d5b8265d43c52ba0a66e9b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7c55baf3_752e_40a7_acdd_d26df561bf9c.slice/crio-9061df571495be79876a29751e086f7d0288618c89245c3a4e8941cfa84b1253.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7c55baf3_752e_40a7_acdd_d26df561bf9c.slice/crio-982ff01c68f332ecc4d1ae9e811b55f90dc92f635e1ae47abe6119e93f41e269\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd7d78ce_005f_4c67_9204_5030a19420e2.slice/crio-conmon-ad45e248b765e63b3b93c4a7ad2b343fc62733757eb3b4156c46e63d7d01eb12.scope\": RecentStats: unable to find data in memory cache]" Jan 27 09:14:14 crc kubenswrapper[4985]: I0127 09:14:14.603326 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7645cd55cc-6b9mt" Jan 27 09:14:14 crc kubenswrapper[4985]: I0127 09:14:14.768935 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0982e77-fbf8-4db6-a5b4-359ec47691b4-combined-ca-bundle\") pod \"b0982e77-fbf8-4db6-a5b4-359ec47691b4\" (UID: \"b0982e77-fbf8-4db6-a5b4-359ec47691b4\") " Jan 27 09:14:14 crc kubenswrapper[4985]: I0127 09:14:14.769041 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b0982e77-fbf8-4db6-a5b4-359ec47691b4-config\") pod \"b0982e77-fbf8-4db6-a5b4-359ec47691b4\" (UID: \"b0982e77-fbf8-4db6-a5b4-359ec47691b4\") " Jan 27 09:14:14 crc kubenswrapper[4985]: I0127 09:14:14.769086 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0982e77-fbf8-4db6-a5b4-359ec47691b4-ovndb-tls-certs\") pod \"b0982e77-fbf8-4db6-a5b4-359ec47691b4\" (UID: \"b0982e77-fbf8-4db6-a5b4-359ec47691b4\") " Jan 27 09:14:14 crc kubenswrapper[4985]: I0127 09:14:14.769176 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b0982e77-fbf8-4db6-a5b4-359ec47691b4-httpd-config\") pod \"b0982e77-fbf8-4db6-a5b4-359ec47691b4\" (UID: \"b0982e77-fbf8-4db6-a5b4-359ec47691b4\") " Jan 27 09:14:14 crc kubenswrapper[4985]: I0127 09:14:14.770998 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0982e77-fbf8-4db6-a5b4-359ec47691b4-public-tls-certs\") pod \"b0982e77-fbf8-4db6-a5b4-359ec47691b4\" (UID: \"b0982e77-fbf8-4db6-a5b4-359ec47691b4\") " Jan 27 09:14:14 crc kubenswrapper[4985]: I0127 09:14:14.771053 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/b0982e77-fbf8-4db6-a5b4-359ec47691b4-internal-tls-certs\") pod \"b0982e77-fbf8-4db6-a5b4-359ec47691b4\" (UID: \"b0982e77-fbf8-4db6-a5b4-359ec47691b4\") " Jan 27 09:14:14 crc kubenswrapper[4985]: I0127 09:14:14.771103 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9x95\" (UniqueName: \"kubernetes.io/projected/b0982e77-fbf8-4db6-a5b4-359ec47691b4-kube-api-access-g9x95\") pod \"b0982e77-fbf8-4db6-a5b4-359ec47691b4\" (UID: \"b0982e77-fbf8-4db6-a5b4-359ec47691b4\") " Jan 27 09:14:14 crc kubenswrapper[4985]: I0127 09:14:14.775936 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0982e77-fbf8-4db6-a5b4-359ec47691b4-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "b0982e77-fbf8-4db6-a5b4-359ec47691b4" (UID: "b0982e77-fbf8-4db6-a5b4-359ec47691b4"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:14:14 crc kubenswrapper[4985]: I0127 09:14:14.779223 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0982e77-fbf8-4db6-a5b4-359ec47691b4-kube-api-access-g9x95" (OuterVolumeSpecName: "kube-api-access-g9x95") pod "b0982e77-fbf8-4db6-a5b4-359ec47691b4" (UID: "b0982e77-fbf8-4db6-a5b4-359ec47691b4"). InnerVolumeSpecName "kube-api-access-g9x95". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:14:14 crc kubenswrapper[4985]: I0127 09:14:14.824390 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0982e77-fbf8-4db6-a5b4-359ec47691b4-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "b0982e77-fbf8-4db6-a5b4-359ec47691b4" (UID: "b0982e77-fbf8-4db6-a5b4-359ec47691b4"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:14:14 crc kubenswrapper[4985]: I0127 09:14:14.831742 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0982e77-fbf8-4db6-a5b4-359ec47691b4-config" (OuterVolumeSpecName: "config") pod "b0982e77-fbf8-4db6-a5b4-359ec47691b4" (UID: "b0982e77-fbf8-4db6-a5b4-359ec47691b4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:14:14 crc kubenswrapper[4985]: I0127 09:14:14.840323 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0982e77-fbf8-4db6-a5b4-359ec47691b4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b0982e77-fbf8-4db6-a5b4-359ec47691b4" (UID: "b0982e77-fbf8-4db6-a5b4-359ec47691b4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:14:14 crc kubenswrapper[4985]: I0127 09:14:14.855286 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0982e77-fbf8-4db6-a5b4-359ec47691b4-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "b0982e77-fbf8-4db6-a5b4-359ec47691b4" (UID: "b0982e77-fbf8-4db6-a5b4-359ec47691b4"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:14:14 crc kubenswrapper[4985]: I0127 09:14:14.868609 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0982e77-fbf8-4db6-a5b4-359ec47691b4-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "b0982e77-fbf8-4db6-a5b4-359ec47691b4" (UID: "b0982e77-fbf8-4db6-a5b4-359ec47691b4"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:14:14 crc kubenswrapper[4985]: I0127 09:14:14.874081 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/b0982e77-fbf8-4db6-a5b4-359ec47691b4-config\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:14 crc kubenswrapper[4985]: I0127 09:14:14.874113 4985 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0982e77-fbf8-4db6-a5b4-359ec47691b4-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:14 crc kubenswrapper[4985]: I0127 09:14:14.874126 4985 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b0982e77-fbf8-4db6-a5b4-359ec47691b4-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:14 crc kubenswrapper[4985]: I0127 09:14:14.874135 4985 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0982e77-fbf8-4db6-a5b4-359ec47691b4-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:14 crc kubenswrapper[4985]: I0127 09:14:14.874144 4985 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0982e77-fbf8-4db6-a5b4-359ec47691b4-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:14 crc kubenswrapper[4985]: I0127 09:14:14.874154 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9x95\" (UniqueName: \"kubernetes.io/projected/b0982e77-fbf8-4db6-a5b4-359ec47691b4-kube-api-access-g9x95\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:14 crc kubenswrapper[4985]: I0127 09:14:14.874163 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0982e77-fbf8-4db6-a5b4-359ec47691b4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:15 crc kubenswrapper[4985]: I0127 09:14:15.097500 4985 
generic.go:334] "Generic (PLEG): container finished" podID="b0982e77-fbf8-4db6-a5b4-359ec47691b4" containerID="a316fbc33ca53f7064ef55db8e11b28a8f8c1807656d0b022ceca73db35dd0cd" exitCode=0 Jan 27 09:14:15 crc kubenswrapper[4985]: I0127 09:14:15.097575 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7645cd55cc-6b9mt" Jan 27 09:14:15 crc kubenswrapper[4985]: I0127 09:14:15.097601 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7645cd55cc-6b9mt" event={"ID":"b0982e77-fbf8-4db6-a5b4-359ec47691b4","Type":"ContainerDied","Data":"a316fbc33ca53f7064ef55db8e11b28a8f8c1807656d0b022ceca73db35dd0cd"} Jan 27 09:14:15 crc kubenswrapper[4985]: I0127 09:14:15.097633 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7645cd55cc-6b9mt" event={"ID":"b0982e77-fbf8-4db6-a5b4-359ec47691b4","Type":"ContainerDied","Data":"881926c884e4b59a8470adc6f282fe3b29f997f0e06358519d2b0f597fa15f12"} Jan 27 09:14:15 crc kubenswrapper[4985]: I0127 09:14:15.097651 4985 scope.go:117] "RemoveContainer" containerID="8c3dcd1d0d5d358abcefbba501099238fc14669e39da1c4c65fba92cc89f8fbb" Jan 27 09:14:15 crc kubenswrapper[4985]: I0127 09:14:15.119938 4985 scope.go:117] "RemoveContainer" containerID="a316fbc33ca53f7064ef55db8e11b28a8f8c1807656d0b022ceca73db35dd0cd" Jan 27 09:14:15 crc kubenswrapper[4985]: I0127 09:14:15.141382 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7645cd55cc-6b9mt"] Jan 27 09:14:15 crc kubenswrapper[4985]: I0127 09:14:15.144199 4985 scope.go:117] "RemoveContainer" containerID="8c3dcd1d0d5d358abcefbba501099238fc14669e39da1c4c65fba92cc89f8fbb" Jan 27 09:14:15 crc kubenswrapper[4985]: E0127 09:14:15.144972 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c3dcd1d0d5d358abcefbba501099238fc14669e39da1c4c65fba92cc89f8fbb\": container with ID starting with 
8c3dcd1d0d5d358abcefbba501099238fc14669e39da1c4c65fba92cc89f8fbb not found: ID does not exist" containerID="8c3dcd1d0d5d358abcefbba501099238fc14669e39da1c4c65fba92cc89f8fbb" Jan 27 09:14:15 crc kubenswrapper[4985]: I0127 09:14:15.145010 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c3dcd1d0d5d358abcefbba501099238fc14669e39da1c4c65fba92cc89f8fbb"} err="failed to get container status \"8c3dcd1d0d5d358abcefbba501099238fc14669e39da1c4c65fba92cc89f8fbb\": rpc error: code = NotFound desc = could not find container \"8c3dcd1d0d5d358abcefbba501099238fc14669e39da1c4c65fba92cc89f8fbb\": container with ID starting with 8c3dcd1d0d5d358abcefbba501099238fc14669e39da1c4c65fba92cc89f8fbb not found: ID does not exist" Jan 27 09:14:15 crc kubenswrapper[4985]: I0127 09:14:15.145039 4985 scope.go:117] "RemoveContainer" containerID="a316fbc33ca53f7064ef55db8e11b28a8f8c1807656d0b022ceca73db35dd0cd" Jan 27 09:14:15 crc kubenswrapper[4985]: E0127 09:14:15.145567 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a316fbc33ca53f7064ef55db8e11b28a8f8c1807656d0b022ceca73db35dd0cd\": container with ID starting with a316fbc33ca53f7064ef55db8e11b28a8f8c1807656d0b022ceca73db35dd0cd not found: ID does not exist" containerID="a316fbc33ca53f7064ef55db8e11b28a8f8c1807656d0b022ceca73db35dd0cd" Jan 27 09:14:15 crc kubenswrapper[4985]: I0127 09:14:15.145604 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a316fbc33ca53f7064ef55db8e11b28a8f8c1807656d0b022ceca73db35dd0cd"} err="failed to get container status \"a316fbc33ca53f7064ef55db8e11b28a8f8c1807656d0b022ceca73db35dd0cd\": rpc error: code = NotFound desc = could not find container \"a316fbc33ca53f7064ef55db8e11b28a8f8c1807656d0b022ceca73db35dd0cd\": container with ID starting with a316fbc33ca53f7064ef55db8e11b28a8f8c1807656d0b022ceca73db35dd0cd not found: ID does not 
exist" Jan 27 09:14:15 crc kubenswrapper[4985]: I0127 09:14:15.156541 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7645cd55cc-6b9mt"] Jan 27 09:14:16 crc kubenswrapper[4985]: I0127 09:14:16.465941 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0982e77-fbf8-4db6-a5b4-359ec47691b4" path="/var/lib/kubelet/pods/b0982e77-fbf8-4db6-a5b4-359ec47691b4/volumes" Jan 27 09:14:18 crc kubenswrapper[4985]: I0127 09:14:18.126395 4985 generic.go:334] "Generic (PLEG): container finished" podID="f2548757-fd02-4c5a-9623-0b1148405dc9" containerID="3df23e6d595b9fe13d34edf998e02c12278353678a0c975beedcd710e25bcf0a" exitCode=0 Jan 27 09:14:18 crc kubenswrapper[4985]: I0127 09:14:18.126470 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2548757-fd02-4c5a-9623-0b1148405dc9","Type":"ContainerDied","Data":"3df23e6d595b9fe13d34edf998e02c12278353678a0c975beedcd710e25bcf0a"} Jan 27 09:14:21 crc kubenswrapper[4985]: I0127 09:14:21.501796 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 27 09:14:21 crc kubenswrapper[4985]: I0127 09:14:21.943441 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-h542g"] Jan 27 09:14:21 crc kubenswrapper[4985]: E0127 09:14:21.944182 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0982e77-fbf8-4db6-a5b4-359ec47691b4" containerName="neutron-httpd" Jan 27 09:14:21 crc kubenswrapper[4985]: I0127 09:14:21.944201 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0982e77-fbf8-4db6-a5b4-359ec47691b4" containerName="neutron-httpd" Jan 27 09:14:21 crc kubenswrapper[4985]: E0127 09:14:21.944218 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0982e77-fbf8-4db6-a5b4-359ec47691b4" containerName="neutron-api" Jan 27 09:14:21 crc kubenswrapper[4985]: I0127 09:14:21.944225 4985 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="b0982e77-fbf8-4db6-a5b4-359ec47691b4" containerName="neutron-api" Jan 27 09:14:21 crc kubenswrapper[4985]: I0127 09:14:21.944402 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0982e77-fbf8-4db6-a5b4-359ec47691b4" containerName="neutron-api" Jan 27 09:14:21 crc kubenswrapper[4985]: I0127 09:14:21.944417 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0982e77-fbf8-4db6-a5b4-359ec47691b4" containerName="neutron-httpd" Jan 27 09:14:21 crc kubenswrapper[4985]: I0127 09:14:21.945049 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-h542g" Jan 27 09:14:21 crc kubenswrapper[4985]: I0127 09:14:21.947640 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 27 09:14:21 crc kubenswrapper[4985]: I0127 09:14:21.947831 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 27 09:14:21 crc kubenswrapper[4985]: I0127 09:14:21.964308 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-h542g"] Jan 27 09:14:22 crc kubenswrapper[4985]: I0127 09:14:22.018368 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f690b134-393f-40a7-b254-7b95dc81afcf-scripts\") pod \"nova-cell0-cell-mapping-h542g\" (UID: \"f690b134-393f-40a7-b254-7b95dc81afcf\") " pod="openstack/nova-cell0-cell-mapping-h542g" Jan 27 09:14:22 crc kubenswrapper[4985]: I0127 09:14:22.018449 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f690b134-393f-40a7-b254-7b95dc81afcf-config-data\") pod \"nova-cell0-cell-mapping-h542g\" (UID: \"f690b134-393f-40a7-b254-7b95dc81afcf\") " pod="openstack/nova-cell0-cell-mapping-h542g" Jan 27 09:14:22 
crc kubenswrapper[4985]: I0127 09:14:22.018572 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c54nc\" (UniqueName: \"kubernetes.io/projected/f690b134-393f-40a7-b254-7b95dc81afcf-kube-api-access-c54nc\") pod \"nova-cell0-cell-mapping-h542g\" (UID: \"f690b134-393f-40a7-b254-7b95dc81afcf\") " pod="openstack/nova-cell0-cell-mapping-h542g" Jan 27 09:14:22 crc kubenswrapper[4985]: I0127 09:14:22.018603 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f690b134-393f-40a7-b254-7b95dc81afcf-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-h542g\" (UID: \"f690b134-393f-40a7-b254-7b95dc81afcf\") " pod="openstack/nova-cell0-cell-mapping-h542g" Jan 27 09:14:22 crc kubenswrapper[4985]: I0127 09:14:22.119947 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f690b134-393f-40a7-b254-7b95dc81afcf-config-data\") pod \"nova-cell0-cell-mapping-h542g\" (UID: \"f690b134-393f-40a7-b254-7b95dc81afcf\") " pod="openstack/nova-cell0-cell-mapping-h542g" Jan 27 09:14:22 crc kubenswrapper[4985]: I0127 09:14:22.120075 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c54nc\" (UniqueName: \"kubernetes.io/projected/f690b134-393f-40a7-b254-7b95dc81afcf-kube-api-access-c54nc\") pod \"nova-cell0-cell-mapping-h542g\" (UID: \"f690b134-393f-40a7-b254-7b95dc81afcf\") " pod="openstack/nova-cell0-cell-mapping-h542g" Jan 27 09:14:22 crc kubenswrapper[4985]: I0127 09:14:22.120105 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f690b134-393f-40a7-b254-7b95dc81afcf-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-h542g\" (UID: \"f690b134-393f-40a7-b254-7b95dc81afcf\") " 
pod="openstack/nova-cell0-cell-mapping-h542g" Jan 27 09:14:22 crc kubenswrapper[4985]: I0127 09:14:22.120129 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f690b134-393f-40a7-b254-7b95dc81afcf-scripts\") pod \"nova-cell0-cell-mapping-h542g\" (UID: \"f690b134-393f-40a7-b254-7b95dc81afcf\") " pod="openstack/nova-cell0-cell-mapping-h542g" Jan 27 09:14:22 crc kubenswrapper[4985]: I0127 09:14:22.137953 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f690b134-393f-40a7-b254-7b95dc81afcf-scripts\") pod \"nova-cell0-cell-mapping-h542g\" (UID: \"f690b134-393f-40a7-b254-7b95dc81afcf\") " pod="openstack/nova-cell0-cell-mapping-h542g" Jan 27 09:14:22 crc kubenswrapper[4985]: I0127 09:14:22.149216 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f690b134-393f-40a7-b254-7b95dc81afcf-config-data\") pod \"nova-cell0-cell-mapping-h542g\" (UID: \"f690b134-393f-40a7-b254-7b95dc81afcf\") " pod="openstack/nova-cell0-cell-mapping-h542g" Jan 27 09:14:22 crc kubenswrapper[4985]: I0127 09:14:22.159236 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c54nc\" (UniqueName: \"kubernetes.io/projected/f690b134-393f-40a7-b254-7b95dc81afcf-kube-api-access-c54nc\") pod \"nova-cell0-cell-mapping-h542g\" (UID: \"f690b134-393f-40a7-b254-7b95dc81afcf\") " pod="openstack/nova-cell0-cell-mapping-h542g" Jan 27 09:14:22 crc kubenswrapper[4985]: I0127 09:14:22.161261 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f690b134-393f-40a7-b254-7b95dc81afcf-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-h542g\" (UID: \"f690b134-393f-40a7-b254-7b95dc81afcf\") " pod="openstack/nova-cell0-cell-mapping-h542g" Jan 27 09:14:22 crc kubenswrapper[4985]: I0127 
09:14:22.176872 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 09:14:22 crc kubenswrapper[4985]: I0127 09:14:22.178155 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:14:22 crc kubenswrapper[4985]: I0127 09:14:22.186917 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 27 09:14:22 crc kubenswrapper[4985]: I0127 09:14:22.195351 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 09:14:22 crc kubenswrapper[4985]: I0127 09:14:22.196870 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 09:14:22 crc kubenswrapper[4985]: I0127 09:14:22.198898 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 27 09:14:22 crc kubenswrapper[4985]: I0127 09:14:22.223943 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9edf8e3a-1b54-4391-bd6f-fce724acd66b-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"9edf8e3a-1b54-4391-bd6f-fce724acd66b\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:14:22 crc kubenswrapper[4985]: I0127 09:14:22.223991 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/342d32a2-6e30-42d4-9f54-8e1ab315ae53-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"342d32a2-6e30-42d4-9f54-8e1ab315ae53\") " pod="openstack/nova-scheduler-0" Jan 27 09:14:22 crc kubenswrapper[4985]: I0127 09:14:22.224052 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6lqw\" (UniqueName: 
\"kubernetes.io/projected/9edf8e3a-1b54-4391-bd6f-fce724acd66b-kube-api-access-b6lqw\") pod \"nova-cell1-novncproxy-0\" (UID: \"9edf8e3a-1b54-4391-bd6f-fce724acd66b\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:14:22 crc kubenswrapper[4985]: I0127 09:14:22.224166 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9edf8e3a-1b54-4391-bd6f-fce724acd66b-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"9edf8e3a-1b54-4391-bd6f-fce724acd66b\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:14:22 crc kubenswrapper[4985]: I0127 09:14:22.224187 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwtqg\" (UniqueName: \"kubernetes.io/projected/342d32a2-6e30-42d4-9f54-8e1ab315ae53-kube-api-access-wwtqg\") pod \"nova-scheduler-0\" (UID: \"342d32a2-6e30-42d4-9f54-8e1ab315ae53\") " pod="openstack/nova-scheduler-0" Jan 27 09:14:22 crc kubenswrapper[4985]: I0127 09:14:22.224207 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/342d32a2-6e30-42d4-9f54-8e1ab315ae53-config-data\") pod \"nova-scheduler-0\" (UID: \"342d32a2-6e30-42d4-9f54-8e1ab315ae53\") " pod="openstack/nova-scheduler-0" Jan 27 09:14:22 crc kubenswrapper[4985]: I0127 09:14:22.230595 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 09:14:22 crc kubenswrapper[4985]: I0127 09:14:22.253752 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 09:14:22 crc kubenswrapper[4985]: I0127 09:14:22.275915 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 27 09:14:22 crc kubenswrapper[4985]: I0127 09:14:22.277434 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 09:14:22 crc kubenswrapper[4985]: I0127 09:14:22.281252 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 27 09:14:22 crc kubenswrapper[4985]: I0127 09:14:22.298000 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-h542g" Jan 27 09:14:22 crc kubenswrapper[4985]: I0127 09:14:22.314549 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 09:14:22 crc kubenswrapper[4985]: I0127 09:14:22.330063 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/779d830e-4172-48f4-9631-c002b97f0ecb-config-data\") pod \"nova-metadata-0\" (UID: \"779d830e-4172-48f4-9631-c002b97f0ecb\") " pod="openstack/nova-metadata-0" Jan 27 09:14:22 crc kubenswrapper[4985]: I0127 09:14:22.330113 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9edf8e3a-1b54-4391-bd6f-fce724acd66b-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"9edf8e3a-1b54-4391-bd6f-fce724acd66b\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:14:22 crc kubenswrapper[4985]: I0127 09:14:22.330135 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwtqg\" (UniqueName: \"kubernetes.io/projected/342d32a2-6e30-42d4-9f54-8e1ab315ae53-kube-api-access-wwtqg\") pod \"nova-scheduler-0\" (UID: \"342d32a2-6e30-42d4-9f54-8e1ab315ae53\") " pod="openstack/nova-scheduler-0" Jan 27 09:14:22 crc kubenswrapper[4985]: I0127 09:14:22.330154 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/779d830e-4172-48f4-9631-c002b97f0ecb-logs\") pod \"nova-metadata-0\" (UID: 
\"779d830e-4172-48f4-9631-c002b97f0ecb\") " pod="openstack/nova-metadata-0" Jan 27 09:14:22 crc kubenswrapper[4985]: I0127 09:14:22.330177 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/342d32a2-6e30-42d4-9f54-8e1ab315ae53-config-data\") pod \"nova-scheduler-0\" (UID: \"342d32a2-6e30-42d4-9f54-8e1ab315ae53\") " pod="openstack/nova-scheduler-0" Jan 27 09:14:22 crc kubenswrapper[4985]: I0127 09:14:22.330201 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/779d830e-4172-48f4-9631-c002b97f0ecb-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"779d830e-4172-48f4-9631-c002b97f0ecb\") " pod="openstack/nova-metadata-0" Jan 27 09:14:22 crc kubenswrapper[4985]: I0127 09:14:22.330223 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9edf8e3a-1b54-4391-bd6f-fce724acd66b-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"9edf8e3a-1b54-4391-bd6f-fce724acd66b\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:14:22 crc kubenswrapper[4985]: I0127 09:14:22.330243 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/342d32a2-6e30-42d4-9f54-8e1ab315ae53-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"342d32a2-6e30-42d4-9f54-8e1ab315ae53\") " pod="openstack/nova-scheduler-0" Jan 27 09:14:22 crc kubenswrapper[4985]: I0127 09:14:22.330304 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6lqw\" (UniqueName: \"kubernetes.io/projected/9edf8e3a-1b54-4391-bd6f-fce724acd66b-kube-api-access-b6lqw\") pod \"nova-cell1-novncproxy-0\" (UID: \"9edf8e3a-1b54-4391-bd6f-fce724acd66b\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:14:22 crc 
kubenswrapper[4985]: I0127 09:14:22.330358 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqx5h\" (UniqueName: \"kubernetes.io/projected/779d830e-4172-48f4-9631-c002b97f0ecb-kube-api-access-xqx5h\") pod \"nova-metadata-0\" (UID: \"779d830e-4172-48f4-9631-c002b97f0ecb\") " pod="openstack/nova-metadata-0" Jan 27 09:14:22 crc kubenswrapper[4985]: I0127 09:14:22.340757 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9edf8e3a-1b54-4391-bd6f-fce724acd66b-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"9edf8e3a-1b54-4391-bd6f-fce724acd66b\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:14:22 crc kubenswrapper[4985]: I0127 09:14:22.343730 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/342d32a2-6e30-42d4-9f54-8e1ab315ae53-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"342d32a2-6e30-42d4-9f54-8e1ab315ae53\") " pod="openstack/nova-scheduler-0" Jan 27 09:14:22 crc kubenswrapper[4985]: I0127 09:14:22.348201 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/342d32a2-6e30-42d4-9f54-8e1ab315ae53-config-data\") pod \"nova-scheduler-0\" (UID: \"342d32a2-6e30-42d4-9f54-8e1ab315ae53\") " pod="openstack/nova-scheduler-0" Jan 27 09:14:22 crc kubenswrapper[4985]: I0127 09:14:22.354221 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9edf8e3a-1b54-4391-bd6f-fce724acd66b-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"9edf8e3a-1b54-4391-bd6f-fce724acd66b\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:14:22 crc kubenswrapper[4985]: I0127 09:14:22.387606 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 27 09:14:22 crc 
kubenswrapper[4985]: I0127 09:14:22.389333 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 09:14:22 crc kubenswrapper[4985]: I0127 09:14:22.393577 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwtqg\" (UniqueName: \"kubernetes.io/projected/342d32a2-6e30-42d4-9f54-8e1ab315ae53-kube-api-access-wwtqg\") pod \"nova-scheduler-0\" (UID: \"342d32a2-6e30-42d4-9f54-8e1ab315ae53\") " pod="openstack/nova-scheduler-0" Jan 27 09:14:22 crc kubenswrapper[4985]: I0127 09:14:22.402720 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 27 09:14:22 crc kubenswrapper[4985]: I0127 09:14:22.406934 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 09:14:22 crc kubenswrapper[4985]: I0127 09:14:22.416301 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6lqw\" (UniqueName: \"kubernetes.io/projected/9edf8e3a-1b54-4391-bd6f-fce724acd66b-kube-api-access-b6lqw\") pod \"nova-cell1-novncproxy-0\" (UID: \"9edf8e3a-1b54-4391-bd6f-fce724acd66b\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:14:22 crc kubenswrapper[4985]: I0127 09:14:22.419732 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-647df7b8c5-hdgcj"] Jan 27 09:14:22 crc kubenswrapper[4985]: I0127 09:14:22.421481 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-647df7b8c5-hdgcj" Jan 27 09:14:22 crc kubenswrapper[4985]: I0127 09:14:22.431082 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-647df7b8c5-hdgcj"] Jan 27 09:14:22 crc kubenswrapper[4985]: I0127 09:14:22.433460 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d5758db5-8df4-4e50-a1b0-71ea5996f09a-ovsdbserver-nb\") pod \"dnsmasq-dns-647df7b8c5-hdgcj\" (UID: \"d5758db5-8df4-4e50-a1b0-71ea5996f09a\") " pod="openstack/dnsmasq-dns-647df7b8c5-hdgcj" Jan 27 09:14:22 crc kubenswrapper[4985]: I0127 09:14:22.433647 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d5758db5-8df4-4e50-a1b0-71ea5996f09a-dns-swift-storage-0\") pod \"dnsmasq-dns-647df7b8c5-hdgcj\" (UID: \"d5758db5-8df4-4e50-a1b0-71ea5996f09a\") " pod="openstack/dnsmasq-dns-647df7b8c5-hdgcj" Jan 27 09:14:22 crc kubenswrapper[4985]: I0127 09:14:22.433750 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ac9bfd8-ec34-4938-b325-949459bf4876-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7ac9bfd8-ec34-4938-b325-949459bf4876\") " pod="openstack/nova-api-0" Jan 27 09:14:22 crc kubenswrapper[4985]: I0127 09:14:22.433875 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5758db5-8df4-4e50-a1b0-71ea5996f09a-config\") pod \"dnsmasq-dns-647df7b8c5-hdgcj\" (UID: \"d5758db5-8df4-4e50-a1b0-71ea5996f09a\") " pod="openstack/dnsmasq-dns-647df7b8c5-hdgcj" Jan 27 09:14:22 crc kubenswrapper[4985]: I0127 09:14:22.434016 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-rvxz2\" (UniqueName: \"kubernetes.io/projected/7ac9bfd8-ec34-4938-b325-949459bf4876-kube-api-access-rvxz2\") pod \"nova-api-0\" (UID: \"7ac9bfd8-ec34-4938-b325-949459bf4876\") " pod="openstack/nova-api-0" Jan 27 09:14:22 crc kubenswrapper[4985]: I0127 09:14:22.434155 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ac9bfd8-ec34-4938-b325-949459bf4876-logs\") pod \"nova-api-0\" (UID: \"7ac9bfd8-ec34-4938-b325-949459bf4876\") " pod="openstack/nova-api-0" Jan 27 09:14:22 crc kubenswrapper[4985]: I0127 09:14:22.434258 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ac9bfd8-ec34-4938-b325-949459bf4876-config-data\") pod \"nova-api-0\" (UID: \"7ac9bfd8-ec34-4938-b325-949459bf4876\") " pod="openstack/nova-api-0" Jan 27 09:14:22 crc kubenswrapper[4985]: I0127 09:14:22.434384 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqx5h\" (UniqueName: \"kubernetes.io/projected/779d830e-4172-48f4-9631-c002b97f0ecb-kube-api-access-xqx5h\") pod \"nova-metadata-0\" (UID: \"779d830e-4172-48f4-9631-c002b97f0ecb\") " pod="openstack/nova-metadata-0" Jan 27 09:14:22 crc kubenswrapper[4985]: I0127 09:14:22.434476 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d5758db5-8df4-4e50-a1b0-71ea5996f09a-ovsdbserver-sb\") pod \"dnsmasq-dns-647df7b8c5-hdgcj\" (UID: \"d5758db5-8df4-4e50-a1b0-71ea5996f09a\") " pod="openstack/dnsmasq-dns-647df7b8c5-hdgcj" Jan 27 09:14:22 crc kubenswrapper[4985]: I0127 09:14:22.434665 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cw4jq\" (UniqueName: 
\"kubernetes.io/projected/d5758db5-8df4-4e50-a1b0-71ea5996f09a-kube-api-access-cw4jq\") pod \"dnsmasq-dns-647df7b8c5-hdgcj\" (UID: \"d5758db5-8df4-4e50-a1b0-71ea5996f09a\") " pod="openstack/dnsmasq-dns-647df7b8c5-hdgcj" Jan 27 09:14:22 crc kubenswrapper[4985]: I0127 09:14:22.434831 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/779d830e-4172-48f4-9631-c002b97f0ecb-config-data\") pod \"nova-metadata-0\" (UID: \"779d830e-4172-48f4-9631-c002b97f0ecb\") " pod="openstack/nova-metadata-0" Jan 27 09:14:22 crc kubenswrapper[4985]: I0127 09:14:22.434987 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/779d830e-4172-48f4-9631-c002b97f0ecb-logs\") pod \"nova-metadata-0\" (UID: \"779d830e-4172-48f4-9631-c002b97f0ecb\") " pod="openstack/nova-metadata-0" Jan 27 09:14:22 crc kubenswrapper[4985]: I0127 09:14:22.435104 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/779d830e-4172-48f4-9631-c002b97f0ecb-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"779d830e-4172-48f4-9631-c002b97f0ecb\") " pod="openstack/nova-metadata-0" Jan 27 09:14:22 crc kubenswrapper[4985]: I0127 09:14:22.435225 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d5758db5-8df4-4e50-a1b0-71ea5996f09a-dns-svc\") pod \"dnsmasq-dns-647df7b8c5-hdgcj\" (UID: \"d5758db5-8df4-4e50-a1b0-71ea5996f09a\") " pod="openstack/dnsmasq-dns-647df7b8c5-hdgcj" Jan 27 09:14:22 crc kubenswrapper[4985]: I0127 09:14:22.438196 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/779d830e-4172-48f4-9631-c002b97f0ecb-logs\") pod \"nova-metadata-0\" (UID: \"779d830e-4172-48f4-9631-c002b97f0ecb\") " 
pod="openstack/nova-metadata-0" Jan 27 09:14:22 crc kubenswrapper[4985]: I0127 09:14:22.442282 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/779d830e-4172-48f4-9631-c002b97f0ecb-config-data\") pod \"nova-metadata-0\" (UID: \"779d830e-4172-48f4-9631-c002b97f0ecb\") " pod="openstack/nova-metadata-0" Jan 27 09:14:22 crc kubenswrapper[4985]: I0127 09:14:22.458830 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/779d830e-4172-48f4-9631-c002b97f0ecb-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"779d830e-4172-48f4-9631-c002b97f0ecb\") " pod="openstack/nova-metadata-0" Jan 27 09:14:22 crc kubenswrapper[4985]: I0127 09:14:22.471568 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqx5h\" (UniqueName: \"kubernetes.io/projected/779d830e-4172-48f4-9631-c002b97f0ecb-kube-api-access-xqx5h\") pod \"nova-metadata-0\" (UID: \"779d830e-4172-48f4-9631-c002b97f0ecb\") " pod="openstack/nova-metadata-0" Jan 27 09:14:22 crc kubenswrapper[4985]: I0127 09:14:22.545451 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 09:14:22 crc kubenswrapper[4985]: I0127 09:14:22.546736 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d5758db5-8df4-4e50-a1b0-71ea5996f09a-dns-svc\") pod \"dnsmasq-dns-647df7b8c5-hdgcj\" (UID: \"d5758db5-8df4-4e50-a1b0-71ea5996f09a\") " pod="openstack/dnsmasq-dns-647df7b8c5-hdgcj" Jan 27 09:14:22 crc kubenswrapper[4985]: I0127 09:14:22.550161 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d5758db5-8df4-4e50-a1b0-71ea5996f09a-dns-svc\") pod \"dnsmasq-dns-647df7b8c5-hdgcj\" (UID: \"d5758db5-8df4-4e50-a1b0-71ea5996f09a\") " pod="openstack/dnsmasq-dns-647df7b8c5-hdgcj" Jan 27 09:14:22 crc kubenswrapper[4985]: I0127 09:14:22.551752 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d5758db5-8df4-4e50-a1b0-71ea5996f09a-ovsdbserver-nb\") pod \"dnsmasq-dns-647df7b8c5-hdgcj\" (UID: \"d5758db5-8df4-4e50-a1b0-71ea5996f09a\") " pod="openstack/dnsmasq-dns-647df7b8c5-hdgcj" Jan 27 09:14:22 crc kubenswrapper[4985]: I0127 09:14:22.551808 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d5758db5-8df4-4e50-a1b0-71ea5996f09a-dns-swift-storage-0\") pod \"dnsmasq-dns-647df7b8c5-hdgcj\" (UID: \"d5758db5-8df4-4e50-a1b0-71ea5996f09a\") " pod="openstack/dnsmasq-dns-647df7b8c5-hdgcj" Jan 27 09:14:22 crc kubenswrapper[4985]: I0127 09:14:22.551846 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ac9bfd8-ec34-4938-b325-949459bf4876-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7ac9bfd8-ec34-4938-b325-949459bf4876\") " pod="openstack/nova-api-0" Jan 27 09:14:22 crc kubenswrapper[4985]: I0127 
09:14:22.551956 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5758db5-8df4-4e50-a1b0-71ea5996f09a-config\") pod \"dnsmasq-dns-647df7b8c5-hdgcj\" (UID: \"d5758db5-8df4-4e50-a1b0-71ea5996f09a\") " pod="openstack/dnsmasq-dns-647df7b8c5-hdgcj" Jan 27 09:14:22 crc kubenswrapper[4985]: I0127 09:14:22.552355 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvxz2\" (UniqueName: \"kubernetes.io/projected/7ac9bfd8-ec34-4938-b325-949459bf4876-kube-api-access-rvxz2\") pod \"nova-api-0\" (UID: \"7ac9bfd8-ec34-4938-b325-949459bf4876\") " pod="openstack/nova-api-0" Jan 27 09:14:22 crc kubenswrapper[4985]: I0127 09:14:22.552467 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ac9bfd8-ec34-4938-b325-949459bf4876-logs\") pod \"nova-api-0\" (UID: \"7ac9bfd8-ec34-4938-b325-949459bf4876\") " pod="openstack/nova-api-0" Jan 27 09:14:22 crc kubenswrapper[4985]: I0127 09:14:22.552730 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d5758db5-8df4-4e50-a1b0-71ea5996f09a-ovsdbserver-nb\") pod \"dnsmasq-dns-647df7b8c5-hdgcj\" (UID: \"d5758db5-8df4-4e50-a1b0-71ea5996f09a\") " pod="openstack/dnsmasq-dns-647df7b8c5-hdgcj" Jan 27 09:14:22 crc kubenswrapper[4985]: I0127 09:14:22.552815 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ac9bfd8-ec34-4938-b325-949459bf4876-config-data\") pod \"nova-api-0\" (UID: \"7ac9bfd8-ec34-4938-b325-949459bf4876\") " pod="openstack/nova-api-0" Jan 27 09:14:22 crc kubenswrapper[4985]: I0127 09:14:22.553207 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/d5758db5-8df4-4e50-a1b0-71ea5996f09a-ovsdbserver-sb\") pod \"dnsmasq-dns-647df7b8c5-hdgcj\" (UID: \"d5758db5-8df4-4e50-a1b0-71ea5996f09a\") " pod="openstack/dnsmasq-dns-647df7b8c5-hdgcj" Jan 27 09:14:22 crc kubenswrapper[4985]: I0127 09:14:22.553439 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cw4jq\" (UniqueName: \"kubernetes.io/projected/d5758db5-8df4-4e50-a1b0-71ea5996f09a-kube-api-access-cw4jq\") pod \"dnsmasq-dns-647df7b8c5-hdgcj\" (UID: \"d5758db5-8df4-4e50-a1b0-71ea5996f09a\") " pod="openstack/dnsmasq-dns-647df7b8c5-hdgcj" Jan 27 09:14:22 crc kubenswrapper[4985]: I0127 09:14:22.557481 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d5758db5-8df4-4e50-a1b0-71ea5996f09a-dns-swift-storage-0\") pod \"dnsmasq-dns-647df7b8c5-hdgcj\" (UID: \"d5758db5-8df4-4e50-a1b0-71ea5996f09a\") " pod="openstack/dnsmasq-dns-647df7b8c5-hdgcj" Jan 27 09:14:22 crc kubenswrapper[4985]: I0127 09:14:22.562030 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ac9bfd8-ec34-4938-b325-949459bf4876-logs\") pod \"nova-api-0\" (UID: \"7ac9bfd8-ec34-4938-b325-949459bf4876\") " pod="openstack/nova-api-0" Jan 27 09:14:22 crc kubenswrapper[4985]: I0127 09:14:22.564377 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ac9bfd8-ec34-4938-b325-949459bf4876-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7ac9bfd8-ec34-4938-b325-949459bf4876\") " pod="openstack/nova-api-0" Jan 27 09:14:22 crc kubenswrapper[4985]: I0127 09:14:22.565146 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ac9bfd8-ec34-4938-b325-949459bf4876-config-data\") pod \"nova-api-0\" (UID: \"7ac9bfd8-ec34-4938-b325-949459bf4876\") " 
pod="openstack/nova-api-0" Jan 27 09:14:22 crc kubenswrapper[4985]: I0127 09:14:22.565457 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5758db5-8df4-4e50-a1b0-71ea5996f09a-config\") pod \"dnsmasq-dns-647df7b8c5-hdgcj\" (UID: \"d5758db5-8df4-4e50-a1b0-71ea5996f09a\") " pod="openstack/dnsmasq-dns-647df7b8c5-hdgcj" Jan 27 09:14:22 crc kubenswrapper[4985]: I0127 09:14:22.567268 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d5758db5-8df4-4e50-a1b0-71ea5996f09a-ovsdbserver-sb\") pod \"dnsmasq-dns-647df7b8c5-hdgcj\" (UID: \"d5758db5-8df4-4e50-a1b0-71ea5996f09a\") " pod="openstack/dnsmasq-dns-647df7b8c5-hdgcj" Jan 27 09:14:22 crc kubenswrapper[4985]: I0127 09:14:22.576075 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cw4jq\" (UniqueName: \"kubernetes.io/projected/d5758db5-8df4-4e50-a1b0-71ea5996f09a-kube-api-access-cw4jq\") pod \"dnsmasq-dns-647df7b8c5-hdgcj\" (UID: \"d5758db5-8df4-4e50-a1b0-71ea5996f09a\") " pod="openstack/dnsmasq-dns-647df7b8c5-hdgcj" Jan 27 09:14:22 crc kubenswrapper[4985]: I0127 09:14:22.580948 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvxz2\" (UniqueName: \"kubernetes.io/projected/7ac9bfd8-ec34-4938-b325-949459bf4876-kube-api-access-rvxz2\") pod \"nova-api-0\" (UID: \"7ac9bfd8-ec34-4938-b325-949459bf4876\") " pod="openstack/nova-api-0" Jan 27 09:14:22 crc kubenswrapper[4985]: I0127 09:14:22.597305 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-647df7b8c5-hdgcj" Jan 27 09:14:22 crc kubenswrapper[4985]: I0127 09:14:22.597881 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:14:22 crc kubenswrapper[4985]: I0127 09:14:22.635427 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 09:14:22 crc kubenswrapper[4985]: I0127 09:14:22.882067 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 09:14:22 crc kubenswrapper[4985]: I0127 09:14:22.927317 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-h542g"] Jan 27 09:14:23 crc kubenswrapper[4985]: I0127 09:14:23.160436 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 09:14:23 crc kubenswrapper[4985]: I0127 09:14:23.184226 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"779d830e-4172-48f4-9631-c002b97f0ecb","Type":"ContainerStarted","Data":"b2b04b5bb3c09e889bbafdc68abdbc65355eb0c75f7aaff495cb723371d82b5d"} Jan 27 09:14:23 crc kubenswrapper[4985]: I0127 09:14:23.185560 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-h542g" event={"ID":"f690b134-393f-40a7-b254-7b95dc81afcf","Type":"ContainerStarted","Data":"53d019d5fa1632e59f5aff39f202c4d0879480cd2295002f4665dc065dc547d2"} Jan 27 09:14:23 crc kubenswrapper[4985]: I0127 09:14:23.370825 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-647df7b8c5-hdgcj"] Jan 27 09:14:23 crc kubenswrapper[4985]: I0127 09:14:23.381966 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-fhg5m"] Jan 27 09:14:23 crc kubenswrapper[4985]: I0127 09:14:23.383689 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-fhg5m" Jan 27 09:14:23 crc kubenswrapper[4985]: I0127 09:14:23.386665 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 27 09:14:23 crc kubenswrapper[4985]: I0127 09:14:23.387243 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 27 09:14:23 crc kubenswrapper[4985]: I0127 09:14:23.393463 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-fhg5m"] Jan 27 09:14:23 crc kubenswrapper[4985]: I0127 09:14:23.406631 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 09:14:23 crc kubenswrapper[4985]: I0127 09:14:23.475444 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87b772fa-eb86-4e1a-8f59-bc3c1748ec07-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-fhg5m\" (UID: \"87b772fa-eb86-4e1a-8f59-bc3c1748ec07\") " pod="openstack/nova-cell1-conductor-db-sync-fhg5m" Jan 27 09:14:23 crc kubenswrapper[4985]: I0127 09:14:23.475684 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87b772fa-eb86-4e1a-8f59-bc3c1748ec07-config-data\") pod \"nova-cell1-conductor-db-sync-fhg5m\" (UID: \"87b772fa-eb86-4e1a-8f59-bc3c1748ec07\") " pod="openstack/nova-cell1-conductor-db-sync-fhg5m" Jan 27 09:14:23 crc kubenswrapper[4985]: I0127 09:14:23.475741 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4kh5\" (UniqueName: \"kubernetes.io/projected/87b772fa-eb86-4e1a-8f59-bc3c1748ec07-kube-api-access-k4kh5\") pod \"nova-cell1-conductor-db-sync-fhg5m\" (UID: \"87b772fa-eb86-4e1a-8f59-bc3c1748ec07\") " 
pod="openstack/nova-cell1-conductor-db-sync-fhg5m" Jan 27 09:14:23 crc kubenswrapper[4985]: I0127 09:14:23.475791 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87b772fa-eb86-4e1a-8f59-bc3c1748ec07-scripts\") pod \"nova-cell1-conductor-db-sync-fhg5m\" (UID: \"87b772fa-eb86-4e1a-8f59-bc3c1748ec07\") " pod="openstack/nova-cell1-conductor-db-sync-fhg5m" Jan 27 09:14:23 crc kubenswrapper[4985]: I0127 09:14:23.523276 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 09:14:23 crc kubenswrapper[4985]: I0127 09:14:23.538070 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 09:14:23 crc kubenswrapper[4985]: I0127 09:14:23.578032 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87b772fa-eb86-4e1a-8f59-bc3c1748ec07-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-fhg5m\" (UID: \"87b772fa-eb86-4e1a-8f59-bc3c1748ec07\") " pod="openstack/nova-cell1-conductor-db-sync-fhg5m" Jan 27 09:14:23 crc kubenswrapper[4985]: I0127 09:14:23.578159 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87b772fa-eb86-4e1a-8f59-bc3c1748ec07-config-data\") pod \"nova-cell1-conductor-db-sync-fhg5m\" (UID: \"87b772fa-eb86-4e1a-8f59-bc3c1748ec07\") " pod="openstack/nova-cell1-conductor-db-sync-fhg5m" Jan 27 09:14:23 crc kubenswrapper[4985]: I0127 09:14:23.578199 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4kh5\" (UniqueName: \"kubernetes.io/projected/87b772fa-eb86-4e1a-8f59-bc3c1748ec07-kube-api-access-k4kh5\") pod \"nova-cell1-conductor-db-sync-fhg5m\" (UID: \"87b772fa-eb86-4e1a-8f59-bc3c1748ec07\") " pod="openstack/nova-cell1-conductor-db-sync-fhg5m" Jan 27 09:14:23 crc 
kubenswrapper[4985]: I0127 09:14:23.578240 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87b772fa-eb86-4e1a-8f59-bc3c1748ec07-scripts\") pod \"nova-cell1-conductor-db-sync-fhg5m\" (UID: \"87b772fa-eb86-4e1a-8f59-bc3c1748ec07\") " pod="openstack/nova-cell1-conductor-db-sync-fhg5m" Jan 27 09:14:23 crc kubenswrapper[4985]: I0127 09:14:23.585968 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87b772fa-eb86-4e1a-8f59-bc3c1748ec07-scripts\") pod \"nova-cell1-conductor-db-sync-fhg5m\" (UID: \"87b772fa-eb86-4e1a-8f59-bc3c1748ec07\") " pod="openstack/nova-cell1-conductor-db-sync-fhg5m" Jan 27 09:14:23 crc kubenswrapper[4985]: I0127 09:14:23.586245 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87b772fa-eb86-4e1a-8f59-bc3c1748ec07-config-data\") pod \"nova-cell1-conductor-db-sync-fhg5m\" (UID: \"87b772fa-eb86-4e1a-8f59-bc3c1748ec07\") " pod="openstack/nova-cell1-conductor-db-sync-fhg5m" Jan 27 09:14:23 crc kubenswrapper[4985]: I0127 09:14:23.588880 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87b772fa-eb86-4e1a-8f59-bc3c1748ec07-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-fhg5m\" (UID: \"87b772fa-eb86-4e1a-8f59-bc3c1748ec07\") " pod="openstack/nova-cell1-conductor-db-sync-fhg5m" Jan 27 09:14:23 crc kubenswrapper[4985]: I0127 09:14:23.613385 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4kh5\" (UniqueName: \"kubernetes.io/projected/87b772fa-eb86-4e1a-8f59-bc3c1748ec07-kube-api-access-k4kh5\") pod \"nova-cell1-conductor-db-sync-fhg5m\" (UID: \"87b772fa-eb86-4e1a-8f59-bc3c1748ec07\") " pod="openstack/nova-cell1-conductor-db-sync-fhg5m" Jan 27 09:14:23 crc kubenswrapper[4985]: I0127 09:14:23.840684 4985 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-fhg5m" Jan 27 09:14:24 crc kubenswrapper[4985]: I0127 09:14:24.219288 4985 generic.go:334] "Generic (PLEG): container finished" podID="d5758db5-8df4-4e50-a1b0-71ea5996f09a" containerID="f2b7ab185b897b0e9da8210682a09566387e587c9d1b2294e8e1840ac2039731" exitCode=0 Jan 27 09:14:24 crc kubenswrapper[4985]: I0127 09:14:24.219392 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-647df7b8c5-hdgcj" event={"ID":"d5758db5-8df4-4e50-a1b0-71ea5996f09a","Type":"ContainerDied","Data":"f2b7ab185b897b0e9da8210682a09566387e587c9d1b2294e8e1840ac2039731"} Jan 27 09:14:24 crc kubenswrapper[4985]: I0127 09:14:24.220015 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-647df7b8c5-hdgcj" event={"ID":"d5758db5-8df4-4e50-a1b0-71ea5996f09a","Type":"ContainerStarted","Data":"3d19929d43f77ca97fdea73590bf0718dc7eda8e0f399a9935c6401ba7abcd25"} Jan 27 09:14:24 crc kubenswrapper[4985]: I0127 09:14:24.236934 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7ac9bfd8-ec34-4938-b325-949459bf4876","Type":"ContainerStarted","Data":"c496f1ec107c7d4b88cf04362e6a56751ee2134c457d1938c49f81f6b80f862e"} Jan 27 09:14:24 crc kubenswrapper[4985]: I0127 09:14:24.253029 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-h542g" event={"ID":"f690b134-393f-40a7-b254-7b95dc81afcf","Type":"ContainerStarted","Data":"6a7573126217ca48110120fde894f2b6ad2ea692912d430e18f51c77e5a99b04"} Jan 27 09:14:24 crc kubenswrapper[4985]: I0127 09:14:24.258430 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"9edf8e3a-1b54-4391-bd6f-fce724acd66b","Type":"ContainerStarted","Data":"00c99b05ed1f17a49ec32d5574937451564a790149fb733b11ed72d65ead00fd"} Jan 27 09:14:24 crc kubenswrapper[4985]: I0127 09:14:24.263792 4985 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"342d32a2-6e30-42d4-9f54-8e1ab315ae53","Type":"ContainerStarted","Data":"470513bb92e47c0580e747ca6a2e5faf49c1705866f1edb3b8921caed63d9cdf"} Jan 27 09:14:24 crc kubenswrapper[4985]: I0127 09:14:24.279630 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-h542g" podStartSLOduration=3.279612045 podStartE2EDuration="3.279612045s" podCreationTimestamp="2026-01-27 09:14:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:14:24.278363161 +0000 UTC m=+1248.569458002" watchObservedRunningTime="2026-01-27 09:14:24.279612045 +0000 UTC m=+1248.570706886" Jan 27 09:14:24 crc kubenswrapper[4985]: I0127 09:14:24.382195 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-fhg5m"] Jan 27 09:14:25 crc kubenswrapper[4985]: I0127 09:14:25.287991 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-fhg5m" event={"ID":"87b772fa-eb86-4e1a-8f59-bc3c1748ec07","Type":"ContainerStarted","Data":"99ea2c355c891717787ae12c5ef6ebffb92611dc1d13cfef5670b261a564babd"} Jan 27 09:14:25 crc kubenswrapper[4985]: I0127 09:14:25.288600 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-fhg5m" event={"ID":"87b772fa-eb86-4e1a-8f59-bc3c1748ec07","Type":"ContainerStarted","Data":"74d11416a8f8f59be450080f2db727e794336bb8c762edba16e8ce8d052d0179"} Jan 27 09:14:25 crc kubenswrapper[4985]: I0127 09:14:25.302399 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-647df7b8c5-hdgcj" event={"ID":"d5758db5-8df4-4e50-a1b0-71ea5996f09a","Type":"ContainerStarted","Data":"3457662f6fff19e6fc4a64cc3c356cc797c57d34968aaac15edcb4be4a5e0e99"} Jan 27 09:14:25 crc kubenswrapper[4985]: I0127 09:14:25.302589 
4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-647df7b8c5-hdgcj" Jan 27 09:14:25 crc kubenswrapper[4985]: I0127 09:14:25.318977 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-fhg5m" podStartSLOduration=2.318947839 podStartE2EDuration="2.318947839s" podCreationTimestamp="2026-01-27 09:14:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:14:25.314787916 +0000 UTC m=+1249.605882757" watchObservedRunningTime="2026-01-27 09:14:25.318947839 +0000 UTC m=+1249.610042680" Jan 27 09:14:25 crc kubenswrapper[4985]: I0127 09:14:25.340222 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-647df7b8c5-hdgcj" podStartSLOduration=3.340202923 podStartE2EDuration="3.340202923s" podCreationTimestamp="2026-01-27 09:14:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:14:25.334026964 +0000 UTC m=+1249.625121815" watchObservedRunningTime="2026-01-27 09:14:25.340202923 +0000 UTC m=+1249.631297764" Jan 27 09:14:25 crc kubenswrapper[4985]: I0127 09:14:25.725284 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 09:14:25 crc kubenswrapper[4985]: I0127 09:14:25.737327 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 09:14:27 crc kubenswrapper[4985]: I0127 09:14:27.326820 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"9edf8e3a-1b54-4391-bd6f-fce724acd66b","Type":"ContainerStarted","Data":"407c534bbbc8115511695cf3116bb388a3d0e423e975c5fb96580d44080e3809"} Jan 27 09:14:27 crc kubenswrapper[4985]: I0127 09:14:27.326923 4985 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="9edf8e3a-1b54-4391-bd6f-fce724acd66b" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://407c534bbbc8115511695cf3116bb388a3d0e423e975c5fb96580d44080e3809" gracePeriod=30 Jan 27 09:14:27 crc kubenswrapper[4985]: I0127 09:14:27.330648 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"342d32a2-6e30-42d4-9f54-8e1ab315ae53","Type":"ContainerStarted","Data":"159fa101ae0fded3d47b9cc5e0a54f49851e3c0439e204c1fd3cbfd32d60ffda"} Jan 27 09:14:27 crc kubenswrapper[4985]: I0127 09:14:27.334551 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"779d830e-4172-48f4-9631-c002b97f0ecb","Type":"ContainerStarted","Data":"375ff64cc08cce0402e4b01c5c74fbb468979311c7ae166aad32ae3b1bc0c824"} Jan 27 09:14:27 crc kubenswrapper[4985]: I0127 09:14:27.334590 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"779d830e-4172-48f4-9631-c002b97f0ecb","Type":"ContainerStarted","Data":"435550e326606b36d28d6029c6277ef8757a4a2246c41b7d9f577d22ef4ab491"} Jan 27 09:14:27 crc kubenswrapper[4985]: I0127 09:14:27.334698 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="779d830e-4172-48f4-9631-c002b97f0ecb" containerName="nova-metadata-log" containerID="cri-o://435550e326606b36d28d6029c6277ef8757a4a2246c41b7d9f577d22ef4ab491" gracePeriod=30 Jan 27 09:14:27 crc kubenswrapper[4985]: I0127 09:14:27.334972 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="779d830e-4172-48f4-9631-c002b97f0ecb" containerName="nova-metadata-metadata" containerID="cri-o://375ff64cc08cce0402e4b01c5c74fbb468979311c7ae166aad32ae3b1bc0c824" gracePeriod=30 Jan 27 09:14:27 crc kubenswrapper[4985]: I0127 09:14:27.342023 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-api-0" event={"ID":"7ac9bfd8-ec34-4938-b325-949459bf4876","Type":"ContainerStarted","Data":"d0a76535a14cf92ddc5d142401811038f0d3605e8b8951847c791b98d40a5449"} Jan 27 09:14:27 crc kubenswrapper[4985]: I0127 09:14:27.342071 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7ac9bfd8-ec34-4938-b325-949459bf4876","Type":"ContainerStarted","Data":"528bc39758c1ed03e282630e819240cf408c855d33627e09fa58a29f282d7dd1"} Jan 27 09:14:27 crc kubenswrapper[4985]: I0127 09:14:27.363248 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.199884278 podStartE2EDuration="5.363224685s" podCreationTimestamp="2026-01-27 09:14:22 +0000 UTC" firstStartedPulling="2026-01-27 09:14:23.370830591 +0000 UTC m=+1247.661925432" lastFinishedPulling="2026-01-27 09:14:26.534170998 +0000 UTC m=+1250.825265839" observedRunningTime="2026-01-27 09:14:27.346223809 +0000 UTC m=+1251.637318650" watchObservedRunningTime="2026-01-27 09:14:27.363224685 +0000 UTC m=+1251.654319526" Jan 27 09:14:27 crc kubenswrapper[4985]: I0127 09:14:27.378934 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.018095164 podStartE2EDuration="5.378912426s" podCreationTimestamp="2026-01-27 09:14:22 +0000 UTC" firstStartedPulling="2026-01-27 09:14:23.171167266 +0000 UTC m=+1247.462262107" lastFinishedPulling="2026-01-27 09:14:26.531984528 +0000 UTC m=+1250.823079369" observedRunningTime="2026-01-27 09:14:27.371209284 +0000 UTC m=+1251.662304125" watchObservedRunningTime="2026-01-27 09:14:27.378912426 +0000 UTC m=+1251.670007267" Jan 27 09:14:27 crc kubenswrapper[4985]: I0127 09:14:27.407394 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.400347147 podStartE2EDuration="5.407356626s" podCreationTimestamp="2026-01-27 09:14:22 +0000 UTC" 
firstStartedPulling="2026-01-27 09:14:23.526596854 +0000 UTC m=+1247.817691695" lastFinishedPulling="2026-01-27 09:14:26.533606333 +0000 UTC m=+1250.824701174" observedRunningTime="2026-01-27 09:14:27.398031869 +0000 UTC m=+1251.689126740" watchObservedRunningTime="2026-01-27 09:14:27.407356626 +0000 UTC m=+1251.698451487" Jan 27 09:14:27 crc kubenswrapper[4985]: I0127 09:14:27.424630 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.398702982 podStartE2EDuration="5.424606289s" podCreationTimestamp="2026-01-27 09:14:22 +0000 UTC" firstStartedPulling="2026-01-27 09:14:23.513027852 +0000 UTC m=+1247.804122693" lastFinishedPulling="2026-01-27 09:14:26.538931159 +0000 UTC m=+1250.830026000" observedRunningTime="2026-01-27 09:14:27.418213854 +0000 UTC m=+1251.709308705" watchObservedRunningTime="2026-01-27 09:14:27.424606289 +0000 UTC m=+1251.715701130" Jan 27 09:14:27 crc kubenswrapper[4985]: I0127 09:14:27.546821 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 09:14:27 crc kubenswrapper[4985]: I0127 09:14:27.546866 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 09:14:27 crc kubenswrapper[4985]: I0127 09:14:27.598322 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:14:27 crc kubenswrapper[4985]: I0127 09:14:27.653525 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 27 09:14:28 crc kubenswrapper[4985]: I0127 09:14:28.030111 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 09:14:28 crc kubenswrapper[4985]: I0127 09:14:28.176093 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/779d830e-4172-48f4-9631-c002b97f0ecb-combined-ca-bundle\") pod \"779d830e-4172-48f4-9631-c002b97f0ecb\" (UID: \"779d830e-4172-48f4-9631-c002b97f0ecb\") " Jan 27 09:14:28 crc kubenswrapper[4985]: I0127 09:14:28.176212 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/779d830e-4172-48f4-9631-c002b97f0ecb-config-data\") pod \"779d830e-4172-48f4-9631-c002b97f0ecb\" (UID: \"779d830e-4172-48f4-9631-c002b97f0ecb\") " Jan 27 09:14:28 crc kubenswrapper[4985]: I0127 09:14:28.176290 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/779d830e-4172-48f4-9631-c002b97f0ecb-logs\") pod \"779d830e-4172-48f4-9631-c002b97f0ecb\" (UID: \"779d830e-4172-48f4-9631-c002b97f0ecb\") " Jan 27 09:14:28 crc kubenswrapper[4985]: I0127 09:14:28.176346 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqx5h\" (UniqueName: \"kubernetes.io/projected/779d830e-4172-48f4-9631-c002b97f0ecb-kube-api-access-xqx5h\") pod \"779d830e-4172-48f4-9631-c002b97f0ecb\" (UID: \"779d830e-4172-48f4-9631-c002b97f0ecb\") " Jan 27 09:14:28 crc kubenswrapper[4985]: I0127 09:14:28.177164 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/779d830e-4172-48f4-9631-c002b97f0ecb-logs" (OuterVolumeSpecName: "logs") pod "779d830e-4172-48f4-9631-c002b97f0ecb" (UID: "779d830e-4172-48f4-9631-c002b97f0ecb"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:14:28 crc kubenswrapper[4985]: I0127 09:14:28.177588 4985 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/779d830e-4172-48f4-9631-c002b97f0ecb-logs\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:28 crc kubenswrapper[4985]: I0127 09:14:28.197990 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/779d830e-4172-48f4-9631-c002b97f0ecb-kube-api-access-xqx5h" (OuterVolumeSpecName: "kube-api-access-xqx5h") pod "779d830e-4172-48f4-9631-c002b97f0ecb" (UID: "779d830e-4172-48f4-9631-c002b97f0ecb"). InnerVolumeSpecName "kube-api-access-xqx5h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:14:28 crc kubenswrapper[4985]: I0127 09:14:28.207079 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/779d830e-4172-48f4-9631-c002b97f0ecb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "779d830e-4172-48f4-9631-c002b97f0ecb" (UID: "779d830e-4172-48f4-9631-c002b97f0ecb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:14:28 crc kubenswrapper[4985]: I0127 09:14:28.209864 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/779d830e-4172-48f4-9631-c002b97f0ecb-config-data" (OuterVolumeSpecName: "config-data") pod "779d830e-4172-48f4-9631-c002b97f0ecb" (UID: "779d830e-4172-48f4-9631-c002b97f0ecb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:14:28 crc kubenswrapper[4985]: I0127 09:14:28.279697 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/779d830e-4172-48f4-9631-c002b97f0ecb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:28 crc kubenswrapper[4985]: I0127 09:14:28.279971 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/779d830e-4172-48f4-9631-c002b97f0ecb-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:28 crc kubenswrapper[4985]: I0127 09:14:28.279982 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xqx5h\" (UniqueName: \"kubernetes.io/projected/779d830e-4172-48f4-9631-c002b97f0ecb-kube-api-access-xqx5h\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:28 crc kubenswrapper[4985]: I0127 09:14:28.357758 4985 generic.go:334] "Generic (PLEG): container finished" podID="779d830e-4172-48f4-9631-c002b97f0ecb" containerID="375ff64cc08cce0402e4b01c5c74fbb468979311c7ae166aad32ae3b1bc0c824" exitCode=0 Jan 27 09:14:28 crc kubenswrapper[4985]: I0127 09:14:28.357801 4985 generic.go:334] "Generic (PLEG): container finished" podID="779d830e-4172-48f4-9631-c002b97f0ecb" containerID="435550e326606b36d28d6029c6277ef8757a4a2246c41b7d9f577d22ef4ab491" exitCode=143 Jan 27 09:14:28 crc kubenswrapper[4985]: I0127 09:14:28.358769 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"779d830e-4172-48f4-9631-c002b97f0ecb","Type":"ContainerDied","Data":"375ff64cc08cce0402e4b01c5c74fbb468979311c7ae166aad32ae3b1bc0c824"} Jan 27 09:14:28 crc kubenswrapper[4985]: I0127 09:14:28.358834 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"779d830e-4172-48f4-9631-c002b97f0ecb","Type":"ContainerDied","Data":"435550e326606b36d28d6029c6277ef8757a4a2246c41b7d9f577d22ef4ab491"} Jan 27 09:14:28 crc 
kubenswrapper[4985]: I0127 09:14:28.358848 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"779d830e-4172-48f4-9631-c002b97f0ecb","Type":"ContainerDied","Data":"b2b04b5bb3c09e889bbafdc68abdbc65355eb0c75f7aaff495cb723371d82b5d"} Jan 27 09:14:28 crc kubenswrapper[4985]: I0127 09:14:28.358867 4985 scope.go:117] "RemoveContainer" containerID="375ff64cc08cce0402e4b01c5c74fbb468979311c7ae166aad32ae3b1bc0c824" Jan 27 09:14:28 crc kubenswrapper[4985]: I0127 09:14:28.359590 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 09:14:28 crc kubenswrapper[4985]: I0127 09:14:28.381795 4985 scope.go:117] "RemoveContainer" containerID="435550e326606b36d28d6029c6277ef8757a4a2246c41b7d9f577d22ef4ab491" Jan 27 09:14:28 crc kubenswrapper[4985]: I0127 09:14:28.406332 4985 scope.go:117] "RemoveContainer" containerID="375ff64cc08cce0402e4b01c5c74fbb468979311c7ae166aad32ae3b1bc0c824" Jan 27 09:14:28 crc kubenswrapper[4985]: E0127 09:14:28.406874 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"375ff64cc08cce0402e4b01c5c74fbb468979311c7ae166aad32ae3b1bc0c824\": container with ID starting with 375ff64cc08cce0402e4b01c5c74fbb468979311c7ae166aad32ae3b1bc0c824 not found: ID does not exist" containerID="375ff64cc08cce0402e4b01c5c74fbb468979311c7ae166aad32ae3b1bc0c824" Jan 27 09:14:28 crc kubenswrapper[4985]: I0127 09:14:28.406903 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"375ff64cc08cce0402e4b01c5c74fbb468979311c7ae166aad32ae3b1bc0c824"} err="failed to get container status \"375ff64cc08cce0402e4b01c5c74fbb468979311c7ae166aad32ae3b1bc0c824\": rpc error: code = NotFound desc = could not find container \"375ff64cc08cce0402e4b01c5c74fbb468979311c7ae166aad32ae3b1bc0c824\": container with ID starting with 
375ff64cc08cce0402e4b01c5c74fbb468979311c7ae166aad32ae3b1bc0c824 not found: ID does not exist" Jan 27 09:14:28 crc kubenswrapper[4985]: I0127 09:14:28.406924 4985 scope.go:117] "RemoveContainer" containerID="435550e326606b36d28d6029c6277ef8757a4a2246c41b7d9f577d22ef4ab491" Jan 27 09:14:28 crc kubenswrapper[4985]: E0127 09:14:28.407185 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"435550e326606b36d28d6029c6277ef8757a4a2246c41b7d9f577d22ef4ab491\": container with ID starting with 435550e326606b36d28d6029c6277ef8757a4a2246c41b7d9f577d22ef4ab491 not found: ID does not exist" containerID="435550e326606b36d28d6029c6277ef8757a4a2246c41b7d9f577d22ef4ab491" Jan 27 09:14:28 crc kubenswrapper[4985]: I0127 09:14:28.407213 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"435550e326606b36d28d6029c6277ef8757a4a2246c41b7d9f577d22ef4ab491"} err="failed to get container status \"435550e326606b36d28d6029c6277ef8757a4a2246c41b7d9f577d22ef4ab491\": rpc error: code = NotFound desc = could not find container \"435550e326606b36d28d6029c6277ef8757a4a2246c41b7d9f577d22ef4ab491\": container with ID starting with 435550e326606b36d28d6029c6277ef8757a4a2246c41b7d9f577d22ef4ab491 not found: ID does not exist" Jan 27 09:14:28 crc kubenswrapper[4985]: I0127 09:14:28.407230 4985 scope.go:117] "RemoveContainer" containerID="375ff64cc08cce0402e4b01c5c74fbb468979311c7ae166aad32ae3b1bc0c824" Jan 27 09:14:28 crc kubenswrapper[4985]: I0127 09:14:28.407873 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"375ff64cc08cce0402e4b01c5c74fbb468979311c7ae166aad32ae3b1bc0c824"} err="failed to get container status \"375ff64cc08cce0402e4b01c5c74fbb468979311c7ae166aad32ae3b1bc0c824\": rpc error: code = NotFound desc = could not find container \"375ff64cc08cce0402e4b01c5c74fbb468979311c7ae166aad32ae3b1bc0c824\": container with ID 
starting with 375ff64cc08cce0402e4b01c5c74fbb468979311c7ae166aad32ae3b1bc0c824 not found: ID does not exist" Jan 27 09:14:28 crc kubenswrapper[4985]: I0127 09:14:28.407922 4985 scope.go:117] "RemoveContainer" containerID="435550e326606b36d28d6029c6277ef8757a4a2246c41b7d9f577d22ef4ab491" Jan 27 09:14:28 crc kubenswrapper[4985]: I0127 09:14:28.408225 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"435550e326606b36d28d6029c6277ef8757a4a2246c41b7d9f577d22ef4ab491"} err="failed to get container status \"435550e326606b36d28d6029c6277ef8757a4a2246c41b7d9f577d22ef4ab491\": rpc error: code = NotFound desc = could not find container \"435550e326606b36d28d6029c6277ef8757a4a2246c41b7d9f577d22ef4ab491\": container with ID starting with 435550e326606b36d28d6029c6277ef8757a4a2246c41b7d9f577d22ef4ab491 not found: ID does not exist" Jan 27 09:14:28 crc kubenswrapper[4985]: I0127 09:14:28.410732 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 09:14:28 crc kubenswrapper[4985]: I0127 09:14:28.426914 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 09:14:28 crc kubenswrapper[4985]: I0127 09:14:28.434719 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 27 09:14:28 crc kubenswrapper[4985]: E0127 09:14:28.435166 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="779d830e-4172-48f4-9631-c002b97f0ecb" containerName="nova-metadata-metadata" Jan 27 09:14:28 crc kubenswrapper[4985]: I0127 09:14:28.435188 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="779d830e-4172-48f4-9631-c002b97f0ecb" containerName="nova-metadata-metadata" Jan 27 09:14:28 crc kubenswrapper[4985]: E0127 09:14:28.435224 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="779d830e-4172-48f4-9631-c002b97f0ecb" containerName="nova-metadata-log" Jan 27 09:14:28 crc kubenswrapper[4985]: I0127 
09:14:28.435238 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="779d830e-4172-48f4-9631-c002b97f0ecb" containerName="nova-metadata-log" Jan 27 09:14:28 crc kubenswrapper[4985]: I0127 09:14:28.435433 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="779d830e-4172-48f4-9631-c002b97f0ecb" containerName="nova-metadata-log" Jan 27 09:14:28 crc kubenswrapper[4985]: I0127 09:14:28.435461 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="779d830e-4172-48f4-9631-c002b97f0ecb" containerName="nova-metadata-metadata" Jan 27 09:14:28 crc kubenswrapper[4985]: I0127 09:14:28.436664 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 09:14:28 crc kubenswrapper[4985]: I0127 09:14:28.438813 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 27 09:14:28 crc kubenswrapper[4985]: I0127 09:14:28.440233 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 27 09:14:28 crc kubenswrapper[4985]: I0127 09:14:28.481849 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="779d830e-4172-48f4-9631-c002b97f0ecb" path="/var/lib/kubelet/pods/779d830e-4172-48f4-9631-c002b97f0ecb/volumes" Jan 27 09:14:28 crc kubenswrapper[4985]: I0127 09:14:28.482825 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 09:14:28 crc kubenswrapper[4985]: I0127 09:14:28.585050 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gb2k\" (UniqueName: \"kubernetes.io/projected/1304eb5b-330b-4480-99e8-4e0389cac214-kube-api-access-4gb2k\") pod \"nova-metadata-0\" (UID: \"1304eb5b-330b-4480-99e8-4e0389cac214\") " pod="openstack/nova-metadata-0" Jan 27 09:14:28 crc kubenswrapper[4985]: I0127 09:14:28.585509 4985 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1304eb5b-330b-4480-99e8-4e0389cac214-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1304eb5b-330b-4480-99e8-4e0389cac214\") " pod="openstack/nova-metadata-0" Jan 27 09:14:28 crc kubenswrapper[4985]: I0127 09:14:28.585833 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1304eb5b-330b-4480-99e8-4e0389cac214-config-data\") pod \"nova-metadata-0\" (UID: \"1304eb5b-330b-4480-99e8-4e0389cac214\") " pod="openstack/nova-metadata-0" Jan 27 09:14:28 crc kubenswrapper[4985]: I0127 09:14:28.585910 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1304eb5b-330b-4480-99e8-4e0389cac214-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"1304eb5b-330b-4480-99e8-4e0389cac214\") " pod="openstack/nova-metadata-0" Jan 27 09:14:28 crc kubenswrapper[4985]: I0127 09:14:28.586027 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1304eb5b-330b-4480-99e8-4e0389cac214-logs\") pod \"nova-metadata-0\" (UID: \"1304eb5b-330b-4480-99e8-4e0389cac214\") " pod="openstack/nova-metadata-0" Jan 27 09:14:28 crc kubenswrapper[4985]: I0127 09:14:28.687713 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1304eb5b-330b-4480-99e8-4e0389cac214-config-data\") pod \"nova-metadata-0\" (UID: \"1304eb5b-330b-4480-99e8-4e0389cac214\") " pod="openstack/nova-metadata-0" Jan 27 09:14:28 crc kubenswrapper[4985]: I0127 09:14:28.687760 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/1304eb5b-330b-4480-99e8-4e0389cac214-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"1304eb5b-330b-4480-99e8-4e0389cac214\") " pod="openstack/nova-metadata-0" Jan 27 09:14:28 crc kubenswrapper[4985]: I0127 09:14:28.687807 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1304eb5b-330b-4480-99e8-4e0389cac214-logs\") pod \"nova-metadata-0\" (UID: \"1304eb5b-330b-4480-99e8-4e0389cac214\") " pod="openstack/nova-metadata-0" Jan 27 09:14:28 crc kubenswrapper[4985]: I0127 09:14:28.687948 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gb2k\" (UniqueName: \"kubernetes.io/projected/1304eb5b-330b-4480-99e8-4e0389cac214-kube-api-access-4gb2k\") pod \"nova-metadata-0\" (UID: \"1304eb5b-330b-4480-99e8-4e0389cac214\") " pod="openstack/nova-metadata-0" Jan 27 09:14:28 crc kubenswrapper[4985]: I0127 09:14:28.688082 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1304eb5b-330b-4480-99e8-4e0389cac214-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1304eb5b-330b-4480-99e8-4e0389cac214\") " pod="openstack/nova-metadata-0" Jan 27 09:14:28 crc kubenswrapper[4985]: I0127 09:14:28.688465 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1304eb5b-330b-4480-99e8-4e0389cac214-logs\") pod \"nova-metadata-0\" (UID: \"1304eb5b-330b-4480-99e8-4e0389cac214\") " pod="openstack/nova-metadata-0" Jan 27 09:14:28 crc kubenswrapper[4985]: I0127 09:14:28.692784 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1304eb5b-330b-4480-99e8-4e0389cac214-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1304eb5b-330b-4480-99e8-4e0389cac214\") " pod="openstack/nova-metadata-0" Jan 27 09:14:28 crc 
kubenswrapper[4985]: I0127 09:14:28.695335 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1304eb5b-330b-4480-99e8-4e0389cac214-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"1304eb5b-330b-4480-99e8-4e0389cac214\") " pod="openstack/nova-metadata-0" Jan 27 09:14:28 crc kubenswrapper[4985]: I0127 09:14:28.704183 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1304eb5b-330b-4480-99e8-4e0389cac214-config-data\") pod \"nova-metadata-0\" (UID: \"1304eb5b-330b-4480-99e8-4e0389cac214\") " pod="openstack/nova-metadata-0" Jan 27 09:14:28 crc kubenswrapper[4985]: I0127 09:14:28.706777 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gb2k\" (UniqueName: \"kubernetes.io/projected/1304eb5b-330b-4480-99e8-4e0389cac214-kube-api-access-4gb2k\") pod \"nova-metadata-0\" (UID: \"1304eb5b-330b-4480-99e8-4e0389cac214\") " pod="openstack/nova-metadata-0" Jan 27 09:14:28 crc kubenswrapper[4985]: I0127 09:14:28.778205 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 09:14:29 crc kubenswrapper[4985]: I0127 09:14:29.271463 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 09:14:29 crc kubenswrapper[4985]: I0127 09:14:29.381128 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1304eb5b-330b-4480-99e8-4e0389cac214","Type":"ContainerStarted","Data":"06a40ee2908dca5d24e56e88d206a8826f5aebcbaf0eba19bdb8bfd6ccdadba9"} Jan 27 09:14:30 crc kubenswrapper[4985]: I0127 09:14:30.391378 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1304eb5b-330b-4480-99e8-4e0389cac214","Type":"ContainerStarted","Data":"314226e9be699d83b611bc65be452270c94688d7386812ec2c68656bd3eabeeb"} Jan 27 09:14:30 crc kubenswrapper[4985]: I0127 09:14:30.391780 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1304eb5b-330b-4480-99e8-4e0389cac214","Type":"ContainerStarted","Data":"a3ff962d5fae840285351775e6fc8acaa2294750764ef453c4d16182dd6315c5"} Jan 27 09:14:30 crc kubenswrapper[4985]: I0127 09:14:30.412620 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.412596967 podStartE2EDuration="2.412596967s" podCreationTimestamp="2026-01-27 09:14:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:14:30.408438742 +0000 UTC m=+1254.699533593" watchObservedRunningTime="2026-01-27 09:14:30.412596967 +0000 UTC m=+1254.703691818" Jan 27 09:14:30 crc kubenswrapper[4985]: E0127 09:14:30.986763 4985 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/6c8f1c4497520a424d7fed32a7d2c871b5e4e944cb2a3070ae0cebc4a0c4952f/diff" to get inode usage: stat 
/var/lib/containers/storage/overlay/6c8f1c4497520a424d7fed32a7d2c871b5e4e944cb2a3070ae0cebc4a0c4952f/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/openstack_swift-proxy-689489568f-6ggjw_3193865d-81a4-4cb6-baee-7f44246f4caa/proxy-httpd/0.log" to get inode usage: stat /var/log/pods/openstack_swift-proxy-689489568f-6ggjw_3193865d-81a4-4cb6-baee-7f44246f4caa/proxy-httpd/0.log: no such file or directory Jan 27 09:14:31 crc kubenswrapper[4985]: I0127 09:14:31.189599 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="f2548757-fd02-4c5a-9623-0b1148405dc9" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 27 09:14:31 crc kubenswrapper[4985]: E0127 09:14:31.333551 4985 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/94f0784ce07b099c50e2b354b9426b7faeabe3b2e9af185cca6723d3bc3e72ac/diff" to get inode usage: stat /var/lib/containers/storage/overlay/94f0784ce07b099c50e2b354b9426b7faeabe3b2e9af185cca6723d3bc3e72ac/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/openstack_swift-proxy-689489568f-6ggjw_3193865d-81a4-4cb6-baee-7f44246f4caa/proxy-server/0.log" to get inode usage: stat /var/log/pods/openstack_swift-proxy-689489568f-6ggjw_3193865d-81a4-4cb6-baee-7f44246f4caa/proxy-server/0.log: no such file or directory Jan 27 09:14:31 crc kubenswrapper[4985]: I0127 09:14:31.403624 4985 generic.go:334] "Generic (PLEG): container finished" podID="87b772fa-eb86-4e1a-8f59-bc3c1748ec07" containerID="99ea2c355c891717787ae12c5ef6ebffb92611dc1d13cfef5670b261a564babd" exitCode=0 Jan 27 09:14:31 crc kubenswrapper[4985]: I0127 09:14:31.403708 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-fhg5m" 
event={"ID":"87b772fa-eb86-4e1a-8f59-bc3c1748ec07","Type":"ContainerDied","Data":"99ea2c355c891717787ae12c5ef6ebffb92611dc1d13cfef5670b261a564babd"} Jan 27 09:14:32 crc kubenswrapper[4985]: I0127 09:14:32.414040 4985 generic.go:334] "Generic (PLEG): container finished" podID="f690b134-393f-40a7-b254-7b95dc81afcf" containerID="6a7573126217ca48110120fde894f2b6ad2ea692912d430e18f51c77e5a99b04" exitCode=0 Jan 27 09:14:32 crc kubenswrapper[4985]: I0127 09:14:32.414159 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-h542g" event={"ID":"f690b134-393f-40a7-b254-7b95dc81afcf","Type":"ContainerDied","Data":"6a7573126217ca48110120fde894f2b6ad2ea692912d430e18f51c77e5a99b04"} Jan 27 09:14:32 crc kubenswrapper[4985]: I0127 09:14:32.599776 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-647df7b8c5-hdgcj" Jan 27 09:14:32 crc kubenswrapper[4985]: I0127 09:14:32.636903 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 27 09:14:32 crc kubenswrapper[4985]: I0127 09:14:32.680073 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75dbb546bf-9qv57"] Jan 27 09:14:32 crc kubenswrapper[4985]: I0127 09:14:32.680427 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-75dbb546bf-9qv57" podUID="02c25b4e-dee3-4466-9d56-f74c18a36ba5" containerName="dnsmasq-dns" containerID="cri-o://196e80eb434f8e0cbf2de893f3de47726322d8c1e695e85a7ed4f23e64065a81" gracePeriod=10 Jan 27 09:14:32 crc kubenswrapper[4985]: I0127 09:14:32.707036 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 27 09:14:32 crc kubenswrapper[4985]: I0127 09:14:32.882985 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 09:14:32 crc kubenswrapper[4985]: I0127 09:14:32.883433 4985 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 09:14:32 crc kubenswrapper[4985]: I0127 09:14:32.912480 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-fhg5m" Jan 27 09:14:33 crc kubenswrapper[4985]: I0127 09:14:33.015252 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87b772fa-eb86-4e1a-8f59-bc3c1748ec07-combined-ca-bundle\") pod \"87b772fa-eb86-4e1a-8f59-bc3c1748ec07\" (UID: \"87b772fa-eb86-4e1a-8f59-bc3c1748ec07\") " Jan 27 09:14:33 crc kubenswrapper[4985]: I0127 09:14:33.015315 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4kh5\" (UniqueName: \"kubernetes.io/projected/87b772fa-eb86-4e1a-8f59-bc3c1748ec07-kube-api-access-k4kh5\") pod \"87b772fa-eb86-4e1a-8f59-bc3c1748ec07\" (UID: \"87b772fa-eb86-4e1a-8f59-bc3c1748ec07\") " Jan 27 09:14:33 crc kubenswrapper[4985]: I0127 09:14:33.015382 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87b772fa-eb86-4e1a-8f59-bc3c1748ec07-scripts\") pod \"87b772fa-eb86-4e1a-8f59-bc3c1748ec07\" (UID: \"87b772fa-eb86-4e1a-8f59-bc3c1748ec07\") " Jan 27 09:14:33 crc kubenswrapper[4985]: I0127 09:14:33.015586 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87b772fa-eb86-4e1a-8f59-bc3c1748ec07-config-data\") pod \"87b772fa-eb86-4e1a-8f59-bc3c1748ec07\" (UID: \"87b772fa-eb86-4e1a-8f59-bc3c1748ec07\") " Jan 27 09:14:33 crc kubenswrapper[4985]: I0127 09:14:33.022453 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87b772fa-eb86-4e1a-8f59-bc3c1748ec07-scripts" (OuterVolumeSpecName: "scripts") pod "87b772fa-eb86-4e1a-8f59-bc3c1748ec07" (UID: 
"87b772fa-eb86-4e1a-8f59-bc3c1748ec07"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:14:33 crc kubenswrapper[4985]: I0127 09:14:33.024834 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87b772fa-eb86-4e1a-8f59-bc3c1748ec07-kube-api-access-k4kh5" (OuterVolumeSpecName: "kube-api-access-k4kh5") pod "87b772fa-eb86-4e1a-8f59-bc3c1748ec07" (UID: "87b772fa-eb86-4e1a-8f59-bc3c1748ec07"). InnerVolumeSpecName "kube-api-access-k4kh5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:14:33 crc kubenswrapper[4985]: I0127 09:14:33.063240 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87b772fa-eb86-4e1a-8f59-bc3c1748ec07-config-data" (OuterVolumeSpecName: "config-data") pod "87b772fa-eb86-4e1a-8f59-bc3c1748ec07" (UID: "87b772fa-eb86-4e1a-8f59-bc3c1748ec07"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:14:33 crc kubenswrapper[4985]: I0127 09:14:33.064956 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87b772fa-eb86-4e1a-8f59-bc3c1748ec07-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "87b772fa-eb86-4e1a-8f59-bc3c1748ec07" (UID: "87b772fa-eb86-4e1a-8f59-bc3c1748ec07"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:14:33 crc kubenswrapper[4985]: I0127 09:14:33.117525 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87b772fa-eb86-4e1a-8f59-bc3c1748ec07-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:33 crc kubenswrapper[4985]: I0127 09:14:33.117567 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87b772fa-eb86-4e1a-8f59-bc3c1748ec07-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:33 crc kubenswrapper[4985]: I0127 09:14:33.117579 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k4kh5\" (UniqueName: \"kubernetes.io/projected/87b772fa-eb86-4e1a-8f59-bc3c1748ec07-kube-api-access-k4kh5\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:33 crc kubenswrapper[4985]: I0127 09:14:33.117589 4985 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87b772fa-eb86-4e1a-8f59-bc3c1748ec07-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:33 crc kubenswrapper[4985]: I0127 09:14:33.208126 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75dbb546bf-9qv57" Jan 27 09:14:33 crc kubenswrapper[4985]: I0127 09:14:33.321681 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/02c25b4e-dee3-4466-9d56-f74c18a36ba5-ovsdbserver-sb\") pod \"02c25b4e-dee3-4466-9d56-f74c18a36ba5\" (UID: \"02c25b4e-dee3-4466-9d56-f74c18a36ba5\") " Jan 27 09:14:33 crc kubenswrapper[4985]: I0127 09:14:33.321810 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-272jr\" (UniqueName: \"kubernetes.io/projected/02c25b4e-dee3-4466-9d56-f74c18a36ba5-kube-api-access-272jr\") pod \"02c25b4e-dee3-4466-9d56-f74c18a36ba5\" (UID: \"02c25b4e-dee3-4466-9d56-f74c18a36ba5\") " Jan 27 09:14:33 crc kubenswrapper[4985]: I0127 09:14:33.321993 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/02c25b4e-dee3-4466-9d56-f74c18a36ba5-ovsdbserver-nb\") pod \"02c25b4e-dee3-4466-9d56-f74c18a36ba5\" (UID: \"02c25b4e-dee3-4466-9d56-f74c18a36ba5\") " Jan 27 09:14:33 crc kubenswrapper[4985]: I0127 09:14:33.322028 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/02c25b4e-dee3-4466-9d56-f74c18a36ba5-dns-svc\") pod \"02c25b4e-dee3-4466-9d56-f74c18a36ba5\" (UID: \"02c25b4e-dee3-4466-9d56-f74c18a36ba5\") " Jan 27 09:14:33 crc kubenswrapper[4985]: I0127 09:14:33.322058 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02c25b4e-dee3-4466-9d56-f74c18a36ba5-config\") pod \"02c25b4e-dee3-4466-9d56-f74c18a36ba5\" (UID: \"02c25b4e-dee3-4466-9d56-f74c18a36ba5\") " Jan 27 09:14:33 crc kubenswrapper[4985]: I0127 09:14:33.322179 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/02c25b4e-dee3-4466-9d56-f74c18a36ba5-dns-swift-storage-0\") pod \"02c25b4e-dee3-4466-9d56-f74c18a36ba5\" (UID: \"02c25b4e-dee3-4466-9d56-f74c18a36ba5\") " Jan 27 09:14:33 crc kubenswrapper[4985]: I0127 09:14:33.326434 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02c25b4e-dee3-4466-9d56-f74c18a36ba5-kube-api-access-272jr" (OuterVolumeSpecName: "kube-api-access-272jr") pod "02c25b4e-dee3-4466-9d56-f74c18a36ba5" (UID: "02c25b4e-dee3-4466-9d56-f74c18a36ba5"). InnerVolumeSpecName "kube-api-access-272jr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:14:33 crc kubenswrapper[4985]: I0127 09:14:33.374692 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02c25b4e-dee3-4466-9d56-f74c18a36ba5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "02c25b4e-dee3-4466-9d56-f74c18a36ba5" (UID: "02c25b4e-dee3-4466-9d56-f74c18a36ba5"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:14:33 crc kubenswrapper[4985]: I0127 09:14:33.375765 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02c25b4e-dee3-4466-9d56-f74c18a36ba5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "02c25b4e-dee3-4466-9d56-f74c18a36ba5" (UID: "02c25b4e-dee3-4466-9d56-f74c18a36ba5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:14:33 crc kubenswrapper[4985]: I0127 09:14:33.376164 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02c25b4e-dee3-4466-9d56-f74c18a36ba5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "02c25b4e-dee3-4466-9d56-f74c18a36ba5" (UID: "02c25b4e-dee3-4466-9d56-f74c18a36ba5"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:14:33 crc kubenswrapper[4985]: I0127 09:14:33.379481 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02c25b4e-dee3-4466-9d56-f74c18a36ba5-config" (OuterVolumeSpecName: "config") pod "02c25b4e-dee3-4466-9d56-f74c18a36ba5" (UID: "02c25b4e-dee3-4466-9d56-f74c18a36ba5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:14:33 crc kubenswrapper[4985]: I0127 09:14:33.388468 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02c25b4e-dee3-4466-9d56-f74c18a36ba5-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "02c25b4e-dee3-4466-9d56-f74c18a36ba5" (UID: "02c25b4e-dee3-4466-9d56-f74c18a36ba5"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:14:33 crc kubenswrapper[4985]: I0127 09:14:33.426745 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-fhg5m" Jan 27 09:14:33 crc kubenswrapper[4985]: I0127 09:14:33.426756 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-fhg5m" event={"ID":"87b772fa-eb86-4e1a-8f59-bc3c1748ec07","Type":"ContainerDied","Data":"74d11416a8f8f59be450080f2db727e794336bb8c762edba16e8ce8d052d0179"} Jan 27 09:14:33 crc kubenswrapper[4985]: I0127 09:14:33.427059 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74d11416a8f8f59be450080f2db727e794336bb8c762edba16e8ce8d052d0179" Jan 27 09:14:33 crc kubenswrapper[4985]: I0127 09:14:33.427411 4985 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/02c25b4e-dee3-4466-9d56-f74c18a36ba5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:33 crc kubenswrapper[4985]: I0127 09:14:33.427497 4985 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/02c25b4e-dee3-4466-9d56-f74c18a36ba5-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:33 crc kubenswrapper[4985]: I0127 09:14:33.427535 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02c25b4e-dee3-4466-9d56-f74c18a36ba5-config\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:33 crc kubenswrapper[4985]: I0127 09:14:33.427551 4985 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/02c25b4e-dee3-4466-9d56-f74c18a36ba5-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:33 crc kubenswrapper[4985]: I0127 09:14:33.427600 4985 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/02c25b4e-dee3-4466-9d56-f74c18a36ba5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:33 crc kubenswrapper[4985]: I0127 09:14:33.427617 4985 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-272jr\" (UniqueName: \"kubernetes.io/projected/02c25b4e-dee3-4466-9d56-f74c18a36ba5-kube-api-access-272jr\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:33 crc kubenswrapper[4985]: I0127 09:14:33.429632 4985 generic.go:334] "Generic (PLEG): container finished" podID="02c25b4e-dee3-4466-9d56-f74c18a36ba5" containerID="196e80eb434f8e0cbf2de893f3de47726322d8c1e695e85a7ed4f23e64065a81" exitCode=0 Jan 27 09:14:33 crc kubenswrapper[4985]: I0127 09:14:33.429664 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75dbb546bf-9qv57" event={"ID":"02c25b4e-dee3-4466-9d56-f74c18a36ba5","Type":"ContainerDied","Data":"196e80eb434f8e0cbf2de893f3de47726322d8c1e695e85a7ed4f23e64065a81"} Jan 27 09:14:33 crc kubenswrapper[4985]: I0127 09:14:33.429680 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75dbb546bf-9qv57" Jan 27 09:14:33 crc kubenswrapper[4985]: I0127 09:14:33.429699 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75dbb546bf-9qv57" event={"ID":"02c25b4e-dee3-4466-9d56-f74c18a36ba5","Type":"ContainerDied","Data":"f246a351c83b7ea42022ebb2c49acea804af633b425a03605c245bd6f7a0ad95"} Jan 27 09:14:33 crc kubenswrapper[4985]: I0127 09:14:33.429718 4985 scope.go:117] "RemoveContainer" containerID="196e80eb434f8e0cbf2de893f3de47726322d8c1e695e85a7ed4f23e64065a81" Jan 27 09:14:33 crc kubenswrapper[4985]: I0127 09:14:33.482150 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 27 09:14:33 crc kubenswrapper[4985]: I0127 09:14:33.487775 4985 scope.go:117] "RemoveContainer" containerID="5dd6a1248190b55679fbb98c1769e89ec0104f3c2607dac8853ba6ba0504fbd5" Jan 27 09:14:33 crc kubenswrapper[4985]: I0127 09:14:33.506438 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75dbb546bf-9qv57"] Jan 27 09:14:33 crc 
kubenswrapper[4985]: I0127 09:14:33.528274 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-75dbb546bf-9qv57"] Jan 27 09:14:33 crc kubenswrapper[4985]: I0127 09:14:33.548733 4985 scope.go:117] "RemoveContainer" containerID="196e80eb434f8e0cbf2de893f3de47726322d8c1e695e85a7ed4f23e64065a81" Jan 27 09:14:33 crc kubenswrapper[4985]: E0127 09:14:33.549696 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"196e80eb434f8e0cbf2de893f3de47726322d8c1e695e85a7ed4f23e64065a81\": container with ID starting with 196e80eb434f8e0cbf2de893f3de47726322d8c1e695e85a7ed4f23e64065a81 not found: ID does not exist" containerID="196e80eb434f8e0cbf2de893f3de47726322d8c1e695e85a7ed4f23e64065a81" Jan 27 09:14:33 crc kubenswrapper[4985]: I0127 09:14:33.549749 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"196e80eb434f8e0cbf2de893f3de47726322d8c1e695e85a7ed4f23e64065a81"} err="failed to get container status \"196e80eb434f8e0cbf2de893f3de47726322d8c1e695e85a7ed4f23e64065a81\": rpc error: code = NotFound desc = could not find container \"196e80eb434f8e0cbf2de893f3de47726322d8c1e695e85a7ed4f23e64065a81\": container with ID starting with 196e80eb434f8e0cbf2de893f3de47726322d8c1e695e85a7ed4f23e64065a81 not found: ID does not exist" Jan 27 09:14:33 crc kubenswrapper[4985]: I0127 09:14:33.549779 4985 scope.go:117] "RemoveContainer" containerID="5dd6a1248190b55679fbb98c1769e89ec0104f3c2607dac8853ba6ba0504fbd5" Jan 27 09:14:33 crc kubenswrapper[4985]: E0127 09:14:33.550092 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5dd6a1248190b55679fbb98c1769e89ec0104f3c2607dac8853ba6ba0504fbd5\": container with ID starting with 5dd6a1248190b55679fbb98c1769e89ec0104f3c2607dac8853ba6ba0504fbd5 not found: ID does not exist" 
containerID="5dd6a1248190b55679fbb98c1769e89ec0104f3c2607dac8853ba6ba0504fbd5" Jan 27 09:14:33 crc kubenswrapper[4985]: I0127 09:14:33.550120 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5dd6a1248190b55679fbb98c1769e89ec0104f3c2607dac8853ba6ba0504fbd5"} err="failed to get container status \"5dd6a1248190b55679fbb98c1769e89ec0104f3c2607dac8853ba6ba0504fbd5\": rpc error: code = NotFound desc = could not find container \"5dd6a1248190b55679fbb98c1769e89ec0104f3c2607dac8853ba6ba0504fbd5\": container with ID starting with 5dd6a1248190b55679fbb98c1769e89ec0104f3c2607dac8853ba6ba0504fbd5 not found: ID does not exist" Jan 27 09:14:33 crc kubenswrapper[4985]: I0127 09:14:33.554287 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 27 09:14:33 crc kubenswrapper[4985]: E0127 09:14:33.554715 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02c25b4e-dee3-4466-9d56-f74c18a36ba5" containerName="init" Jan 27 09:14:33 crc kubenswrapper[4985]: I0127 09:14:33.554734 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="02c25b4e-dee3-4466-9d56-f74c18a36ba5" containerName="init" Jan 27 09:14:33 crc kubenswrapper[4985]: E0127 09:14:33.554753 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02c25b4e-dee3-4466-9d56-f74c18a36ba5" containerName="dnsmasq-dns" Jan 27 09:14:33 crc kubenswrapper[4985]: I0127 09:14:33.554762 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="02c25b4e-dee3-4466-9d56-f74c18a36ba5" containerName="dnsmasq-dns" Jan 27 09:14:33 crc kubenswrapper[4985]: E0127 09:14:33.554774 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87b772fa-eb86-4e1a-8f59-bc3c1748ec07" containerName="nova-cell1-conductor-db-sync" Jan 27 09:14:33 crc kubenswrapper[4985]: I0127 09:14:33.554780 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="87b772fa-eb86-4e1a-8f59-bc3c1748ec07" 
containerName="nova-cell1-conductor-db-sync" Jan 27 09:14:33 crc kubenswrapper[4985]: I0127 09:14:33.554970 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="02c25b4e-dee3-4466-9d56-f74c18a36ba5" containerName="dnsmasq-dns" Jan 27 09:14:33 crc kubenswrapper[4985]: I0127 09:14:33.554983 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="87b772fa-eb86-4e1a-8f59-bc3c1748ec07" containerName="nova-cell1-conductor-db-sync" Jan 27 09:14:33 crc kubenswrapper[4985]: I0127 09:14:33.555681 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 27 09:14:33 crc kubenswrapper[4985]: I0127 09:14:33.558995 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 27 09:14:33 crc kubenswrapper[4985]: I0127 09:14:33.581142 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 27 09:14:33 crc kubenswrapper[4985]: I0127 09:14:33.632613 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/116ef18b-261c-457e-a687-782db009b9de-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"116ef18b-261c-457e-a687-782db009b9de\") " pod="openstack/nova-cell1-conductor-0" Jan 27 09:14:33 crc kubenswrapper[4985]: I0127 09:14:33.632689 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjzfs\" (UniqueName: \"kubernetes.io/projected/116ef18b-261c-457e-a687-782db009b9de-kube-api-access-xjzfs\") pod \"nova-cell1-conductor-0\" (UID: \"116ef18b-261c-457e-a687-782db009b9de\") " pod="openstack/nova-cell1-conductor-0" Jan 27 09:14:33 crc kubenswrapper[4985]: I0127 09:14:33.632816 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/116ef18b-261c-457e-a687-782db009b9de-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"116ef18b-261c-457e-a687-782db009b9de\") " pod="openstack/nova-cell1-conductor-0" Jan 27 09:14:33 crc kubenswrapper[4985]: I0127 09:14:33.735018 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/116ef18b-261c-457e-a687-782db009b9de-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"116ef18b-261c-457e-a687-782db009b9de\") " pod="openstack/nova-cell1-conductor-0" Jan 27 09:14:33 crc kubenswrapper[4985]: I0127 09:14:33.735140 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/116ef18b-261c-457e-a687-782db009b9de-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"116ef18b-261c-457e-a687-782db009b9de\") " pod="openstack/nova-cell1-conductor-0" Jan 27 09:14:33 crc kubenswrapper[4985]: I0127 09:14:33.735201 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjzfs\" (UniqueName: \"kubernetes.io/projected/116ef18b-261c-457e-a687-782db009b9de-kube-api-access-xjzfs\") pod \"nova-cell1-conductor-0\" (UID: \"116ef18b-261c-457e-a687-782db009b9de\") " pod="openstack/nova-cell1-conductor-0" Jan 27 09:14:33 crc kubenswrapper[4985]: I0127 09:14:33.739684 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/116ef18b-261c-457e-a687-782db009b9de-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"116ef18b-261c-457e-a687-782db009b9de\") " pod="openstack/nova-cell1-conductor-0" Jan 27 09:14:33 crc kubenswrapper[4985]: I0127 09:14:33.769714 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/116ef18b-261c-457e-a687-782db009b9de-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: 
\"116ef18b-261c-457e-a687-782db009b9de\") " pod="openstack/nova-cell1-conductor-0" Jan 27 09:14:33 crc kubenswrapper[4985]: I0127 09:14:33.773335 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjzfs\" (UniqueName: \"kubernetes.io/projected/116ef18b-261c-457e-a687-782db009b9de-kube-api-access-xjzfs\") pod \"nova-cell1-conductor-0\" (UID: \"116ef18b-261c-457e-a687-782db009b9de\") " pod="openstack/nova-cell1-conductor-0" Jan 27 09:14:33 crc kubenswrapper[4985]: I0127 09:14:33.779700 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 09:14:33 crc kubenswrapper[4985]: I0127 09:14:33.780774 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 09:14:33 crc kubenswrapper[4985]: I0127 09:14:33.874931 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 27 09:14:33 crc kubenswrapper[4985]: I0127 09:14:33.935999 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-h542g" Jan 27 09:14:33 crc kubenswrapper[4985]: I0127 09:14:33.980672 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="7ac9bfd8-ec34-4938-b325-949459bf4876" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.200:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 09:14:33 crc kubenswrapper[4985]: I0127 09:14:33.980959 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="7ac9bfd8-ec34-4938-b325-949459bf4876" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.200:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 09:14:34 crc kubenswrapper[4985]: I0127 09:14:34.040319 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f690b134-393f-40a7-b254-7b95dc81afcf-scripts\") pod \"f690b134-393f-40a7-b254-7b95dc81afcf\" (UID: \"f690b134-393f-40a7-b254-7b95dc81afcf\") " Jan 27 09:14:34 crc kubenswrapper[4985]: I0127 09:14:34.040628 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f690b134-393f-40a7-b254-7b95dc81afcf-config-data\") pod \"f690b134-393f-40a7-b254-7b95dc81afcf\" (UID: \"f690b134-393f-40a7-b254-7b95dc81afcf\") " Jan 27 09:14:34 crc kubenswrapper[4985]: I0127 09:14:34.040802 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f690b134-393f-40a7-b254-7b95dc81afcf-combined-ca-bundle\") pod \"f690b134-393f-40a7-b254-7b95dc81afcf\" (UID: \"f690b134-393f-40a7-b254-7b95dc81afcf\") " Jan 27 09:14:34 crc kubenswrapper[4985]: I0127 09:14:34.041326 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-c54nc\" (UniqueName: \"kubernetes.io/projected/f690b134-393f-40a7-b254-7b95dc81afcf-kube-api-access-c54nc\") pod \"f690b134-393f-40a7-b254-7b95dc81afcf\" (UID: \"f690b134-393f-40a7-b254-7b95dc81afcf\") " Jan 27 09:14:34 crc kubenswrapper[4985]: I0127 09:14:34.053113 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f690b134-393f-40a7-b254-7b95dc81afcf-kube-api-access-c54nc" (OuterVolumeSpecName: "kube-api-access-c54nc") pod "f690b134-393f-40a7-b254-7b95dc81afcf" (UID: "f690b134-393f-40a7-b254-7b95dc81afcf"). InnerVolumeSpecName "kube-api-access-c54nc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:14:34 crc kubenswrapper[4985]: I0127 09:14:34.062079 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f690b134-393f-40a7-b254-7b95dc81afcf-scripts" (OuterVolumeSpecName: "scripts") pod "f690b134-393f-40a7-b254-7b95dc81afcf" (UID: "f690b134-393f-40a7-b254-7b95dc81afcf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:14:34 crc kubenswrapper[4985]: I0127 09:14:34.103763 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f690b134-393f-40a7-b254-7b95dc81afcf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f690b134-393f-40a7-b254-7b95dc81afcf" (UID: "f690b134-393f-40a7-b254-7b95dc81afcf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:14:34 crc kubenswrapper[4985]: I0127 09:14:34.124525 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f690b134-393f-40a7-b254-7b95dc81afcf-config-data" (OuterVolumeSpecName: "config-data") pod "f690b134-393f-40a7-b254-7b95dc81afcf" (UID: "f690b134-393f-40a7-b254-7b95dc81afcf"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:14:34 crc kubenswrapper[4985]: I0127 09:14:34.144971 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c54nc\" (UniqueName: \"kubernetes.io/projected/f690b134-393f-40a7-b254-7b95dc81afcf-kube-api-access-c54nc\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:34 crc kubenswrapper[4985]: I0127 09:14:34.145018 4985 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f690b134-393f-40a7-b254-7b95dc81afcf-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:34 crc kubenswrapper[4985]: I0127 09:14:34.145033 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f690b134-393f-40a7-b254-7b95dc81afcf-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:34 crc kubenswrapper[4985]: I0127 09:14:34.145046 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f690b134-393f-40a7-b254-7b95dc81afcf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:34 crc kubenswrapper[4985]: I0127 09:14:34.432865 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 27 09:14:34 crc kubenswrapper[4985]: I0127 09:14:34.445264 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-h542g" Jan 27 09:14:34 crc kubenswrapper[4985]: I0127 09:14:34.445291 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-h542g" event={"ID":"f690b134-393f-40a7-b254-7b95dc81afcf","Type":"ContainerDied","Data":"53d019d5fa1632e59f5aff39f202c4d0879480cd2295002f4665dc065dc547d2"} Jan 27 09:14:34 crc kubenswrapper[4985]: I0127 09:14:34.445387 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53d019d5fa1632e59f5aff39f202c4d0879480cd2295002f4665dc065dc547d2" Jan 27 09:14:34 crc kubenswrapper[4985]: I0127 09:14:34.472734 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02c25b4e-dee3-4466-9d56-f74c18a36ba5" path="/var/lib/kubelet/pods/02c25b4e-dee3-4466-9d56-f74c18a36ba5/volumes" Jan 27 09:14:34 crc kubenswrapper[4985]: I0127 09:14:34.566737 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 09:14:34 crc kubenswrapper[4985]: I0127 09:14:34.567142 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="7ac9bfd8-ec34-4938-b325-949459bf4876" containerName="nova-api-log" containerID="cri-o://528bc39758c1ed03e282630e819240cf408c855d33627e09fa58a29f282d7dd1" gracePeriod=30 Jan 27 09:14:34 crc kubenswrapper[4985]: I0127 09:14:34.568074 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="7ac9bfd8-ec34-4938-b325-949459bf4876" containerName="nova-api-api" containerID="cri-o://d0a76535a14cf92ddc5d142401811038f0d3605e8b8951847c791b98d40a5449" gracePeriod=30 Jan 27 09:14:34 crc kubenswrapper[4985]: I0127 09:14:34.638073 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 09:14:34 crc kubenswrapper[4985]: I0127 09:14:34.676953 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 
09:14:35 crc kubenswrapper[4985]: E0127 09:14:35.135983 4985 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/16e957c700d4ba770db57ca9ec8c6345baccf55c409da9413d2a6dc1b2fdbebe/diff" to get inode usage: stat /var/lib/containers/storage/overlay/16e957c700d4ba770db57ca9ec8c6345baccf55c409da9413d2a6dc1b2fdbebe/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/openstack_neutron-6798f6b777-jp82x_6aca7d18-9f0b-4c2e-aaef-39fb4d810616/neutron-api/0.log" to get inode usage: stat /var/log/pods/openstack_neutron-6798f6b777-jp82x_6aca7d18-9f0b-4c2e-aaef-39fb4d810616/neutron-api/0.log: no such file or directory Jan 27 09:14:35 crc kubenswrapper[4985]: I0127 09:14:35.455884 4985 generic.go:334] "Generic (PLEG): container finished" podID="7ac9bfd8-ec34-4938-b325-949459bf4876" containerID="528bc39758c1ed03e282630e819240cf408c855d33627e09fa58a29f282d7dd1" exitCode=143 Jan 27 09:14:35 crc kubenswrapper[4985]: I0127 09:14:35.455912 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7ac9bfd8-ec34-4938-b325-949459bf4876","Type":"ContainerDied","Data":"528bc39758c1ed03e282630e819240cf408c855d33627e09fa58a29f282d7dd1"} Jan 27 09:14:35 crc kubenswrapper[4985]: I0127 09:14:35.459013 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="342d32a2-6e30-42d4-9f54-8e1ab315ae53" containerName="nova-scheduler-scheduler" containerID="cri-o://159fa101ae0fded3d47b9cc5e0a54f49851e3c0439e204c1fd3cbfd32d60ffda" gracePeriod=30 Jan 27 09:14:35 crc kubenswrapper[4985]: I0127 09:14:35.460126 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"116ef18b-261c-457e-a687-782db009b9de","Type":"ContainerStarted","Data":"d98f7170e8ba3c4bfdcae8597c61d70d73bca409cc420c41664686ac83d6e6b4"} Jan 27 09:14:35 crc kubenswrapper[4985]: I0127 09:14:35.460171 4985 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 27 09:14:35 crc kubenswrapper[4985]: I0127 09:14:35.460185 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"116ef18b-261c-457e-a687-782db009b9de","Type":"ContainerStarted","Data":"0c16384d03b12bbb9d1b60af0575da964e0ef6e0545dd550f97c32e271b98403"} Jan 27 09:14:35 crc kubenswrapper[4985]: I0127 09:14:35.460334 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="1304eb5b-330b-4480-99e8-4e0389cac214" containerName="nova-metadata-log" containerID="cri-o://a3ff962d5fae840285351775e6fc8acaa2294750764ef453c4d16182dd6315c5" gracePeriod=30 Jan 27 09:14:35 crc kubenswrapper[4985]: I0127 09:14:35.460798 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="1304eb5b-330b-4480-99e8-4e0389cac214" containerName="nova-metadata-metadata" containerID="cri-o://314226e9be699d83b611bc65be452270c94688d7386812ec2c68656bd3eabeeb" gracePeriod=30 Jan 27 09:14:35 crc kubenswrapper[4985]: I0127 09:14:35.493088 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.4930690110000002 podStartE2EDuration="2.493069011s" podCreationTimestamp="2026-01-27 09:14:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:14:35.473892565 +0000 UTC m=+1259.764987406" watchObservedRunningTime="2026-01-27 09:14:35.493069011 +0000 UTC m=+1259.784163852" Jan 27 09:14:35 crc kubenswrapper[4985]: W0127 09:14:35.970506 4985 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf690b134_393f_40a7_b254_7b95dc81afcf.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch 
/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf690b134_393f_40a7_b254_7b95dc81afcf.slice: no such file or directory Jan 27 09:14:35 crc kubenswrapper[4985]: W0127 09:14:35.972887 4985 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod779d830e_4172_48f4_9631_c002b97f0ecb.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod779d830e_4172_48f4_9631_c002b97f0ecb.slice: no such file or directory Jan 27 09:14:35 crc kubenswrapper[4985]: W0127 09:14:35.973846 4985 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod87b772fa_eb86_4e1a_8f59_bc3c1748ec07.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod87b772fa_eb86_4e1a_8f59_bc3c1748ec07.slice: no such file or directory Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.150173 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.206914 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1304eb5b-330b-4480-99e8-4e0389cac214-logs\") pod \"1304eb5b-330b-4480-99e8-4e0389cac214\" (UID: \"1304eb5b-330b-4480-99e8-4e0389cac214\") " Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.207019 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1304eb5b-330b-4480-99e8-4e0389cac214-combined-ca-bundle\") pod \"1304eb5b-330b-4480-99e8-4e0389cac214\" (UID: \"1304eb5b-330b-4480-99e8-4e0389cac214\") " Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.207047 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1304eb5b-330b-4480-99e8-4e0389cac214-nova-metadata-tls-certs\") pod \"1304eb5b-330b-4480-99e8-4e0389cac214\" (UID: \"1304eb5b-330b-4480-99e8-4e0389cac214\") " Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.207109 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gb2k\" (UniqueName: \"kubernetes.io/projected/1304eb5b-330b-4480-99e8-4e0389cac214-kube-api-access-4gb2k\") pod \"1304eb5b-330b-4480-99e8-4e0389cac214\" (UID: \"1304eb5b-330b-4480-99e8-4e0389cac214\") " Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.207160 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1304eb5b-330b-4480-99e8-4e0389cac214-config-data\") pod \"1304eb5b-330b-4480-99e8-4e0389cac214\" (UID: \"1304eb5b-330b-4480-99e8-4e0389cac214\") " Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.208629 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/1304eb5b-330b-4480-99e8-4e0389cac214-logs" (OuterVolumeSpecName: "logs") pod "1304eb5b-330b-4480-99e8-4e0389cac214" (UID: "1304eb5b-330b-4480-99e8-4e0389cac214"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.212067 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-84f67698b-shkcs" Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.233376 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1304eb5b-330b-4480-99e8-4e0389cac214-kube-api-access-4gb2k" (OuterVolumeSpecName: "kube-api-access-4gb2k") pod "1304eb5b-330b-4480-99e8-4e0389cac214" (UID: "1304eb5b-330b-4480-99e8-4e0389cac214"). InnerVolumeSpecName "kube-api-access-4gb2k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.245322 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1304eb5b-330b-4480-99e8-4e0389cac214-config-data" (OuterVolumeSpecName: "config-data") pod "1304eb5b-330b-4480-99e8-4e0389cac214" (UID: "1304eb5b-330b-4480-99e8-4e0389cac214"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.251266 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1304eb5b-330b-4480-99e8-4e0389cac214-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1304eb5b-330b-4480-99e8-4e0389cac214" (UID: "1304eb5b-330b-4480-99e8-4e0389cac214"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.278942 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1304eb5b-330b-4480-99e8-4e0389cac214-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "1304eb5b-330b-4480-99e8-4e0389cac214" (UID: "1304eb5b-330b-4480-99e8-4e0389cac214"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.288593 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-84f67698b-shkcs" Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.310371 4985 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1304eb5b-330b-4480-99e8-4e0389cac214-logs\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.310417 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1304eb5b-330b-4480-99e8-4e0389cac214-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.310436 4985 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1304eb5b-330b-4480-99e8-4e0389cac214-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.310452 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4gb2k\" (UniqueName: \"kubernetes.io/projected/1304eb5b-330b-4480-99e8-4e0389cac214-kube-api-access-4gb2k\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.310464 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/1304eb5b-330b-4480-99e8-4e0389cac214-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.364722 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.371527 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-878b56798-5d5wm"] Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.371983 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-878b56798-5d5wm" podUID="58162a9a-ce9b-41af-a664-a360c97d40af" containerName="placement-log" containerID="cri-o://60714b166630259b4fe9959b4800cb8c86d01ddeb9ab678ec785befc6efa377b" gracePeriod=30 Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.372202 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-878b56798-5d5wm" podUID="58162a9a-ce9b-41af-a664-a360c97d40af" containerName="placement-api" containerID="cri-o://29b18a79448943c2f8137a0abf99e12f6721b4815f55d6a2f49ff6ab91b6b1d9" gracePeriod=30 Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.415886 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2548757-fd02-4c5a-9623-0b1148405dc9-run-httpd\") pod \"f2548757-fd02-4c5a-9623-0b1148405dc9\" (UID: \"f2548757-fd02-4c5a-9623-0b1148405dc9\") " Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.415967 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2548757-fd02-4c5a-9623-0b1148405dc9-scripts\") pod \"f2548757-fd02-4c5a-9623-0b1148405dc9\" (UID: \"f2548757-fd02-4c5a-9623-0b1148405dc9\") " Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.416022 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f2548757-fd02-4c5a-9623-0b1148405dc9-sg-core-conf-yaml\") pod \"f2548757-fd02-4c5a-9623-0b1148405dc9\" (UID: \"f2548757-fd02-4c5a-9623-0b1148405dc9\") " Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.416056 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2548757-fd02-4c5a-9623-0b1148405dc9-log-httpd\") pod \"f2548757-fd02-4c5a-9623-0b1148405dc9\" (UID: \"f2548757-fd02-4c5a-9623-0b1148405dc9\") " Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.416102 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2548757-fd02-4c5a-9623-0b1148405dc9-config-data\") pod \"f2548757-fd02-4c5a-9623-0b1148405dc9\" (UID: \"f2548757-fd02-4c5a-9623-0b1148405dc9\") " Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.416163 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h49b8\" (UniqueName: \"kubernetes.io/projected/f2548757-fd02-4c5a-9623-0b1148405dc9-kube-api-access-h49b8\") pod \"f2548757-fd02-4c5a-9623-0b1148405dc9\" (UID: \"f2548757-fd02-4c5a-9623-0b1148405dc9\") " Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.416208 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2548757-fd02-4c5a-9623-0b1148405dc9-combined-ca-bundle\") pod \"f2548757-fd02-4c5a-9623-0b1148405dc9\" (UID: \"f2548757-fd02-4c5a-9623-0b1148405dc9\") " Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.425030 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2548757-fd02-4c5a-9623-0b1148405dc9-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f2548757-fd02-4c5a-9623-0b1148405dc9" (UID: "f2548757-fd02-4c5a-9623-0b1148405dc9"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.426982 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2548757-fd02-4c5a-9623-0b1148405dc9-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f2548757-fd02-4c5a-9623-0b1148405dc9" (UID: "f2548757-fd02-4c5a-9623-0b1148405dc9"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.430342 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2548757-fd02-4c5a-9623-0b1148405dc9-kube-api-access-h49b8" (OuterVolumeSpecName: "kube-api-access-h49b8") pod "f2548757-fd02-4c5a-9623-0b1148405dc9" (UID: "f2548757-fd02-4c5a-9623-0b1148405dc9"). InnerVolumeSpecName "kube-api-access-h49b8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.442641 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2548757-fd02-4c5a-9623-0b1148405dc9-scripts" (OuterVolumeSpecName: "scripts") pod "f2548757-fd02-4c5a-9623-0b1148405dc9" (UID: "f2548757-fd02-4c5a-9623-0b1148405dc9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.449232 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2548757-fd02-4c5a-9623-0b1148405dc9-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f2548757-fd02-4c5a-9623-0b1148405dc9" (UID: "f2548757-fd02-4c5a-9623-0b1148405dc9"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.482535 4985 generic.go:334] "Generic (PLEG): container finished" podID="1304eb5b-330b-4480-99e8-4e0389cac214" containerID="314226e9be699d83b611bc65be452270c94688d7386812ec2c68656bd3eabeeb" exitCode=0 Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.482573 4985 generic.go:334] "Generic (PLEG): container finished" podID="1304eb5b-330b-4480-99e8-4e0389cac214" containerID="a3ff962d5fae840285351775e6fc8acaa2294750764ef453c4d16182dd6315c5" exitCode=143 Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.482646 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1304eb5b-330b-4480-99e8-4e0389cac214","Type":"ContainerDied","Data":"314226e9be699d83b611bc65be452270c94688d7386812ec2c68656bd3eabeeb"} Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.482706 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1304eb5b-330b-4480-99e8-4e0389cac214","Type":"ContainerDied","Data":"a3ff962d5fae840285351775e6fc8acaa2294750764ef453c4d16182dd6315c5"} Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.482722 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1304eb5b-330b-4480-99e8-4e0389cac214","Type":"ContainerDied","Data":"06a40ee2908dca5d24e56e88d206a8826f5aebcbaf0eba19bdb8bfd6ccdadba9"} Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.482739 4985 scope.go:117] "RemoveContainer" containerID="314226e9be699d83b611bc65be452270c94688d7386812ec2c68656bd3eabeeb" Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.482938 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.503364 4985 generic.go:334] "Generic (PLEG): container finished" podID="f2548757-fd02-4c5a-9623-0b1148405dc9" containerID="018ff50df1a8da325a2f321cd3f0eb9ed962cc8efff0d1c891f883279e57dee8" exitCode=137 Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.504270 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2548757-fd02-4c5a-9623-0b1148405dc9","Type":"ContainerDied","Data":"018ff50df1a8da325a2f321cd3f0eb9ed962cc8efff0d1c891f883279e57dee8"} Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.504336 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2548757-fd02-4c5a-9623-0b1148405dc9","Type":"ContainerDied","Data":"bff79c0552c944d46f48f254532bb01f0f90751b03fed3e8828cdf836253af0e"} Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.504581 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.518786 4985 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2548757-fd02-4c5a-9623-0b1148405dc9-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.518819 4985 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2548757-fd02-4c5a-9623-0b1148405dc9-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.518829 4985 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f2548757-fd02-4c5a-9623-0b1148405dc9-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.518842 4985 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2548757-fd02-4c5a-9623-0b1148405dc9-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.518855 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h49b8\" (UniqueName: \"kubernetes.io/projected/f2548757-fd02-4c5a-9623-0b1148405dc9-kube-api-access-h49b8\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.525192 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2548757-fd02-4c5a-9623-0b1148405dc9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f2548757-fd02-4c5a-9623-0b1148405dc9" (UID: "f2548757-fd02-4c5a-9623-0b1148405dc9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.551379 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.551881 4985 scope.go:117] "RemoveContainer" containerID="a3ff962d5fae840285351775e6fc8acaa2294750764ef453c4d16182dd6315c5" Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.575436 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2548757-fd02-4c5a-9623-0b1148405dc9-config-data" (OuterVolumeSpecName: "config-data") pod "f2548757-fd02-4c5a-9623-0b1148405dc9" (UID: "f2548757-fd02-4c5a-9623-0b1148405dc9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.578971 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.586335 4985 scope.go:117] "RemoveContainer" containerID="314226e9be699d83b611bc65be452270c94688d7386812ec2c68656bd3eabeeb" Jan 27 09:14:36 crc kubenswrapper[4985]: E0127 09:14:36.586882 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"314226e9be699d83b611bc65be452270c94688d7386812ec2c68656bd3eabeeb\": container with ID starting with 314226e9be699d83b611bc65be452270c94688d7386812ec2c68656bd3eabeeb not found: ID does not exist" containerID="314226e9be699d83b611bc65be452270c94688d7386812ec2c68656bd3eabeeb" Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.587033 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"314226e9be699d83b611bc65be452270c94688d7386812ec2c68656bd3eabeeb"} err="failed to get container status \"314226e9be699d83b611bc65be452270c94688d7386812ec2c68656bd3eabeeb\": rpc error: code = NotFound desc = could not find 
container \"314226e9be699d83b611bc65be452270c94688d7386812ec2c68656bd3eabeeb\": container with ID starting with 314226e9be699d83b611bc65be452270c94688d7386812ec2c68656bd3eabeeb not found: ID does not exist" Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.587097 4985 scope.go:117] "RemoveContainer" containerID="a3ff962d5fae840285351775e6fc8acaa2294750764ef453c4d16182dd6315c5" Jan 27 09:14:36 crc kubenswrapper[4985]: E0127 09:14:36.589176 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3ff962d5fae840285351775e6fc8acaa2294750764ef453c4d16182dd6315c5\": container with ID starting with a3ff962d5fae840285351775e6fc8acaa2294750764ef453c4d16182dd6315c5 not found: ID does not exist" containerID="a3ff962d5fae840285351775e6fc8acaa2294750764ef453c4d16182dd6315c5" Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.589231 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3ff962d5fae840285351775e6fc8acaa2294750764ef453c4d16182dd6315c5"} err="failed to get container status \"a3ff962d5fae840285351775e6fc8acaa2294750764ef453c4d16182dd6315c5\": rpc error: code = NotFound desc = could not find container \"a3ff962d5fae840285351775e6fc8acaa2294750764ef453c4d16182dd6315c5\": container with ID starting with a3ff962d5fae840285351775e6fc8acaa2294750764ef453c4d16182dd6315c5 not found: ID does not exist" Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.589277 4985 scope.go:117] "RemoveContainer" containerID="314226e9be699d83b611bc65be452270c94688d7386812ec2c68656bd3eabeeb" Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.589674 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"314226e9be699d83b611bc65be452270c94688d7386812ec2c68656bd3eabeeb"} err="failed to get container status \"314226e9be699d83b611bc65be452270c94688d7386812ec2c68656bd3eabeeb\": rpc error: code = NotFound desc = could 
not find container \"314226e9be699d83b611bc65be452270c94688d7386812ec2c68656bd3eabeeb\": container with ID starting with 314226e9be699d83b611bc65be452270c94688d7386812ec2c68656bd3eabeeb not found: ID does not exist" Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.589706 4985 scope.go:117] "RemoveContainer" containerID="a3ff962d5fae840285351775e6fc8acaa2294750764ef453c4d16182dd6315c5" Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.590206 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3ff962d5fae840285351775e6fc8acaa2294750764ef453c4d16182dd6315c5"} err="failed to get container status \"a3ff962d5fae840285351775e6fc8acaa2294750764ef453c4d16182dd6315c5\": rpc error: code = NotFound desc = could not find container \"a3ff962d5fae840285351775e6fc8acaa2294750764ef453c4d16182dd6315c5\": container with ID starting with a3ff962d5fae840285351775e6fc8acaa2294750764ef453c4d16182dd6315c5 not found: ID does not exist" Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.590233 4985 scope.go:117] "RemoveContainer" containerID="018ff50df1a8da325a2f321cd3f0eb9ed962cc8efff0d1c891f883279e57dee8" Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.602811 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 27 09:14:36 crc kubenswrapper[4985]: E0127 09:14:36.603372 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2548757-fd02-4c5a-9623-0b1148405dc9" containerName="sg-core" Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.603391 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2548757-fd02-4c5a-9623-0b1148405dc9" containerName="sg-core" Jan 27 09:14:36 crc kubenswrapper[4985]: E0127 09:14:36.603416 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2548757-fd02-4c5a-9623-0b1148405dc9" containerName="proxy-httpd" Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.603427 4985 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="f2548757-fd02-4c5a-9623-0b1148405dc9" containerName="proxy-httpd" Jan 27 09:14:36 crc kubenswrapper[4985]: E0127 09:14:36.603446 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1304eb5b-330b-4480-99e8-4e0389cac214" containerName="nova-metadata-log" Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.603454 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="1304eb5b-330b-4480-99e8-4e0389cac214" containerName="nova-metadata-log" Jan 27 09:14:36 crc kubenswrapper[4985]: E0127 09:14:36.603466 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2548757-fd02-4c5a-9623-0b1148405dc9" containerName="ceilometer-notification-agent" Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.603475 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2548757-fd02-4c5a-9623-0b1148405dc9" containerName="ceilometer-notification-agent" Jan 27 09:14:36 crc kubenswrapper[4985]: E0127 09:14:36.603494 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2548757-fd02-4c5a-9623-0b1148405dc9" containerName="ceilometer-central-agent" Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.603504 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2548757-fd02-4c5a-9623-0b1148405dc9" containerName="ceilometer-central-agent" Jan 27 09:14:36 crc kubenswrapper[4985]: E0127 09:14:36.603564 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1304eb5b-330b-4480-99e8-4e0389cac214" containerName="nova-metadata-metadata" Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.603572 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="1304eb5b-330b-4480-99e8-4e0389cac214" containerName="nova-metadata-metadata" Jan 27 09:14:36 crc kubenswrapper[4985]: E0127 09:14:36.603620 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f690b134-393f-40a7-b254-7b95dc81afcf" containerName="nova-manage" Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 
09:14:36.603627 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="f690b134-393f-40a7-b254-7b95dc81afcf" containerName="nova-manage" Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.603859 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="f690b134-393f-40a7-b254-7b95dc81afcf" containerName="nova-manage" Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.603883 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="1304eb5b-330b-4480-99e8-4e0389cac214" containerName="nova-metadata-metadata" Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.603900 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2548757-fd02-4c5a-9623-0b1148405dc9" containerName="ceilometer-notification-agent" Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.603912 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2548757-fd02-4c5a-9623-0b1148405dc9" containerName="proxy-httpd" Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.603925 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2548757-fd02-4c5a-9623-0b1148405dc9" containerName="ceilometer-central-agent" Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.603941 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="1304eb5b-330b-4480-99e8-4e0389cac214" containerName="nova-metadata-log" Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.603956 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2548757-fd02-4c5a-9623-0b1148405dc9" containerName="sg-core" Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.605443 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.608115 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.608464 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.613200 4985 scope.go:117] "RemoveContainer" containerID="0c35d836a38ab3c2a3d5010a228c5e71358bb0b7fa3001c8e7960fd7cce6534c" Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.613565 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.620978 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8169c55-2bdc-44a1-b0ea-6ceef864c34e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e8169c55-2bdc-44a1-b0ea-6ceef864c34e\") " pod="openstack/nova-metadata-0" Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.621068 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8169c55-2bdc-44a1-b0ea-6ceef864c34e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e8169c55-2bdc-44a1-b0ea-6ceef864c34e\") " pod="openstack/nova-metadata-0" Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.621122 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5s5j\" (UniqueName: \"kubernetes.io/projected/e8169c55-2bdc-44a1-b0ea-6ceef864c34e-kube-api-access-r5s5j\") pod \"nova-metadata-0\" (UID: \"e8169c55-2bdc-44a1-b0ea-6ceef864c34e\") " pod="openstack/nova-metadata-0" Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.621210 4985 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8169c55-2bdc-44a1-b0ea-6ceef864c34e-config-data\") pod \"nova-metadata-0\" (UID: \"e8169c55-2bdc-44a1-b0ea-6ceef864c34e\") " pod="openstack/nova-metadata-0" Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.621238 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e8169c55-2bdc-44a1-b0ea-6ceef864c34e-logs\") pod \"nova-metadata-0\" (UID: \"e8169c55-2bdc-44a1-b0ea-6ceef864c34e\") " pod="openstack/nova-metadata-0" Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.621290 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2548757-fd02-4c5a-9623-0b1148405dc9-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.621304 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2548757-fd02-4c5a-9623-0b1148405dc9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.643066 4985 scope.go:117] "RemoveContainer" containerID="42e35bff8fdccceaadc732a11fa0acf1c045ab3e220397def9ab3cab466646fa" Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.675435 4985 scope.go:117] "RemoveContainer" containerID="3df23e6d595b9fe13d34edf998e02c12278353678a0c975beedcd710e25bcf0a" Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.702134 4985 scope.go:117] "RemoveContainer" containerID="018ff50df1a8da325a2f321cd3f0eb9ed962cc8efff0d1c891f883279e57dee8" Jan 27 09:14:36 crc kubenswrapper[4985]: E0127 09:14:36.702629 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"018ff50df1a8da325a2f321cd3f0eb9ed962cc8efff0d1c891f883279e57dee8\": 
container with ID starting with 018ff50df1a8da325a2f321cd3f0eb9ed962cc8efff0d1c891f883279e57dee8 not found: ID does not exist" containerID="018ff50df1a8da325a2f321cd3f0eb9ed962cc8efff0d1c891f883279e57dee8" Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.702660 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"018ff50df1a8da325a2f321cd3f0eb9ed962cc8efff0d1c891f883279e57dee8"} err="failed to get container status \"018ff50df1a8da325a2f321cd3f0eb9ed962cc8efff0d1c891f883279e57dee8\": rpc error: code = NotFound desc = could not find container \"018ff50df1a8da325a2f321cd3f0eb9ed962cc8efff0d1c891f883279e57dee8\": container with ID starting with 018ff50df1a8da325a2f321cd3f0eb9ed962cc8efff0d1c891f883279e57dee8 not found: ID does not exist" Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.702681 4985 scope.go:117] "RemoveContainer" containerID="0c35d836a38ab3c2a3d5010a228c5e71358bb0b7fa3001c8e7960fd7cce6534c" Jan 27 09:14:36 crc kubenswrapper[4985]: E0127 09:14:36.703042 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c35d836a38ab3c2a3d5010a228c5e71358bb0b7fa3001c8e7960fd7cce6534c\": container with ID starting with 0c35d836a38ab3c2a3d5010a228c5e71358bb0b7fa3001c8e7960fd7cce6534c not found: ID does not exist" containerID="0c35d836a38ab3c2a3d5010a228c5e71358bb0b7fa3001c8e7960fd7cce6534c" Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.703062 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c35d836a38ab3c2a3d5010a228c5e71358bb0b7fa3001c8e7960fd7cce6534c"} err="failed to get container status \"0c35d836a38ab3c2a3d5010a228c5e71358bb0b7fa3001c8e7960fd7cce6534c\": rpc error: code = NotFound desc = could not find container \"0c35d836a38ab3c2a3d5010a228c5e71358bb0b7fa3001c8e7960fd7cce6534c\": container with ID starting with 
0c35d836a38ab3c2a3d5010a228c5e71358bb0b7fa3001c8e7960fd7cce6534c not found: ID does not exist" Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.703094 4985 scope.go:117] "RemoveContainer" containerID="42e35bff8fdccceaadc732a11fa0acf1c045ab3e220397def9ab3cab466646fa" Jan 27 09:14:36 crc kubenswrapper[4985]: E0127 09:14:36.703449 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42e35bff8fdccceaadc732a11fa0acf1c045ab3e220397def9ab3cab466646fa\": container with ID starting with 42e35bff8fdccceaadc732a11fa0acf1c045ab3e220397def9ab3cab466646fa not found: ID does not exist" containerID="42e35bff8fdccceaadc732a11fa0acf1c045ab3e220397def9ab3cab466646fa" Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.703497 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42e35bff8fdccceaadc732a11fa0acf1c045ab3e220397def9ab3cab466646fa"} err="failed to get container status \"42e35bff8fdccceaadc732a11fa0acf1c045ab3e220397def9ab3cab466646fa\": rpc error: code = NotFound desc = could not find container \"42e35bff8fdccceaadc732a11fa0acf1c045ab3e220397def9ab3cab466646fa\": container with ID starting with 42e35bff8fdccceaadc732a11fa0acf1c045ab3e220397def9ab3cab466646fa not found: ID does not exist" Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.703548 4985 scope.go:117] "RemoveContainer" containerID="3df23e6d595b9fe13d34edf998e02c12278353678a0c975beedcd710e25bcf0a" Jan 27 09:14:36 crc kubenswrapper[4985]: E0127 09:14:36.704106 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3df23e6d595b9fe13d34edf998e02c12278353678a0c975beedcd710e25bcf0a\": container with ID starting with 3df23e6d595b9fe13d34edf998e02c12278353678a0c975beedcd710e25bcf0a not found: ID does not exist" containerID="3df23e6d595b9fe13d34edf998e02c12278353678a0c975beedcd710e25bcf0a" Jan 27 09:14:36 crc 
kubenswrapper[4985]: I0127 09:14:36.704165 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3df23e6d595b9fe13d34edf998e02c12278353678a0c975beedcd710e25bcf0a"} err="failed to get container status \"3df23e6d595b9fe13d34edf998e02c12278353678a0c975beedcd710e25bcf0a\": rpc error: code = NotFound desc = could not find container \"3df23e6d595b9fe13d34edf998e02c12278353678a0c975beedcd710e25bcf0a\": container with ID starting with 3df23e6d595b9fe13d34edf998e02c12278353678a0c975beedcd710e25bcf0a not found: ID does not exist" Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.723366 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5s5j\" (UniqueName: \"kubernetes.io/projected/e8169c55-2bdc-44a1-b0ea-6ceef864c34e-kube-api-access-r5s5j\") pod \"nova-metadata-0\" (UID: \"e8169c55-2bdc-44a1-b0ea-6ceef864c34e\") " pod="openstack/nova-metadata-0" Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.723590 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8169c55-2bdc-44a1-b0ea-6ceef864c34e-config-data\") pod \"nova-metadata-0\" (UID: \"e8169c55-2bdc-44a1-b0ea-6ceef864c34e\") " pod="openstack/nova-metadata-0" Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.723629 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e8169c55-2bdc-44a1-b0ea-6ceef864c34e-logs\") pod \"nova-metadata-0\" (UID: \"e8169c55-2bdc-44a1-b0ea-6ceef864c34e\") " pod="openstack/nova-metadata-0" Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.723695 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8169c55-2bdc-44a1-b0ea-6ceef864c34e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e8169c55-2bdc-44a1-b0ea-6ceef864c34e\") " 
pod="openstack/nova-metadata-0" Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.724083 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e8169c55-2bdc-44a1-b0ea-6ceef864c34e-logs\") pod \"nova-metadata-0\" (UID: \"e8169c55-2bdc-44a1-b0ea-6ceef864c34e\") " pod="openstack/nova-metadata-0" Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.725685 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8169c55-2bdc-44a1-b0ea-6ceef864c34e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e8169c55-2bdc-44a1-b0ea-6ceef864c34e\") " pod="openstack/nova-metadata-0" Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.726744 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8169c55-2bdc-44a1-b0ea-6ceef864c34e-config-data\") pod \"nova-metadata-0\" (UID: \"e8169c55-2bdc-44a1-b0ea-6ceef864c34e\") " pod="openstack/nova-metadata-0" Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.726935 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8169c55-2bdc-44a1-b0ea-6ceef864c34e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e8169c55-2bdc-44a1-b0ea-6ceef864c34e\") " pod="openstack/nova-metadata-0" Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.729963 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8169c55-2bdc-44a1-b0ea-6ceef864c34e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e8169c55-2bdc-44a1-b0ea-6ceef864c34e\") " pod="openstack/nova-metadata-0" Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.739672 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5s5j\" (UniqueName: 
\"kubernetes.io/projected/e8169c55-2bdc-44a1-b0ea-6ceef864c34e-kube-api-access-r5s5j\") pod \"nova-metadata-0\" (UID: \"e8169c55-2bdc-44a1-b0ea-6ceef864c34e\") " pod="openstack/nova-metadata-0" Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.839060 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.869623 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.889462 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.892088 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.896227 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.896310 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.900471 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.928741 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1f02831a-c9ec-41ba-aabb-ac9557d82899-run-httpd\") pod \"ceilometer-0\" (UID: \"1f02831a-c9ec-41ba-aabb-ac9557d82899\") " pod="openstack/ceilometer-0" Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.928818 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f02831a-c9ec-41ba-aabb-ac9557d82899-scripts\") pod \"ceilometer-0\" (UID: \"1f02831a-c9ec-41ba-aabb-ac9557d82899\") " 
pod="openstack/ceilometer-0" Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.928967 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1f02831a-c9ec-41ba-aabb-ac9557d82899-log-httpd\") pod \"ceilometer-0\" (UID: \"1f02831a-c9ec-41ba-aabb-ac9557d82899\") " pod="openstack/ceilometer-0" Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.928999 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnfqh\" (UniqueName: \"kubernetes.io/projected/1f02831a-c9ec-41ba-aabb-ac9557d82899-kube-api-access-gnfqh\") pod \"ceilometer-0\" (UID: \"1f02831a-c9ec-41ba-aabb-ac9557d82899\") " pod="openstack/ceilometer-0" Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.929206 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f02831a-c9ec-41ba-aabb-ac9557d82899-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1f02831a-c9ec-41ba-aabb-ac9557d82899\") " pod="openstack/ceilometer-0" Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.929284 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1f02831a-c9ec-41ba-aabb-ac9557d82899-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1f02831a-c9ec-41ba-aabb-ac9557d82899\") " pod="openstack/ceilometer-0" Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.929320 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f02831a-c9ec-41ba-aabb-ac9557d82899-config-data\") pod \"ceilometer-0\" (UID: \"1f02831a-c9ec-41ba-aabb-ac9557d82899\") " pod="openstack/ceilometer-0" Jan 27 09:14:36 crc kubenswrapper[4985]: I0127 09:14:36.942317 4985 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 09:14:37 crc kubenswrapper[4985]: I0127 09:14:37.031251 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f02831a-c9ec-41ba-aabb-ac9557d82899-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1f02831a-c9ec-41ba-aabb-ac9557d82899\") " pod="openstack/ceilometer-0" Jan 27 09:14:37 crc kubenswrapper[4985]: I0127 09:14:37.031546 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1f02831a-c9ec-41ba-aabb-ac9557d82899-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1f02831a-c9ec-41ba-aabb-ac9557d82899\") " pod="openstack/ceilometer-0" Jan 27 09:14:37 crc kubenswrapper[4985]: I0127 09:14:37.031576 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f02831a-c9ec-41ba-aabb-ac9557d82899-config-data\") pod \"ceilometer-0\" (UID: \"1f02831a-c9ec-41ba-aabb-ac9557d82899\") " pod="openstack/ceilometer-0" Jan 27 09:14:37 crc kubenswrapper[4985]: I0127 09:14:37.031638 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1f02831a-c9ec-41ba-aabb-ac9557d82899-run-httpd\") pod \"ceilometer-0\" (UID: \"1f02831a-c9ec-41ba-aabb-ac9557d82899\") " pod="openstack/ceilometer-0" Jan 27 09:14:37 crc kubenswrapper[4985]: I0127 09:14:37.031720 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f02831a-c9ec-41ba-aabb-ac9557d82899-scripts\") pod \"ceilometer-0\" (UID: \"1f02831a-c9ec-41ba-aabb-ac9557d82899\") " pod="openstack/ceilometer-0" Jan 27 09:14:37 crc kubenswrapper[4985]: I0127 09:14:37.032963 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1f02831a-c9ec-41ba-aabb-ac9557d82899-log-httpd\") pod \"ceilometer-0\" (UID: \"1f02831a-c9ec-41ba-aabb-ac9557d82899\") " pod="openstack/ceilometer-0" Jan 27 09:14:37 crc kubenswrapper[4985]: I0127 09:14:37.032986 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnfqh\" (UniqueName: \"kubernetes.io/projected/1f02831a-c9ec-41ba-aabb-ac9557d82899-kube-api-access-gnfqh\") pod \"ceilometer-0\" (UID: \"1f02831a-c9ec-41ba-aabb-ac9557d82899\") " pod="openstack/ceilometer-0" Jan 27 09:14:37 crc kubenswrapper[4985]: I0127 09:14:37.033708 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1f02831a-c9ec-41ba-aabb-ac9557d82899-run-httpd\") pod \"ceilometer-0\" (UID: \"1f02831a-c9ec-41ba-aabb-ac9557d82899\") " pod="openstack/ceilometer-0" Jan 27 09:14:37 crc kubenswrapper[4985]: I0127 09:14:37.033957 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1f02831a-c9ec-41ba-aabb-ac9557d82899-log-httpd\") pod \"ceilometer-0\" (UID: \"1f02831a-c9ec-41ba-aabb-ac9557d82899\") " pod="openstack/ceilometer-0" Jan 27 09:14:37 crc kubenswrapper[4985]: I0127 09:14:37.037387 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1f02831a-c9ec-41ba-aabb-ac9557d82899-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1f02831a-c9ec-41ba-aabb-ac9557d82899\") " pod="openstack/ceilometer-0" Jan 27 09:14:37 crc kubenswrapper[4985]: I0127 09:14:37.037908 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f02831a-c9ec-41ba-aabb-ac9557d82899-config-data\") pod \"ceilometer-0\" (UID: \"1f02831a-c9ec-41ba-aabb-ac9557d82899\") " pod="openstack/ceilometer-0" Jan 27 09:14:37 crc kubenswrapper[4985]: I0127 
09:14:37.042057 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f02831a-c9ec-41ba-aabb-ac9557d82899-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1f02831a-c9ec-41ba-aabb-ac9557d82899\") " pod="openstack/ceilometer-0" Jan 27 09:14:37 crc kubenswrapper[4985]: I0127 09:14:37.057000 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f02831a-c9ec-41ba-aabb-ac9557d82899-scripts\") pod \"ceilometer-0\" (UID: \"1f02831a-c9ec-41ba-aabb-ac9557d82899\") " pod="openstack/ceilometer-0" Jan 27 09:14:37 crc kubenswrapper[4985]: I0127 09:14:37.058832 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnfqh\" (UniqueName: \"kubernetes.io/projected/1f02831a-c9ec-41ba-aabb-ac9557d82899-kube-api-access-gnfqh\") pod \"ceilometer-0\" (UID: \"1f02831a-c9ec-41ba-aabb-ac9557d82899\") " pod="openstack/ceilometer-0" Jan 27 09:14:37 crc kubenswrapper[4985]: I0127 09:14:37.215023 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 09:14:37 crc kubenswrapper[4985]: I0127 09:14:37.434787 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 09:14:37 crc kubenswrapper[4985]: I0127 09:14:37.528974 4985 generic.go:334] "Generic (PLEG): container finished" podID="58162a9a-ce9b-41af-a664-a360c97d40af" containerID="60714b166630259b4fe9959b4800cb8c86d01ddeb9ab678ec785befc6efa377b" exitCode=143 Jan 27 09:14:37 crc kubenswrapper[4985]: I0127 09:14:37.529059 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-878b56798-5d5wm" event={"ID":"58162a9a-ce9b-41af-a664-a360c97d40af","Type":"ContainerDied","Data":"60714b166630259b4fe9959b4800cb8c86d01ddeb9ab678ec785befc6efa377b"} Jan 27 09:14:37 crc kubenswrapper[4985]: I0127 09:14:37.533343 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e8169c55-2bdc-44a1-b0ea-6ceef864c34e","Type":"ContainerStarted","Data":"77c544da084cc426cf03ce5edf9c5585481f58cb21105aae9ebb4e6ec3ab3b02"} Jan 27 09:14:37 crc kubenswrapper[4985]: E0127 09:14:37.638480 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="159fa101ae0fded3d47b9cc5e0a54f49851e3c0439e204c1fd3cbfd32d60ffda" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 27 09:14:37 crc kubenswrapper[4985]: E0127 09:14:37.640452 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="159fa101ae0fded3d47b9cc5e0a54f49851e3c0439e204c1fd3cbfd32d60ffda" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 27 09:14:37 crc kubenswrapper[4985]: E0127 09:14:37.641599 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: 
code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="159fa101ae0fded3d47b9cc5e0a54f49851e3c0439e204c1fd3cbfd32d60ffda" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 27 09:14:37 crc kubenswrapper[4985]: E0127 09:14:37.641636 4985 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="342d32a2-6e30-42d4-9f54-8e1ab315ae53" containerName="nova-scheduler-scheduler" Jan 27 09:14:37 crc kubenswrapper[4985]: I0127 09:14:37.672605 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 09:14:38 crc kubenswrapper[4985]: I0127 09:14:38.463269 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1304eb5b-330b-4480-99e8-4e0389cac214" path="/var/lib/kubelet/pods/1304eb5b-330b-4480-99e8-4e0389cac214/volumes" Jan 27 09:14:38 crc kubenswrapper[4985]: I0127 09:14:38.464691 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2548757-fd02-4c5a-9623-0b1148405dc9" path="/var/lib/kubelet/pods/f2548757-fd02-4c5a-9623-0b1148405dc9/volumes" Jan 27 09:14:38 crc kubenswrapper[4985]: I0127 09:14:38.546280 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1f02831a-c9ec-41ba-aabb-ac9557d82899","Type":"ContainerStarted","Data":"803ce15e6f511ee278d47a58b7973beb894bcfed820b20202797144ceb5c9002"} Jan 27 09:14:38 crc kubenswrapper[4985]: I0127 09:14:38.546327 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1f02831a-c9ec-41ba-aabb-ac9557d82899","Type":"ContainerStarted","Data":"4fe5cc291f2eb6395dcd949fcbceafb5bda67a42abb10fc78fecd81e9d993fbb"} Jan 27 09:14:38 crc kubenswrapper[4985]: I0127 09:14:38.548242 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-metadata-0" event={"ID":"e8169c55-2bdc-44a1-b0ea-6ceef864c34e","Type":"ContainerStarted","Data":"114ffa12ce2a48d84c44d861e3f843b7acd8e064826aa7703202242e39a5abbc"} Jan 27 09:14:38 crc kubenswrapper[4985]: I0127 09:14:38.548296 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e8169c55-2bdc-44a1-b0ea-6ceef864c34e","Type":"ContainerStarted","Data":"62ed8d0635a8c5b12a14d3725da2f76ff74c2727c96fcf406f16fbda0558324c"} Jan 27 09:14:38 crc kubenswrapper[4985]: I0127 09:14:38.570744 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.570718358 podStartE2EDuration="2.570718358s" podCreationTimestamp="2026-01-27 09:14:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:14:38.566993025 +0000 UTC m=+1262.858087866" watchObservedRunningTime="2026-01-27 09:14:38.570718358 +0000 UTC m=+1262.861813189" Jan 27 09:14:39 crc kubenswrapper[4985]: I0127 09:14:39.095440 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 09:14:39 crc kubenswrapper[4985]: I0127 09:14:39.203162 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/342d32a2-6e30-42d4-9f54-8e1ab315ae53-combined-ca-bundle\") pod \"342d32a2-6e30-42d4-9f54-8e1ab315ae53\" (UID: \"342d32a2-6e30-42d4-9f54-8e1ab315ae53\") " Jan 27 09:14:39 crc kubenswrapper[4985]: I0127 09:14:39.203226 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwtqg\" (UniqueName: \"kubernetes.io/projected/342d32a2-6e30-42d4-9f54-8e1ab315ae53-kube-api-access-wwtqg\") pod \"342d32a2-6e30-42d4-9f54-8e1ab315ae53\" (UID: \"342d32a2-6e30-42d4-9f54-8e1ab315ae53\") " Jan 27 09:14:39 crc kubenswrapper[4985]: I0127 09:14:39.203389 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/342d32a2-6e30-42d4-9f54-8e1ab315ae53-config-data\") pod \"342d32a2-6e30-42d4-9f54-8e1ab315ae53\" (UID: \"342d32a2-6e30-42d4-9f54-8e1ab315ae53\") " Jan 27 09:14:39 crc kubenswrapper[4985]: I0127 09:14:39.208073 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/342d32a2-6e30-42d4-9f54-8e1ab315ae53-kube-api-access-wwtqg" (OuterVolumeSpecName: "kube-api-access-wwtqg") pod "342d32a2-6e30-42d4-9f54-8e1ab315ae53" (UID: "342d32a2-6e30-42d4-9f54-8e1ab315ae53"). InnerVolumeSpecName "kube-api-access-wwtqg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:14:39 crc kubenswrapper[4985]: I0127 09:14:39.242268 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/342d32a2-6e30-42d4-9f54-8e1ab315ae53-config-data" (OuterVolumeSpecName: "config-data") pod "342d32a2-6e30-42d4-9f54-8e1ab315ae53" (UID: "342d32a2-6e30-42d4-9f54-8e1ab315ae53"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:14:39 crc kubenswrapper[4985]: I0127 09:14:39.244194 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/342d32a2-6e30-42d4-9f54-8e1ab315ae53-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "342d32a2-6e30-42d4-9f54-8e1ab315ae53" (UID: "342d32a2-6e30-42d4-9f54-8e1ab315ae53"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:14:39 crc kubenswrapper[4985]: I0127 09:14:39.305035 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/342d32a2-6e30-42d4-9f54-8e1ab315ae53-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:39 crc kubenswrapper[4985]: I0127 09:14:39.305077 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wwtqg\" (UniqueName: \"kubernetes.io/projected/342d32a2-6e30-42d4-9f54-8e1ab315ae53-kube-api-access-wwtqg\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:39 crc kubenswrapper[4985]: I0127 09:14:39.305087 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/342d32a2-6e30-42d4-9f54-8e1ab315ae53-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:39 crc kubenswrapper[4985]: I0127 09:14:39.559326 4985 generic.go:334] "Generic (PLEG): container finished" podID="342d32a2-6e30-42d4-9f54-8e1ab315ae53" containerID="159fa101ae0fded3d47b9cc5e0a54f49851e3c0439e204c1fd3cbfd32d60ffda" exitCode=0 Jan 27 09:14:39 crc kubenswrapper[4985]: I0127 09:14:39.559414 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 09:14:39 crc kubenswrapper[4985]: I0127 09:14:39.559466 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"342d32a2-6e30-42d4-9f54-8e1ab315ae53","Type":"ContainerDied","Data":"159fa101ae0fded3d47b9cc5e0a54f49851e3c0439e204c1fd3cbfd32d60ffda"} Jan 27 09:14:39 crc kubenswrapper[4985]: I0127 09:14:39.559587 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"342d32a2-6e30-42d4-9f54-8e1ab315ae53","Type":"ContainerDied","Data":"470513bb92e47c0580e747ca6a2e5faf49c1705866f1edb3b8921caed63d9cdf"} Jan 27 09:14:39 crc kubenswrapper[4985]: I0127 09:14:39.559621 4985 scope.go:117] "RemoveContainer" containerID="159fa101ae0fded3d47b9cc5e0a54f49851e3c0439e204c1fd3cbfd32d60ffda" Jan 27 09:14:39 crc kubenswrapper[4985]: I0127 09:14:39.564629 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1f02831a-c9ec-41ba-aabb-ac9557d82899","Type":"ContainerStarted","Data":"29ecb5f13cbff4464adc52fc4d89a3390ca4788c971de74be30de4377341c7e5"} Jan 27 09:14:39 crc kubenswrapper[4985]: I0127 09:14:39.618250 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 09:14:39 crc kubenswrapper[4985]: I0127 09:14:39.628364 4985 scope.go:117] "RemoveContainer" containerID="159fa101ae0fded3d47b9cc5e0a54f49851e3c0439e204c1fd3cbfd32d60ffda" Jan 27 09:14:39 crc kubenswrapper[4985]: E0127 09:14:39.628976 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"159fa101ae0fded3d47b9cc5e0a54f49851e3c0439e204c1fd3cbfd32d60ffda\": container with ID starting with 159fa101ae0fded3d47b9cc5e0a54f49851e3c0439e204c1fd3cbfd32d60ffda not found: ID does not exist" containerID="159fa101ae0fded3d47b9cc5e0a54f49851e3c0439e204c1fd3cbfd32d60ffda" Jan 27 09:14:39 crc kubenswrapper[4985]: I0127 09:14:39.629039 4985 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"159fa101ae0fded3d47b9cc5e0a54f49851e3c0439e204c1fd3cbfd32d60ffda"} err="failed to get container status \"159fa101ae0fded3d47b9cc5e0a54f49851e3c0439e204c1fd3cbfd32d60ffda\": rpc error: code = NotFound desc = could not find container \"159fa101ae0fded3d47b9cc5e0a54f49851e3c0439e204c1fd3cbfd32d60ffda\": container with ID starting with 159fa101ae0fded3d47b9cc5e0a54f49851e3c0439e204c1fd3cbfd32d60ffda not found: ID does not exist" Jan 27 09:14:39 crc kubenswrapper[4985]: I0127 09:14:39.648668 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 09:14:39 crc kubenswrapper[4985]: I0127 09:14:39.675643 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 09:14:39 crc kubenswrapper[4985]: E0127 09:14:39.676100 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="342d32a2-6e30-42d4-9f54-8e1ab315ae53" containerName="nova-scheduler-scheduler" Jan 27 09:14:39 crc kubenswrapper[4985]: I0127 09:14:39.676116 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="342d32a2-6e30-42d4-9f54-8e1ab315ae53" containerName="nova-scheduler-scheduler" Jan 27 09:14:39 crc kubenswrapper[4985]: I0127 09:14:39.676354 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="342d32a2-6e30-42d4-9f54-8e1ab315ae53" containerName="nova-scheduler-scheduler" Jan 27 09:14:39 crc kubenswrapper[4985]: I0127 09:14:39.677132 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 09:14:39 crc kubenswrapper[4985]: I0127 09:14:39.679839 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 27 09:14:39 crc kubenswrapper[4985]: I0127 09:14:39.698898 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 09:14:39 crc kubenswrapper[4985]: I0127 09:14:39.814919 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb0837a9-681b-4ed4-bd39-df8ee55a7037-config-data\") pod \"nova-scheduler-0\" (UID: \"fb0837a9-681b-4ed4-bd39-df8ee55a7037\") " pod="openstack/nova-scheduler-0" Jan 27 09:14:39 crc kubenswrapper[4985]: I0127 09:14:39.815363 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb0837a9-681b-4ed4-bd39-df8ee55a7037-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"fb0837a9-681b-4ed4-bd39-df8ee55a7037\") " pod="openstack/nova-scheduler-0" Jan 27 09:14:39 crc kubenswrapper[4985]: I0127 09:14:39.815413 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnl97\" (UniqueName: \"kubernetes.io/projected/fb0837a9-681b-4ed4-bd39-df8ee55a7037-kube-api-access-vnl97\") pod \"nova-scheduler-0\" (UID: \"fb0837a9-681b-4ed4-bd39-df8ee55a7037\") " pod="openstack/nova-scheduler-0" Jan 27 09:14:39 crc kubenswrapper[4985]: I0127 09:14:39.916770 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb0837a9-681b-4ed4-bd39-df8ee55a7037-config-data\") pod \"nova-scheduler-0\" (UID: \"fb0837a9-681b-4ed4-bd39-df8ee55a7037\") " pod="openstack/nova-scheduler-0" Jan 27 09:14:39 crc kubenswrapper[4985]: I0127 09:14:39.916867 4985 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb0837a9-681b-4ed4-bd39-df8ee55a7037-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"fb0837a9-681b-4ed4-bd39-df8ee55a7037\") " pod="openstack/nova-scheduler-0" Jan 27 09:14:39 crc kubenswrapper[4985]: I0127 09:14:39.916917 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnl97\" (UniqueName: \"kubernetes.io/projected/fb0837a9-681b-4ed4-bd39-df8ee55a7037-kube-api-access-vnl97\") pod \"nova-scheduler-0\" (UID: \"fb0837a9-681b-4ed4-bd39-df8ee55a7037\") " pod="openstack/nova-scheduler-0" Jan 27 09:14:39 crc kubenswrapper[4985]: I0127 09:14:39.923640 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb0837a9-681b-4ed4-bd39-df8ee55a7037-config-data\") pod \"nova-scheduler-0\" (UID: \"fb0837a9-681b-4ed4-bd39-df8ee55a7037\") " pod="openstack/nova-scheduler-0" Jan 27 09:14:39 crc kubenswrapper[4985]: I0127 09:14:39.923655 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb0837a9-681b-4ed4-bd39-df8ee55a7037-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"fb0837a9-681b-4ed4-bd39-df8ee55a7037\") " pod="openstack/nova-scheduler-0" Jan 27 09:14:39 crc kubenswrapper[4985]: I0127 09:14:39.936639 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnl97\" (UniqueName: \"kubernetes.io/projected/fb0837a9-681b-4ed4-bd39-df8ee55a7037-kube-api-access-vnl97\") pod \"nova-scheduler-0\" (UID: \"fb0837a9-681b-4ed4-bd39-df8ee55a7037\") " pod="openstack/nova-scheduler-0" Jan 27 09:14:39 crc kubenswrapper[4985]: I0127 09:14:39.996366 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 09:14:40 crc kubenswrapper[4985]: I0127 09:14:40.148923 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-878b56798-5d5wm" Jan 27 09:14:40 crc kubenswrapper[4985]: I0127 09:14:40.222386 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/58162a9a-ce9b-41af-a664-a360c97d40af-public-tls-certs\") pod \"58162a9a-ce9b-41af-a664-a360c97d40af\" (UID: \"58162a9a-ce9b-41af-a664-a360c97d40af\") " Jan 27 09:14:40 crc kubenswrapper[4985]: I0127 09:14:40.222447 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58162a9a-ce9b-41af-a664-a360c97d40af-config-data\") pod \"58162a9a-ce9b-41af-a664-a360c97d40af\" (UID: \"58162a9a-ce9b-41af-a664-a360c97d40af\") " Jan 27 09:14:40 crc kubenswrapper[4985]: I0127 09:14:40.222554 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58162a9a-ce9b-41af-a664-a360c97d40af-logs\") pod \"58162a9a-ce9b-41af-a664-a360c97d40af\" (UID: \"58162a9a-ce9b-41af-a664-a360c97d40af\") " Jan 27 09:14:40 crc kubenswrapper[4985]: I0127 09:14:40.222587 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58162a9a-ce9b-41af-a664-a360c97d40af-scripts\") pod \"58162a9a-ce9b-41af-a664-a360c97d40af\" (UID: \"58162a9a-ce9b-41af-a664-a360c97d40af\") " Jan 27 09:14:40 crc kubenswrapper[4985]: I0127 09:14:40.222629 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58162a9a-ce9b-41af-a664-a360c97d40af-combined-ca-bundle\") pod \"58162a9a-ce9b-41af-a664-a360c97d40af\" (UID: \"58162a9a-ce9b-41af-a664-a360c97d40af\") " Jan 27 09:14:40 crc 
kubenswrapper[4985]: I0127 09:14:40.222793 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/58162a9a-ce9b-41af-a664-a360c97d40af-internal-tls-certs\") pod \"58162a9a-ce9b-41af-a664-a360c97d40af\" (UID: \"58162a9a-ce9b-41af-a664-a360c97d40af\") " Jan 27 09:14:40 crc kubenswrapper[4985]: I0127 09:14:40.222885 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldzjv\" (UniqueName: \"kubernetes.io/projected/58162a9a-ce9b-41af-a664-a360c97d40af-kube-api-access-ldzjv\") pod \"58162a9a-ce9b-41af-a664-a360c97d40af\" (UID: \"58162a9a-ce9b-41af-a664-a360c97d40af\") " Jan 27 09:14:40 crc kubenswrapper[4985]: I0127 09:14:40.225329 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58162a9a-ce9b-41af-a664-a360c97d40af-logs" (OuterVolumeSpecName: "logs") pod "58162a9a-ce9b-41af-a664-a360c97d40af" (UID: "58162a9a-ce9b-41af-a664-a360c97d40af"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:14:40 crc kubenswrapper[4985]: I0127 09:14:40.229099 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58162a9a-ce9b-41af-a664-a360c97d40af-scripts" (OuterVolumeSpecName: "scripts") pod "58162a9a-ce9b-41af-a664-a360c97d40af" (UID: "58162a9a-ce9b-41af-a664-a360c97d40af"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:14:40 crc kubenswrapper[4985]: I0127 09:14:40.229287 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58162a9a-ce9b-41af-a664-a360c97d40af-kube-api-access-ldzjv" (OuterVolumeSpecName: "kube-api-access-ldzjv") pod "58162a9a-ce9b-41af-a664-a360c97d40af" (UID: "58162a9a-ce9b-41af-a664-a360c97d40af"). InnerVolumeSpecName "kube-api-access-ldzjv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:14:40 crc kubenswrapper[4985]: I0127 09:14:40.291883 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58162a9a-ce9b-41af-a664-a360c97d40af-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "58162a9a-ce9b-41af-a664-a360c97d40af" (UID: "58162a9a-ce9b-41af-a664-a360c97d40af"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:14:40 crc kubenswrapper[4985]: I0127 09:14:40.326644 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ldzjv\" (UniqueName: \"kubernetes.io/projected/58162a9a-ce9b-41af-a664-a360c97d40af-kube-api-access-ldzjv\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:40 crc kubenswrapper[4985]: I0127 09:14:40.326683 4985 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58162a9a-ce9b-41af-a664-a360c97d40af-logs\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:40 crc kubenswrapper[4985]: I0127 09:14:40.326695 4985 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58162a9a-ce9b-41af-a664-a360c97d40af-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:40 crc kubenswrapper[4985]: I0127 09:14:40.326705 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58162a9a-ce9b-41af-a664-a360c97d40af-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:40 crc kubenswrapper[4985]: I0127 09:14:40.335220 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58162a9a-ce9b-41af-a664-a360c97d40af-config-data" (OuterVolumeSpecName: "config-data") pod "58162a9a-ce9b-41af-a664-a360c97d40af" (UID: "58162a9a-ce9b-41af-a664-a360c97d40af"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:14:40 crc kubenswrapper[4985]: I0127 09:14:40.426540 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58162a9a-ce9b-41af-a664-a360c97d40af-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "58162a9a-ce9b-41af-a664-a360c97d40af" (UID: "58162a9a-ce9b-41af-a664-a360c97d40af"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:14:40 crc kubenswrapper[4985]: I0127 09:14:40.429619 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58162a9a-ce9b-41af-a664-a360c97d40af-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:40 crc kubenswrapper[4985]: I0127 09:14:40.433540 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 09:14:40 crc kubenswrapper[4985]: I0127 09:14:40.449330 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58162a9a-ce9b-41af-a664-a360c97d40af-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "58162a9a-ce9b-41af-a664-a360c97d40af" (UID: "58162a9a-ce9b-41af-a664-a360c97d40af"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:14:40 crc kubenswrapper[4985]: I0127 09:14:40.509946 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="342d32a2-6e30-42d4-9f54-8e1ab315ae53" path="/var/lib/kubelet/pods/342d32a2-6e30-42d4-9f54-8e1ab315ae53/volumes" Jan 27 09:14:40 crc kubenswrapper[4985]: I0127 09:14:40.531935 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvxz2\" (UniqueName: \"kubernetes.io/projected/7ac9bfd8-ec34-4938-b325-949459bf4876-kube-api-access-rvxz2\") pod \"7ac9bfd8-ec34-4938-b325-949459bf4876\" (UID: \"7ac9bfd8-ec34-4938-b325-949459bf4876\") " Jan 27 09:14:40 crc kubenswrapper[4985]: I0127 09:14:40.531980 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ac9bfd8-ec34-4938-b325-949459bf4876-logs\") pod \"7ac9bfd8-ec34-4938-b325-949459bf4876\" (UID: \"7ac9bfd8-ec34-4938-b325-949459bf4876\") " Jan 27 09:14:40 crc kubenswrapper[4985]: I0127 09:14:40.532121 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ac9bfd8-ec34-4938-b325-949459bf4876-combined-ca-bundle\") pod \"7ac9bfd8-ec34-4938-b325-949459bf4876\" (UID: \"7ac9bfd8-ec34-4938-b325-949459bf4876\") " Jan 27 09:14:40 crc kubenswrapper[4985]: I0127 09:14:40.532261 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ac9bfd8-ec34-4938-b325-949459bf4876-config-data\") pod \"7ac9bfd8-ec34-4938-b325-949459bf4876\" (UID: \"7ac9bfd8-ec34-4938-b325-949459bf4876\") " Jan 27 09:14:40 crc kubenswrapper[4985]: I0127 09:14:40.532734 4985 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/58162a9a-ce9b-41af-a664-a360c97d40af-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:40 
crc kubenswrapper[4985]: I0127 09:14:40.532754 4985 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/58162a9a-ce9b-41af-a664-a360c97d40af-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:40 crc kubenswrapper[4985]: I0127 09:14:40.535863 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ac9bfd8-ec34-4938-b325-949459bf4876-logs" (OuterVolumeSpecName: "logs") pod "7ac9bfd8-ec34-4938-b325-949459bf4876" (UID: "7ac9bfd8-ec34-4938-b325-949459bf4876"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:14:40 crc kubenswrapper[4985]: I0127 09:14:40.536270 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ac9bfd8-ec34-4938-b325-949459bf4876-kube-api-access-rvxz2" (OuterVolumeSpecName: "kube-api-access-rvxz2") pod "7ac9bfd8-ec34-4938-b325-949459bf4876" (UID: "7ac9bfd8-ec34-4938-b325-949459bf4876"). InnerVolumeSpecName "kube-api-access-rvxz2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:14:40 crc kubenswrapper[4985]: I0127 09:14:40.565218 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ac9bfd8-ec34-4938-b325-949459bf4876-config-data" (OuterVolumeSpecName: "config-data") pod "7ac9bfd8-ec34-4938-b325-949459bf4876" (UID: "7ac9bfd8-ec34-4938-b325-949459bf4876"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:14:40 crc kubenswrapper[4985]: I0127 09:14:40.567585 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ac9bfd8-ec34-4938-b325-949459bf4876-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7ac9bfd8-ec34-4938-b325-949459bf4876" (UID: "7ac9bfd8-ec34-4938-b325-949459bf4876"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:14:40 crc kubenswrapper[4985]: I0127 09:14:40.582780 4985 generic.go:334] "Generic (PLEG): container finished" podID="58162a9a-ce9b-41af-a664-a360c97d40af" containerID="29b18a79448943c2f8137a0abf99e12f6721b4815f55d6a2f49ff6ab91b6b1d9" exitCode=0 Jan 27 09:14:40 crc kubenswrapper[4985]: I0127 09:14:40.582930 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-878b56798-5d5wm" event={"ID":"58162a9a-ce9b-41af-a664-a360c97d40af","Type":"ContainerDied","Data":"29b18a79448943c2f8137a0abf99e12f6721b4815f55d6a2f49ff6ab91b6b1d9"} Jan 27 09:14:40 crc kubenswrapper[4985]: I0127 09:14:40.582932 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-878b56798-5d5wm" Jan 27 09:14:40 crc kubenswrapper[4985]: I0127 09:14:40.582995 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-878b56798-5d5wm" event={"ID":"58162a9a-ce9b-41af-a664-a360c97d40af","Type":"ContainerDied","Data":"e0ce6034c93bb1ecc63bf1dbfd3f716d58781f5f51e2bd1349f5223d52b4075a"} Jan 27 09:14:40 crc kubenswrapper[4985]: I0127 09:14:40.583024 4985 scope.go:117] "RemoveContainer" containerID="29b18a79448943c2f8137a0abf99e12f6721b4815f55d6a2f49ff6ab91b6b1d9" Jan 27 09:14:40 crc kubenswrapper[4985]: I0127 09:14:40.586742 4985 generic.go:334] "Generic (PLEG): container finished" podID="7ac9bfd8-ec34-4938-b325-949459bf4876" containerID="d0a76535a14cf92ddc5d142401811038f0d3605e8b8951847c791b98d40a5449" exitCode=0 Jan 27 09:14:40 crc kubenswrapper[4985]: I0127 09:14:40.586837 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7ac9bfd8-ec34-4938-b325-949459bf4876","Type":"ContainerDied","Data":"d0a76535a14cf92ddc5d142401811038f0d3605e8b8951847c791b98d40a5449"} Jan 27 09:14:40 crc kubenswrapper[4985]: I0127 09:14:40.586879 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"7ac9bfd8-ec34-4938-b325-949459bf4876","Type":"ContainerDied","Data":"c496f1ec107c7d4b88cf04362e6a56751ee2134c457d1938c49f81f6b80f862e"} Jan 27 09:14:40 crc kubenswrapper[4985]: I0127 09:14:40.587076 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 09:14:40 crc kubenswrapper[4985]: I0127 09:14:40.592341 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1f02831a-c9ec-41ba-aabb-ac9557d82899","Type":"ContainerStarted","Data":"4ef2b824f0ab3d1a98d5e133b32d72f98494e3386ccac764737e531cee42dc38"} Jan 27 09:14:40 crc kubenswrapper[4985]: I0127 09:14:40.617740 4985 scope.go:117] "RemoveContainer" containerID="60714b166630259b4fe9959b4800cb8c86d01ddeb9ab678ec785befc6efa377b" Jan 27 09:14:40 crc kubenswrapper[4985]: I0127 09:14:40.627045 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-878b56798-5d5wm"] Jan 27 09:14:40 crc kubenswrapper[4985]: I0127 09:14:40.634774 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ac9bfd8-ec34-4938-b325-949459bf4876-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:40 crc kubenswrapper[4985]: I0127 09:14:40.634820 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rvxz2\" (UniqueName: \"kubernetes.io/projected/7ac9bfd8-ec34-4938-b325-949459bf4876-kube-api-access-rvxz2\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:40 crc kubenswrapper[4985]: I0127 09:14:40.634833 4985 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ac9bfd8-ec34-4938-b325-949459bf4876-logs\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:40 crc kubenswrapper[4985]: I0127 09:14:40.634844 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ac9bfd8-ec34-4938-b325-949459bf4876-combined-ca-bundle\") on node 
\"crc\" DevicePath \"\"" Jan 27 09:14:40 crc kubenswrapper[4985]: I0127 09:14:40.639798 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-878b56798-5d5wm"] Jan 27 09:14:40 crc kubenswrapper[4985]: I0127 09:14:40.666986 4985 scope.go:117] "RemoveContainer" containerID="29b18a79448943c2f8137a0abf99e12f6721b4815f55d6a2f49ff6ab91b6b1d9" Jan 27 09:14:40 crc kubenswrapper[4985]: E0127 09:14:40.668207 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29b18a79448943c2f8137a0abf99e12f6721b4815f55d6a2f49ff6ab91b6b1d9\": container with ID starting with 29b18a79448943c2f8137a0abf99e12f6721b4815f55d6a2f49ff6ab91b6b1d9 not found: ID does not exist" containerID="29b18a79448943c2f8137a0abf99e12f6721b4815f55d6a2f49ff6ab91b6b1d9" Jan 27 09:14:40 crc kubenswrapper[4985]: I0127 09:14:40.668308 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29b18a79448943c2f8137a0abf99e12f6721b4815f55d6a2f49ff6ab91b6b1d9"} err="failed to get container status \"29b18a79448943c2f8137a0abf99e12f6721b4815f55d6a2f49ff6ab91b6b1d9\": rpc error: code = NotFound desc = could not find container \"29b18a79448943c2f8137a0abf99e12f6721b4815f55d6a2f49ff6ab91b6b1d9\": container with ID starting with 29b18a79448943c2f8137a0abf99e12f6721b4815f55d6a2f49ff6ab91b6b1d9 not found: ID does not exist" Jan 27 09:14:40 crc kubenswrapper[4985]: I0127 09:14:40.668405 4985 scope.go:117] "RemoveContainer" containerID="60714b166630259b4fe9959b4800cb8c86d01ddeb9ab678ec785befc6efa377b" Jan 27 09:14:40 crc kubenswrapper[4985]: E0127 09:14:40.677409 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60714b166630259b4fe9959b4800cb8c86d01ddeb9ab678ec785befc6efa377b\": container with ID starting with 60714b166630259b4fe9959b4800cb8c86d01ddeb9ab678ec785befc6efa377b not found: ID does not exist" 
containerID="60714b166630259b4fe9959b4800cb8c86d01ddeb9ab678ec785befc6efa377b" Jan 27 09:14:40 crc kubenswrapper[4985]: I0127 09:14:40.677458 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60714b166630259b4fe9959b4800cb8c86d01ddeb9ab678ec785befc6efa377b"} err="failed to get container status \"60714b166630259b4fe9959b4800cb8c86d01ddeb9ab678ec785befc6efa377b\": rpc error: code = NotFound desc = could not find container \"60714b166630259b4fe9959b4800cb8c86d01ddeb9ab678ec785befc6efa377b\": container with ID starting with 60714b166630259b4fe9959b4800cb8c86d01ddeb9ab678ec785befc6efa377b not found: ID does not exist" Jan 27 09:14:40 crc kubenswrapper[4985]: I0127 09:14:40.677488 4985 scope.go:117] "RemoveContainer" containerID="d0a76535a14cf92ddc5d142401811038f0d3605e8b8951847c791b98d40a5449" Jan 27 09:14:40 crc kubenswrapper[4985]: I0127 09:14:40.678364 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 09:14:40 crc kubenswrapper[4985]: I0127 09:14:40.690705 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 09:14:40 crc kubenswrapper[4985]: I0127 09:14:40.701700 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 27 09:14:40 crc kubenswrapper[4985]: I0127 09:14:40.711918 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 27 09:14:40 crc kubenswrapper[4985]: E0127 09:14:40.712416 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ac9bfd8-ec34-4938-b325-949459bf4876" containerName="nova-api-api" Jan 27 09:14:40 crc kubenswrapper[4985]: I0127 09:14:40.712439 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ac9bfd8-ec34-4938-b325-949459bf4876" containerName="nova-api-api" Jan 27 09:14:40 crc kubenswrapper[4985]: E0127 09:14:40.712459 4985 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="58162a9a-ce9b-41af-a664-a360c97d40af" containerName="placement-api" Jan 27 09:14:40 crc kubenswrapper[4985]: I0127 09:14:40.712466 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="58162a9a-ce9b-41af-a664-a360c97d40af" containerName="placement-api" Jan 27 09:14:40 crc kubenswrapper[4985]: E0127 09:14:40.712473 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ac9bfd8-ec34-4938-b325-949459bf4876" containerName="nova-api-log" Jan 27 09:14:40 crc kubenswrapper[4985]: I0127 09:14:40.712479 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ac9bfd8-ec34-4938-b325-949459bf4876" containerName="nova-api-log" Jan 27 09:14:40 crc kubenswrapper[4985]: E0127 09:14:40.712507 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58162a9a-ce9b-41af-a664-a360c97d40af" containerName="placement-log" Jan 27 09:14:40 crc kubenswrapper[4985]: I0127 09:14:40.712532 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="58162a9a-ce9b-41af-a664-a360c97d40af" containerName="placement-log" Jan 27 09:14:40 crc kubenswrapper[4985]: I0127 09:14:40.712748 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="58162a9a-ce9b-41af-a664-a360c97d40af" containerName="placement-log" Jan 27 09:14:40 crc kubenswrapper[4985]: I0127 09:14:40.712765 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ac9bfd8-ec34-4938-b325-949459bf4876" containerName="nova-api-log" Jan 27 09:14:40 crc kubenswrapper[4985]: I0127 09:14:40.712785 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ac9bfd8-ec34-4938-b325-949459bf4876" containerName="nova-api-api" Jan 27 09:14:40 crc kubenswrapper[4985]: I0127 09:14:40.712802 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="58162a9a-ce9b-41af-a664-a360c97d40af" containerName="placement-api" Jan 27 09:14:40 crc kubenswrapper[4985]: I0127 09:14:40.714031 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 27 09:14:40 crc kubenswrapper[4985]: I0127 09:14:40.716793 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 27 09:14:40 crc kubenswrapper[4985]: I0127 09:14:40.717558 4985 scope.go:117] "RemoveContainer" containerID="528bc39758c1ed03e282630e819240cf408c855d33627e09fa58a29f282d7dd1" Jan 27 09:14:40 crc kubenswrapper[4985]: I0127 09:14:40.724863 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 09:14:40 crc kubenswrapper[4985]: I0127 09:14:40.752006 4985 scope.go:117] "RemoveContainer" containerID="d0a76535a14cf92ddc5d142401811038f0d3605e8b8951847c791b98d40a5449" Jan 27 09:14:40 crc kubenswrapper[4985]: E0127 09:14:40.753285 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0a76535a14cf92ddc5d142401811038f0d3605e8b8951847c791b98d40a5449\": container with ID starting with d0a76535a14cf92ddc5d142401811038f0d3605e8b8951847c791b98d40a5449 not found: ID does not exist" containerID="d0a76535a14cf92ddc5d142401811038f0d3605e8b8951847c791b98d40a5449" Jan 27 09:14:40 crc kubenswrapper[4985]: I0127 09:14:40.753343 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0a76535a14cf92ddc5d142401811038f0d3605e8b8951847c791b98d40a5449"} err="failed to get container status \"d0a76535a14cf92ddc5d142401811038f0d3605e8b8951847c791b98d40a5449\": rpc error: code = NotFound desc = could not find container \"d0a76535a14cf92ddc5d142401811038f0d3605e8b8951847c791b98d40a5449\": container with ID starting with d0a76535a14cf92ddc5d142401811038f0d3605e8b8951847c791b98d40a5449 not found: ID does not exist" Jan 27 09:14:40 crc kubenswrapper[4985]: I0127 09:14:40.753375 4985 scope.go:117] "RemoveContainer" containerID="528bc39758c1ed03e282630e819240cf408c855d33627e09fa58a29f282d7dd1" Jan 27 09:14:40 crc kubenswrapper[4985]: 
E0127 09:14:40.753838 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"528bc39758c1ed03e282630e819240cf408c855d33627e09fa58a29f282d7dd1\": container with ID starting with 528bc39758c1ed03e282630e819240cf408c855d33627e09fa58a29f282d7dd1 not found: ID does not exist" containerID="528bc39758c1ed03e282630e819240cf408c855d33627e09fa58a29f282d7dd1" Jan 27 09:14:40 crc kubenswrapper[4985]: I0127 09:14:40.753928 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"528bc39758c1ed03e282630e819240cf408c855d33627e09fa58a29f282d7dd1"} err="failed to get container status \"528bc39758c1ed03e282630e819240cf408c855d33627e09fa58a29f282d7dd1\": rpc error: code = NotFound desc = could not find container \"528bc39758c1ed03e282630e819240cf408c855d33627e09fa58a29f282d7dd1\": container with ID starting with 528bc39758c1ed03e282630e819240cf408c855d33627e09fa58a29f282d7dd1 not found: ID does not exist" Jan 27 09:14:40 crc kubenswrapper[4985]: I0127 09:14:40.838907 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20d90afd-a50e-4295-b31b-7d9c05e358f6-logs\") pod \"nova-api-0\" (UID: \"20d90afd-a50e-4295-b31b-7d9c05e358f6\") " pod="openstack/nova-api-0" Jan 27 09:14:40 crc kubenswrapper[4985]: I0127 09:14:40.838982 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cqfg\" (UniqueName: \"kubernetes.io/projected/20d90afd-a50e-4295-b31b-7d9c05e358f6-kube-api-access-5cqfg\") pod \"nova-api-0\" (UID: \"20d90afd-a50e-4295-b31b-7d9c05e358f6\") " pod="openstack/nova-api-0" Jan 27 09:14:40 crc kubenswrapper[4985]: I0127 09:14:40.839015 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/20d90afd-a50e-4295-b31b-7d9c05e358f6-config-data\") pod \"nova-api-0\" (UID: \"20d90afd-a50e-4295-b31b-7d9c05e358f6\") " pod="openstack/nova-api-0" Jan 27 09:14:40 crc kubenswrapper[4985]: I0127 09:14:40.839336 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20d90afd-a50e-4295-b31b-7d9c05e358f6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"20d90afd-a50e-4295-b31b-7d9c05e358f6\") " pod="openstack/nova-api-0" Jan 27 09:14:40 crc kubenswrapper[4985]: I0127 09:14:40.942060 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20d90afd-a50e-4295-b31b-7d9c05e358f6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"20d90afd-a50e-4295-b31b-7d9c05e358f6\") " pod="openstack/nova-api-0" Jan 27 09:14:40 crc kubenswrapper[4985]: I0127 09:14:40.942179 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20d90afd-a50e-4295-b31b-7d9c05e358f6-logs\") pod \"nova-api-0\" (UID: \"20d90afd-a50e-4295-b31b-7d9c05e358f6\") " pod="openstack/nova-api-0" Jan 27 09:14:40 crc kubenswrapper[4985]: I0127 09:14:40.942216 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cqfg\" (UniqueName: \"kubernetes.io/projected/20d90afd-a50e-4295-b31b-7d9c05e358f6-kube-api-access-5cqfg\") pod \"nova-api-0\" (UID: \"20d90afd-a50e-4295-b31b-7d9c05e358f6\") " pod="openstack/nova-api-0" Jan 27 09:14:40 crc kubenswrapper[4985]: I0127 09:14:40.942238 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20d90afd-a50e-4295-b31b-7d9c05e358f6-config-data\") pod \"nova-api-0\" (UID: \"20d90afd-a50e-4295-b31b-7d9c05e358f6\") " pod="openstack/nova-api-0" Jan 27 09:14:40 crc kubenswrapper[4985]: I0127 
09:14:40.942805 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20d90afd-a50e-4295-b31b-7d9c05e358f6-logs\") pod \"nova-api-0\" (UID: \"20d90afd-a50e-4295-b31b-7d9c05e358f6\") " pod="openstack/nova-api-0" Jan 27 09:14:40 crc kubenswrapper[4985]: I0127 09:14:40.947111 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20d90afd-a50e-4295-b31b-7d9c05e358f6-config-data\") pod \"nova-api-0\" (UID: \"20d90afd-a50e-4295-b31b-7d9c05e358f6\") " pod="openstack/nova-api-0" Jan 27 09:14:40 crc kubenswrapper[4985]: I0127 09:14:40.948140 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20d90afd-a50e-4295-b31b-7d9c05e358f6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"20d90afd-a50e-4295-b31b-7d9c05e358f6\") " pod="openstack/nova-api-0" Jan 27 09:14:40 crc kubenswrapper[4985]: I0127 09:14:40.966208 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cqfg\" (UniqueName: \"kubernetes.io/projected/20d90afd-a50e-4295-b31b-7d9c05e358f6-kube-api-access-5cqfg\") pod \"nova-api-0\" (UID: \"20d90afd-a50e-4295-b31b-7d9c05e358f6\") " pod="openstack/nova-api-0" Jan 27 09:14:41 crc kubenswrapper[4985]: I0127 09:14:41.040497 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 27 09:14:41 crc kubenswrapper[4985]: I0127 09:14:41.503925 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 09:14:41 crc kubenswrapper[4985]: I0127 09:14:41.610443 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1f02831a-c9ec-41ba-aabb-ac9557d82899","Type":"ContainerStarted","Data":"c02e9bc8d11a5a3e8d3b660efb03e6edd0a78117da4d213f61d76a06b8090578"} Jan 27 09:14:41 crc kubenswrapper[4985]: I0127 09:14:41.611595 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 09:14:41 crc kubenswrapper[4985]: I0127 09:14:41.616961 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"20d90afd-a50e-4295-b31b-7d9c05e358f6","Type":"ContainerStarted","Data":"3925991f1876e0bd3ce48ec06733193bd7a84c9fc7bda2dce2da1e9c212003f2"} Jan 27 09:14:41 crc kubenswrapper[4985]: I0127 09:14:41.622990 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"fb0837a9-681b-4ed4-bd39-df8ee55a7037","Type":"ContainerStarted","Data":"dc7ec5699374b7aabf91f5135d1dd779018a1fe423e190bdba885ab8eae71c35"} Jan 27 09:14:41 crc kubenswrapper[4985]: I0127 09:14:41.623042 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"fb0837a9-681b-4ed4-bd39-df8ee55a7037","Type":"ContainerStarted","Data":"1d94eb7fd34f97ba24914bf3463f7cfc3b8e4f1b8d13b3429b6ad37a208206e2"} Jan 27 09:14:41 crc kubenswrapper[4985]: I0127 09:14:41.642151 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.362729573 podStartE2EDuration="5.642129473s" podCreationTimestamp="2026-01-27 09:14:36 +0000 UTC" firstStartedPulling="2026-01-27 09:14:37.681221152 +0000 UTC m=+1261.972315993" lastFinishedPulling="2026-01-27 09:14:40.960621052 +0000 UTC 
m=+1265.251715893" observedRunningTime="2026-01-27 09:14:41.636883119 +0000 UTC m=+1265.927977970" watchObservedRunningTime="2026-01-27 09:14:41.642129473 +0000 UTC m=+1265.933224324" Jan 27 09:14:41 crc kubenswrapper[4985]: I0127 09:14:41.667212 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.66719366 podStartE2EDuration="2.66719366s" podCreationTimestamp="2026-01-27 09:14:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:14:41.66426401 +0000 UTC m=+1265.955358851" watchObservedRunningTime="2026-01-27 09:14:41.66719366 +0000 UTC m=+1265.958288531" Jan 27 09:14:41 crc kubenswrapper[4985]: I0127 09:14:41.828594 4985 patch_prober.go:28] interesting pod/machine-config-daemon-lp9n5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 09:14:41 crc kubenswrapper[4985]: I0127 09:14:41.829033 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 09:14:41 crc kubenswrapper[4985]: I0127 09:14:41.829088 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" Jan 27 09:14:41 crc kubenswrapper[4985]: I0127 09:14:41.830304 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c0c7e1753712389ebd0528734323af45a2441fb966cbcf871cf1260ca96d824f"} 
pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 09:14:41 crc kubenswrapper[4985]: I0127 09:14:41.830369 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" containerName="machine-config-daemon" containerID="cri-o://c0c7e1753712389ebd0528734323af45a2441fb966cbcf871cf1260ca96d824f" gracePeriod=600 Jan 27 09:14:41 crc kubenswrapper[4985]: I0127 09:14:41.943663 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 09:14:41 crc kubenswrapper[4985]: I0127 09:14:41.944255 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 09:14:42 crc kubenswrapper[4985]: I0127 09:14:42.463553 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58162a9a-ce9b-41af-a664-a360c97d40af" path="/var/lib/kubelet/pods/58162a9a-ce9b-41af-a664-a360c97d40af/volumes" Jan 27 09:14:42 crc kubenswrapper[4985]: I0127 09:14:42.464530 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ac9bfd8-ec34-4938-b325-949459bf4876" path="/var/lib/kubelet/pods/7ac9bfd8-ec34-4938-b325-949459bf4876/volumes" Jan 27 09:14:42 crc kubenswrapper[4985]: I0127 09:14:42.633419 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"20d90afd-a50e-4295-b31b-7d9c05e358f6","Type":"ContainerStarted","Data":"0882068f7f36ddc7a1d6c501da09975c4e3e48ea22c8290253027122f3fd03b9"} Jan 27 09:14:42 crc kubenswrapper[4985]: I0127 09:14:42.633474 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"20d90afd-a50e-4295-b31b-7d9c05e358f6","Type":"ContainerStarted","Data":"545aa7cb487daa1963918ad170ed504a432e451c6de04b236dc6776a4fc256a5"} Jan 27 09:14:42 crc 
kubenswrapper[4985]: I0127 09:14:42.637778 4985 generic.go:334] "Generic (PLEG): container finished" podID="c066dd2f-48d4-4f4f-935d-0e772678e610" containerID="c0c7e1753712389ebd0528734323af45a2441fb966cbcf871cf1260ca96d824f" exitCode=0 Jan 27 09:14:42 crc kubenswrapper[4985]: I0127 09:14:42.637876 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" event={"ID":"c066dd2f-48d4-4f4f-935d-0e772678e610","Type":"ContainerDied","Data":"c0c7e1753712389ebd0528734323af45a2441fb966cbcf871cf1260ca96d824f"} Jan 27 09:14:42 crc kubenswrapper[4985]: I0127 09:14:42.637974 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" event={"ID":"c066dd2f-48d4-4f4f-935d-0e772678e610","Type":"ContainerStarted","Data":"d055d7fe9763dfc4b99c0db32ce38e86fad249d2d222ca9eacd889ec0193a129"} Jan 27 09:14:42 crc kubenswrapper[4985]: I0127 09:14:42.637995 4985 scope.go:117] "RemoveContainer" containerID="b9c506eebfd71669bdc5889fb3856b5801f49a73fb4a1c7c6112e1365072bb8b" Jan 27 09:14:42 crc kubenswrapper[4985]: I0127 09:14:42.661812 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.661791388 podStartE2EDuration="2.661791388s" podCreationTimestamp="2026-01-27 09:14:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:14:42.652476143 +0000 UTC m=+1266.943570984" watchObservedRunningTime="2026-01-27 09:14:42.661791388 +0000 UTC m=+1266.952886229" Jan 27 09:14:43 crc kubenswrapper[4985]: I0127 09:14:43.913303 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 27 09:14:44 crc kubenswrapper[4985]: I0127 09:14:44.996898 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 27 09:14:46 
crc kubenswrapper[4985]: I0127 09:14:46.943720 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 27 09:14:46 crc kubenswrapper[4985]: I0127 09:14:46.944094 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 27 09:14:47 crc kubenswrapper[4985]: I0127 09:14:47.955683 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="e8169c55-2bdc-44a1-b0ea-6ceef864c34e" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.205:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 09:14:47 crc kubenswrapper[4985]: I0127 09:14:47.955727 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="e8169c55-2bdc-44a1-b0ea-6ceef864c34e" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.205:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 09:14:49 crc kubenswrapper[4985]: I0127 09:14:49.996923 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 27 09:14:50 crc kubenswrapper[4985]: I0127 09:14:50.024920 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 27 09:14:50 crc kubenswrapper[4985]: I0127 09:14:50.773135 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 27 09:14:51 crc kubenswrapper[4985]: I0127 09:14:51.042642 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 09:14:51 crc kubenswrapper[4985]: I0127 09:14:51.042705 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 09:14:52 crc kubenswrapper[4985]: I0127 09:14:52.125767 4985 
prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="20d90afd-a50e-4295-b31b-7d9c05e358f6" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.208:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 09:14:52 crc kubenswrapper[4985]: I0127 09:14:52.126620 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="20d90afd-a50e-4295-b31b-7d9c05e358f6" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.208:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 09:14:56 crc kubenswrapper[4985]: I0127 09:14:56.949488 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 27 09:14:56 crc kubenswrapper[4985]: I0127 09:14:56.950836 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 27 09:14:56 crc kubenswrapper[4985]: I0127 09:14:56.955393 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 27 09:14:57 crc kubenswrapper[4985]: W0127 09:14:57.377372 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod342d32a2_6e30_42d4_9f54_8e1ab315ae53.slice/crio-159fa101ae0fded3d47b9cc5e0a54f49851e3c0439e204c1fd3cbfd32d60ffda.scope WatchSource:0}: Error finding container 159fa101ae0fded3d47b9cc5e0a54f49851e3c0439e204c1fd3cbfd32d60ffda: Status 404 returned error can't find the container with id 159fa101ae0fded3d47b9cc5e0a54f49851e3c0439e204c1fd3cbfd32d60ffda Jan 27 09:14:57 crc kubenswrapper[4985]: W0127 09:14:57.378196 4985 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod342d32a2_6e30_42d4_9f54_8e1ab315ae53.slice/crio-470513bb92e47c0580e747ca6a2e5faf49c1705866f1edb3b8921caed63d9cdf WatchSource:0}: Error finding container 470513bb92e47c0580e747ca6a2e5faf49c1705866f1edb3b8921caed63d9cdf: Status 404 returned error can't find the container with id 470513bb92e47c0580e747ca6a2e5faf49c1705866f1edb3b8921caed63d9cdf Jan 27 09:14:57 crc kubenswrapper[4985]: W0127 09:14:57.385281 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ac9bfd8_ec34_4938_b325_949459bf4876.slice/crio-c496f1ec107c7d4b88cf04362e6a56751ee2134c457d1938c49f81f6b80f862e WatchSource:0}: Error finding container c496f1ec107c7d4b88cf04362e6a56751ee2134c457d1938c49f81f6b80f862e: Status 404 returned error can't find the container with id c496f1ec107c7d4b88cf04362e6a56751ee2134c457d1938c49f81f6b80f862e Jan 27 09:14:57 crc kubenswrapper[4985]: W0127 09:14:57.386841 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ac9bfd8_ec34_4938_b325_949459bf4876.slice/crio-d0a76535a14cf92ddc5d142401811038f0d3605e8b8951847c791b98d40a5449.scope WatchSource:0}: Error finding container d0a76535a14cf92ddc5d142401811038f0d3605e8b8951847c791b98d40a5449: Status 404 returned error can't find the container with id d0a76535a14cf92ddc5d142401811038f0d3605e8b8951847c791b98d40a5449 Jan 27 09:14:57 crc kubenswrapper[4985]: W0127 09:14:57.392842 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1304eb5b_330b_4480_99e8_4e0389cac214.slice/crio-06a40ee2908dca5d24e56e88d206a8826f5aebcbaf0eba19bdb8bfd6ccdadba9 WatchSource:0}: Error finding container 06a40ee2908dca5d24e56e88d206a8826f5aebcbaf0eba19bdb8bfd6ccdadba9: Status 404 returned error can't find the container with id 
06a40ee2908dca5d24e56e88d206a8826f5aebcbaf0eba19bdb8bfd6ccdadba9 Jan 27 09:14:57 crc kubenswrapper[4985]: E0127 09:14:57.619020 4985 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf2548757_fd02_4c5a_9623_0b1148405dc9.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9edf8e3a_1b54_4391_bd6f_fce724acd66b.slice/crio-407c534bbbc8115511695cf3116bb388a3d0e423e975c5fb96580d44080e3809.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf2548757_fd02_4c5a_9623_0b1148405dc9.slice/crio-bff79c0552c944d46f48f254532bb01f0f90751b03fed3e8828cdf836253af0e\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod58162a9a_ce9b_41af_a664_a360c97d40af.slice/crio-29b18a79448943c2f8137a0abf99e12f6721b4815f55d6a2f49ff6ab91b6b1d9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod342d32a2_6e30_42d4_9f54_8e1ab315ae53.slice/crio-conmon-159fa101ae0fded3d47b9cc5e0a54f49851e3c0439e204c1fd3cbfd32d60ffda.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod342d32a2_6e30_42d4_9f54_8e1ab315ae53.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ac9bfd8_ec34_4938_b325_949459bf4876.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9edf8e3a_1b54_4391_bd6f_fce724acd66b.slice/crio-conmon-407c534bbbc8115511695cf3116bb388a3d0e423e975c5fb96580d44080e3809.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod58162a9a_ce9b_41af_a664_a360c97d40af.slice/crio-e0ce6034c93bb1ecc63bf1dbfd3f716d58781f5f51e2bd1349f5223d52b4075a\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc066dd2f_48d4_4f4f_935d_0e772678e610.slice/crio-conmon-c0c7e1753712389ebd0528734323af45a2441fb966cbcf871cf1260ca96d824f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod58162a9a_ce9b_41af_a664_a360c97d40af.slice/crio-conmon-29b18a79448943c2f8137a0abf99e12f6721b4815f55d6a2f49ff6ab91b6b1d9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc066dd2f_48d4_4f4f_935d_0e772678e610.slice/crio-c0c7e1753712389ebd0528734323af45a2441fb966cbcf871cf1260ca96d824f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ac9bfd8_ec34_4938_b325_949459bf4876.slice/crio-conmon-d0a76535a14cf92ddc5d142401811038f0d3605e8b8951847c791b98d40a5449.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod58162a9a_ce9b_41af_a664_a360c97d40af.slice\": RecentStats: unable to find data in memory cache]" Jan 27 09:14:57 crc kubenswrapper[4985]: I0127 09:14:57.779122 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:14:57 crc kubenswrapper[4985]: I0127 09:14:57.789576 4985 generic.go:334] "Generic (PLEG): container finished" podID="9edf8e3a-1b54-4391-bd6f-fce724acd66b" containerID="407c534bbbc8115511695cf3116bb388a3d0e423e975c5fb96580d44080e3809" exitCode=137 Jan 27 09:14:57 crc kubenswrapper[4985]: I0127 09:14:57.789685 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"9edf8e3a-1b54-4391-bd6f-fce724acd66b","Type":"ContainerDied","Data":"407c534bbbc8115511695cf3116bb388a3d0e423e975c5fb96580d44080e3809"} Jan 27 09:14:57 crc kubenswrapper[4985]: I0127 09:14:57.789752 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:14:57 crc kubenswrapper[4985]: I0127 09:14:57.789782 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"9edf8e3a-1b54-4391-bd6f-fce724acd66b","Type":"ContainerDied","Data":"00c99b05ed1f17a49ec32d5574937451564a790149fb733b11ed72d65ead00fd"} Jan 27 09:14:57 crc kubenswrapper[4985]: I0127 09:14:57.789824 4985 scope.go:117] "RemoveContainer" containerID="407c534bbbc8115511695cf3116bb388a3d0e423e975c5fb96580d44080e3809" Jan 27 09:14:57 crc kubenswrapper[4985]: I0127 09:14:57.796839 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 27 09:14:57 crc kubenswrapper[4985]: I0127 09:14:57.821367 4985 scope.go:117] "RemoveContainer" containerID="407c534bbbc8115511695cf3116bb388a3d0e423e975c5fb96580d44080e3809" Jan 27 09:14:57 crc kubenswrapper[4985]: E0127 09:14:57.823605 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"407c534bbbc8115511695cf3116bb388a3d0e423e975c5fb96580d44080e3809\": container with ID starting with 
407c534bbbc8115511695cf3116bb388a3d0e423e975c5fb96580d44080e3809 not found: ID does not exist" containerID="407c534bbbc8115511695cf3116bb388a3d0e423e975c5fb96580d44080e3809" Jan 27 09:14:57 crc kubenswrapper[4985]: I0127 09:14:57.823664 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"407c534bbbc8115511695cf3116bb388a3d0e423e975c5fb96580d44080e3809"} err="failed to get container status \"407c534bbbc8115511695cf3116bb388a3d0e423e975c5fb96580d44080e3809\": rpc error: code = NotFound desc = could not find container \"407c534bbbc8115511695cf3116bb388a3d0e423e975c5fb96580d44080e3809\": container with ID starting with 407c534bbbc8115511695cf3116bb388a3d0e423e975c5fb96580d44080e3809 not found: ID does not exist" Jan 27 09:14:57 crc kubenswrapper[4985]: I0127 09:14:57.915325 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9edf8e3a-1b54-4391-bd6f-fce724acd66b-combined-ca-bundle\") pod \"9edf8e3a-1b54-4391-bd6f-fce724acd66b\" (UID: \"9edf8e3a-1b54-4391-bd6f-fce724acd66b\") " Jan 27 09:14:57 crc kubenswrapper[4985]: I0127 09:14:57.915529 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9edf8e3a-1b54-4391-bd6f-fce724acd66b-config-data\") pod \"9edf8e3a-1b54-4391-bd6f-fce724acd66b\" (UID: \"9edf8e3a-1b54-4391-bd6f-fce724acd66b\") " Jan 27 09:14:57 crc kubenswrapper[4985]: I0127 09:14:57.915716 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6lqw\" (UniqueName: \"kubernetes.io/projected/9edf8e3a-1b54-4391-bd6f-fce724acd66b-kube-api-access-b6lqw\") pod \"9edf8e3a-1b54-4391-bd6f-fce724acd66b\" (UID: \"9edf8e3a-1b54-4391-bd6f-fce724acd66b\") " Jan 27 09:14:57 crc kubenswrapper[4985]: I0127 09:14:57.927799 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/9edf8e3a-1b54-4391-bd6f-fce724acd66b-kube-api-access-b6lqw" (OuterVolumeSpecName: "kube-api-access-b6lqw") pod "9edf8e3a-1b54-4391-bd6f-fce724acd66b" (UID: "9edf8e3a-1b54-4391-bd6f-fce724acd66b"). InnerVolumeSpecName "kube-api-access-b6lqw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:14:57 crc kubenswrapper[4985]: I0127 09:14:57.948791 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9edf8e3a-1b54-4391-bd6f-fce724acd66b-config-data" (OuterVolumeSpecName: "config-data") pod "9edf8e3a-1b54-4391-bd6f-fce724acd66b" (UID: "9edf8e3a-1b54-4391-bd6f-fce724acd66b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:14:57 crc kubenswrapper[4985]: I0127 09:14:57.951032 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9edf8e3a-1b54-4391-bd6f-fce724acd66b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9edf8e3a-1b54-4391-bd6f-fce724acd66b" (UID: "9edf8e3a-1b54-4391-bd6f-fce724acd66b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:14:58 crc kubenswrapper[4985]: I0127 09:14:58.018544 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9edf8e3a-1b54-4391-bd6f-fce724acd66b-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:58 crc kubenswrapper[4985]: I0127 09:14:58.019042 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6lqw\" (UniqueName: \"kubernetes.io/projected/9edf8e3a-1b54-4391-bd6f-fce724acd66b-kube-api-access-b6lqw\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:58 crc kubenswrapper[4985]: I0127 09:14:58.019061 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9edf8e3a-1b54-4391-bd6f-fce724acd66b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:58 crc kubenswrapper[4985]: I0127 09:14:58.127617 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 09:14:58 crc kubenswrapper[4985]: I0127 09:14:58.138594 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 09:14:58 crc kubenswrapper[4985]: I0127 09:14:58.151479 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 09:14:58 crc kubenswrapper[4985]: E0127 09:14:58.152072 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9edf8e3a-1b54-4391-bd6f-fce724acd66b" containerName="nova-cell1-novncproxy-novncproxy" Jan 27 09:14:58 crc kubenswrapper[4985]: I0127 09:14:58.152091 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="9edf8e3a-1b54-4391-bd6f-fce724acd66b" containerName="nova-cell1-novncproxy-novncproxy" Jan 27 09:14:58 crc kubenswrapper[4985]: I0127 09:14:58.152779 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="9edf8e3a-1b54-4391-bd6f-fce724acd66b" containerName="nova-cell1-novncproxy-novncproxy" Jan 27 
09:14:58 crc kubenswrapper[4985]: I0127 09:14:58.153510 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:14:58 crc kubenswrapper[4985]: I0127 09:14:58.156736 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 27 09:14:58 crc kubenswrapper[4985]: I0127 09:14:58.156787 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 27 09:14:58 crc kubenswrapper[4985]: I0127 09:14:58.159335 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 27 09:14:58 crc kubenswrapper[4985]: I0127 09:14:58.165727 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 09:14:58 crc kubenswrapper[4985]: I0127 09:14:58.222555 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e76b26e-1299-417d-8f51-f4c1bef4da0c-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3e76b26e-1299-417d-8f51-f4c1bef4da0c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:14:58 crc kubenswrapper[4985]: I0127 09:14:58.222623 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmtcm\" (UniqueName: \"kubernetes.io/projected/3e76b26e-1299-417d-8f51-f4c1bef4da0c-kube-api-access-qmtcm\") pod \"nova-cell1-novncproxy-0\" (UID: \"3e76b26e-1299-417d-8f51-f4c1bef4da0c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:14:58 crc kubenswrapper[4985]: I0127 09:14:58.222665 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e76b26e-1299-417d-8f51-f4c1bef4da0c-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" 
(UID: \"3e76b26e-1299-417d-8f51-f4c1bef4da0c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:14:58 crc kubenswrapper[4985]: I0127 09:14:58.222683 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e76b26e-1299-417d-8f51-f4c1bef4da0c-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3e76b26e-1299-417d-8f51-f4c1bef4da0c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:14:58 crc kubenswrapper[4985]: I0127 09:14:58.222709 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e76b26e-1299-417d-8f51-f4c1bef4da0c-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3e76b26e-1299-417d-8f51-f4c1bef4da0c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:14:58 crc kubenswrapper[4985]: I0127 09:14:58.324024 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e76b26e-1299-417d-8f51-f4c1bef4da0c-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3e76b26e-1299-417d-8f51-f4c1bef4da0c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:14:58 crc kubenswrapper[4985]: I0127 09:14:58.324077 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmtcm\" (UniqueName: \"kubernetes.io/projected/3e76b26e-1299-417d-8f51-f4c1bef4da0c-kube-api-access-qmtcm\") pod \"nova-cell1-novncproxy-0\" (UID: \"3e76b26e-1299-417d-8f51-f4c1bef4da0c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:14:58 crc kubenswrapper[4985]: I0127 09:14:58.324129 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e76b26e-1299-417d-8f51-f4c1bef4da0c-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"3e76b26e-1299-417d-8f51-f4c1bef4da0c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:14:58 crc kubenswrapper[4985]: I0127 09:14:58.324154 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e76b26e-1299-417d-8f51-f4c1bef4da0c-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3e76b26e-1299-417d-8f51-f4c1bef4da0c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:14:58 crc kubenswrapper[4985]: I0127 09:14:58.324183 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e76b26e-1299-417d-8f51-f4c1bef4da0c-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3e76b26e-1299-417d-8f51-f4c1bef4da0c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:14:58 crc kubenswrapper[4985]: I0127 09:14:58.329847 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e76b26e-1299-417d-8f51-f4c1bef4da0c-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3e76b26e-1299-417d-8f51-f4c1bef4da0c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:14:58 crc kubenswrapper[4985]: I0127 09:14:58.330392 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e76b26e-1299-417d-8f51-f4c1bef4da0c-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3e76b26e-1299-417d-8f51-f4c1bef4da0c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:14:58 crc kubenswrapper[4985]: I0127 09:14:58.330776 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e76b26e-1299-417d-8f51-f4c1bef4da0c-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3e76b26e-1299-417d-8f51-f4c1bef4da0c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:14:58 crc 
kubenswrapper[4985]: I0127 09:14:58.333675 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e76b26e-1299-417d-8f51-f4c1bef4da0c-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3e76b26e-1299-417d-8f51-f4c1bef4da0c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:14:58 crc kubenswrapper[4985]: I0127 09:14:58.343577 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmtcm\" (UniqueName: \"kubernetes.io/projected/3e76b26e-1299-417d-8f51-f4c1bef4da0c-kube-api-access-qmtcm\") pod \"nova-cell1-novncproxy-0\" (UID: \"3e76b26e-1299-417d-8f51-f4c1bef4da0c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:14:58 crc kubenswrapper[4985]: I0127 09:14:58.461812 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9edf8e3a-1b54-4391-bd6f-fce724acd66b" path="/var/lib/kubelet/pods/9edf8e3a-1b54-4391-bd6f-fce724acd66b/volumes" Jan 27 09:14:58 crc kubenswrapper[4985]: I0127 09:14:58.471154 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:14:58 crc kubenswrapper[4985]: I0127 09:14:58.910159 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 09:14:59 crc kubenswrapper[4985]: I0127 09:14:59.830070 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3e76b26e-1299-417d-8f51-f4c1bef4da0c","Type":"ContainerStarted","Data":"44a6bb5270191cf8c9e47ea67ab69486f31cd3774fe742386777c2616ab19764"} Jan 27 09:14:59 crc kubenswrapper[4985]: I0127 09:14:59.830118 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3e76b26e-1299-417d-8f51-f4c1bef4da0c","Type":"ContainerStarted","Data":"68dd8a8e84bde5f83244ee021ac905ddba9a04bc8541f8b6005820bc1ac8e60d"} Jan 27 09:14:59 crc kubenswrapper[4985]: I0127 09:14:59.856095 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=1.856055811 podStartE2EDuration="1.856055811s" podCreationTimestamp="2026-01-27 09:14:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:14:59.850192431 +0000 UTC m=+1284.141287272" watchObservedRunningTime="2026-01-27 09:14:59.856055811 +0000 UTC m=+1284.147150652" Jan 27 09:15:00 crc kubenswrapper[4985]: I0127 09:15:00.146825 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491755-bk482"] Jan 27 09:15:00 crc kubenswrapper[4985]: I0127 09:15:00.151258 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491755-bk482" Jan 27 09:15:00 crc kubenswrapper[4985]: I0127 09:15:00.156218 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491755-bk482"] Jan 27 09:15:00 crc kubenswrapper[4985]: I0127 09:15:00.156409 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 09:15:00 crc kubenswrapper[4985]: I0127 09:15:00.157847 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 09:15:00 crc kubenswrapper[4985]: I0127 09:15:00.262477 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4x2w\" (UniqueName: \"kubernetes.io/projected/6d32ac8d-acc7-48b6-89cd-5b94e0d94718-kube-api-access-f4x2w\") pod \"collect-profiles-29491755-bk482\" (UID: \"6d32ac8d-acc7-48b6-89cd-5b94e0d94718\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491755-bk482" Jan 27 09:15:00 crc kubenswrapper[4985]: I0127 09:15:00.262801 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6d32ac8d-acc7-48b6-89cd-5b94e0d94718-config-volume\") pod \"collect-profiles-29491755-bk482\" (UID: \"6d32ac8d-acc7-48b6-89cd-5b94e0d94718\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491755-bk482" Jan 27 09:15:00 crc kubenswrapper[4985]: I0127 09:15:00.263241 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6d32ac8d-acc7-48b6-89cd-5b94e0d94718-secret-volume\") pod \"collect-profiles-29491755-bk482\" (UID: \"6d32ac8d-acc7-48b6-89cd-5b94e0d94718\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29491755-bk482" Jan 27 09:15:00 crc kubenswrapper[4985]: I0127 09:15:00.365111 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6d32ac8d-acc7-48b6-89cd-5b94e0d94718-secret-volume\") pod \"collect-profiles-29491755-bk482\" (UID: \"6d32ac8d-acc7-48b6-89cd-5b94e0d94718\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491755-bk482" Jan 27 09:15:00 crc kubenswrapper[4985]: I0127 09:15:00.365192 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4x2w\" (UniqueName: \"kubernetes.io/projected/6d32ac8d-acc7-48b6-89cd-5b94e0d94718-kube-api-access-f4x2w\") pod \"collect-profiles-29491755-bk482\" (UID: \"6d32ac8d-acc7-48b6-89cd-5b94e0d94718\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491755-bk482" Jan 27 09:15:00 crc kubenswrapper[4985]: I0127 09:15:00.365240 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6d32ac8d-acc7-48b6-89cd-5b94e0d94718-config-volume\") pod \"collect-profiles-29491755-bk482\" (UID: \"6d32ac8d-acc7-48b6-89cd-5b94e0d94718\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491755-bk482" Jan 27 09:15:00 crc kubenswrapper[4985]: I0127 09:15:00.366947 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6d32ac8d-acc7-48b6-89cd-5b94e0d94718-config-volume\") pod \"collect-profiles-29491755-bk482\" (UID: \"6d32ac8d-acc7-48b6-89cd-5b94e0d94718\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491755-bk482" Jan 27 09:15:00 crc kubenswrapper[4985]: I0127 09:15:00.373834 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/6d32ac8d-acc7-48b6-89cd-5b94e0d94718-secret-volume\") pod \"collect-profiles-29491755-bk482\" (UID: \"6d32ac8d-acc7-48b6-89cd-5b94e0d94718\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491755-bk482" Jan 27 09:15:00 crc kubenswrapper[4985]: I0127 09:15:00.383659 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4x2w\" (UniqueName: \"kubernetes.io/projected/6d32ac8d-acc7-48b6-89cd-5b94e0d94718-kube-api-access-f4x2w\") pod \"collect-profiles-29491755-bk482\" (UID: \"6d32ac8d-acc7-48b6-89cd-5b94e0d94718\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491755-bk482" Jan 27 09:15:00 crc kubenswrapper[4985]: I0127 09:15:00.477183 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491755-bk482" Jan 27 09:15:00 crc kubenswrapper[4985]: I0127 09:15:00.912459 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491755-bk482"] Jan 27 09:15:01 crc kubenswrapper[4985]: I0127 09:15:01.046256 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 27 09:15:01 crc kubenswrapper[4985]: I0127 09:15:01.046454 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 27 09:15:01 crc kubenswrapper[4985]: I0127 09:15:01.046903 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 27 09:15:01 crc kubenswrapper[4985]: I0127 09:15:01.046934 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 27 09:15:01 crc kubenswrapper[4985]: I0127 09:15:01.049295 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 27 09:15:01 crc kubenswrapper[4985]: I0127 09:15:01.049436 4985 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 27 09:15:01 crc kubenswrapper[4985]: I0127 09:15:01.247915 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-fcd6f8f8f-snltk"] Jan 27 09:15:01 crc kubenswrapper[4985]: I0127 09:15:01.250191 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fcd6f8f8f-snltk" Jan 27 09:15:01 crc kubenswrapper[4985]: I0127 09:15:01.270694 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fcd6f8f8f-snltk"] Jan 27 09:15:01 crc kubenswrapper[4985]: I0127 09:15:01.388672 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f7f4d89-0251-4299-b1e5-24f0f160ba5c-dns-svc\") pod \"dnsmasq-dns-fcd6f8f8f-snltk\" (UID: \"7f7f4d89-0251-4299-b1e5-24f0f160ba5c\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-snltk" Jan 27 09:15:01 crc kubenswrapper[4985]: I0127 09:15:01.388752 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f7f4d89-0251-4299-b1e5-24f0f160ba5c-ovsdbserver-sb\") pod \"dnsmasq-dns-fcd6f8f8f-snltk\" (UID: \"7f7f4d89-0251-4299-b1e5-24f0f160ba5c\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-snltk" Jan 27 09:15:01 crc kubenswrapper[4985]: I0127 09:15:01.388782 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f7f4d89-0251-4299-b1e5-24f0f160ba5c-config\") pod \"dnsmasq-dns-fcd6f8f8f-snltk\" (UID: \"7f7f4d89-0251-4299-b1e5-24f0f160ba5c\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-snltk" Jan 27 09:15:01 crc kubenswrapper[4985]: I0127 09:15:01.388808 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/7f7f4d89-0251-4299-b1e5-24f0f160ba5c-dns-swift-storage-0\") pod \"dnsmasq-dns-fcd6f8f8f-snltk\" (UID: \"7f7f4d89-0251-4299-b1e5-24f0f160ba5c\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-snltk" Jan 27 09:15:01 crc kubenswrapper[4985]: I0127 09:15:01.388833 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mz7jm\" (UniqueName: \"kubernetes.io/projected/7f7f4d89-0251-4299-b1e5-24f0f160ba5c-kube-api-access-mz7jm\") pod \"dnsmasq-dns-fcd6f8f8f-snltk\" (UID: \"7f7f4d89-0251-4299-b1e5-24f0f160ba5c\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-snltk" Jan 27 09:15:01 crc kubenswrapper[4985]: I0127 09:15:01.388920 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f7f4d89-0251-4299-b1e5-24f0f160ba5c-ovsdbserver-nb\") pod \"dnsmasq-dns-fcd6f8f8f-snltk\" (UID: \"7f7f4d89-0251-4299-b1e5-24f0f160ba5c\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-snltk" Jan 27 09:15:01 crc kubenswrapper[4985]: I0127 09:15:01.490144 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f7f4d89-0251-4299-b1e5-24f0f160ba5c-dns-svc\") pod \"dnsmasq-dns-fcd6f8f8f-snltk\" (UID: \"7f7f4d89-0251-4299-b1e5-24f0f160ba5c\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-snltk" Jan 27 09:15:01 crc kubenswrapper[4985]: I0127 09:15:01.490219 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f7f4d89-0251-4299-b1e5-24f0f160ba5c-ovsdbserver-sb\") pod \"dnsmasq-dns-fcd6f8f8f-snltk\" (UID: \"7f7f4d89-0251-4299-b1e5-24f0f160ba5c\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-snltk" Jan 27 09:15:01 crc kubenswrapper[4985]: I0127 09:15:01.490242 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7f7f4d89-0251-4299-b1e5-24f0f160ba5c-config\") pod \"dnsmasq-dns-fcd6f8f8f-snltk\" (UID: \"7f7f4d89-0251-4299-b1e5-24f0f160ba5c\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-snltk" Jan 27 09:15:01 crc kubenswrapper[4985]: I0127 09:15:01.490267 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7f7f4d89-0251-4299-b1e5-24f0f160ba5c-dns-swift-storage-0\") pod \"dnsmasq-dns-fcd6f8f8f-snltk\" (UID: \"7f7f4d89-0251-4299-b1e5-24f0f160ba5c\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-snltk" Jan 27 09:15:01 crc kubenswrapper[4985]: I0127 09:15:01.490292 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mz7jm\" (UniqueName: \"kubernetes.io/projected/7f7f4d89-0251-4299-b1e5-24f0f160ba5c-kube-api-access-mz7jm\") pod \"dnsmasq-dns-fcd6f8f8f-snltk\" (UID: \"7f7f4d89-0251-4299-b1e5-24f0f160ba5c\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-snltk" Jan 27 09:15:01 crc kubenswrapper[4985]: I0127 09:15:01.490372 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f7f4d89-0251-4299-b1e5-24f0f160ba5c-ovsdbserver-nb\") pod \"dnsmasq-dns-fcd6f8f8f-snltk\" (UID: \"7f7f4d89-0251-4299-b1e5-24f0f160ba5c\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-snltk" Jan 27 09:15:01 crc kubenswrapper[4985]: I0127 09:15:01.491223 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f7f4d89-0251-4299-b1e5-24f0f160ba5c-ovsdbserver-nb\") pod \"dnsmasq-dns-fcd6f8f8f-snltk\" (UID: \"7f7f4d89-0251-4299-b1e5-24f0f160ba5c\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-snltk" Jan 27 09:15:01 crc kubenswrapper[4985]: I0127 09:15:01.491788 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/7f7f4d89-0251-4299-b1e5-24f0f160ba5c-dns-svc\") pod \"dnsmasq-dns-fcd6f8f8f-snltk\" (UID: \"7f7f4d89-0251-4299-b1e5-24f0f160ba5c\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-snltk" Jan 27 09:15:01 crc kubenswrapper[4985]: I0127 09:15:01.492308 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f7f4d89-0251-4299-b1e5-24f0f160ba5c-ovsdbserver-sb\") pod \"dnsmasq-dns-fcd6f8f8f-snltk\" (UID: \"7f7f4d89-0251-4299-b1e5-24f0f160ba5c\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-snltk" Jan 27 09:15:01 crc kubenswrapper[4985]: I0127 09:15:01.493059 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f7f4d89-0251-4299-b1e5-24f0f160ba5c-config\") pod \"dnsmasq-dns-fcd6f8f8f-snltk\" (UID: \"7f7f4d89-0251-4299-b1e5-24f0f160ba5c\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-snltk" Jan 27 09:15:01 crc kubenswrapper[4985]: I0127 09:15:01.493569 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7f7f4d89-0251-4299-b1e5-24f0f160ba5c-dns-swift-storage-0\") pod \"dnsmasq-dns-fcd6f8f8f-snltk\" (UID: \"7f7f4d89-0251-4299-b1e5-24f0f160ba5c\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-snltk" Jan 27 09:15:01 crc kubenswrapper[4985]: I0127 09:15:01.534349 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mz7jm\" (UniqueName: \"kubernetes.io/projected/7f7f4d89-0251-4299-b1e5-24f0f160ba5c-kube-api-access-mz7jm\") pod \"dnsmasq-dns-fcd6f8f8f-snltk\" (UID: \"7f7f4d89-0251-4299-b1e5-24f0f160ba5c\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-snltk" Jan 27 09:15:01 crc kubenswrapper[4985]: I0127 09:15:01.584262 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fcd6f8f8f-snltk" Jan 27 09:15:01 crc kubenswrapper[4985]: I0127 09:15:01.850627 4985 generic.go:334] "Generic (PLEG): container finished" podID="6d32ac8d-acc7-48b6-89cd-5b94e0d94718" containerID="6cc28b170f0dc10d733d622f44ac63e57979ec2286e4c62d79aa61970c7e390f" exitCode=0 Jan 27 09:15:01 crc kubenswrapper[4985]: I0127 09:15:01.850785 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491755-bk482" event={"ID":"6d32ac8d-acc7-48b6-89cd-5b94e0d94718","Type":"ContainerDied","Data":"6cc28b170f0dc10d733d622f44ac63e57979ec2286e4c62d79aa61970c7e390f"} Jan 27 09:15:01 crc kubenswrapper[4985]: I0127 09:15:01.851068 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491755-bk482" event={"ID":"6d32ac8d-acc7-48b6-89cd-5b94e0d94718","Type":"ContainerStarted","Data":"ea256055197d3a25c3a1ce5d8b803490db416cbfd234ad20e560fd6af40829c8"} Jan 27 09:15:02 crc kubenswrapper[4985]: I0127 09:15:02.104286 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fcd6f8f8f-snltk"] Jan 27 09:15:02 crc kubenswrapper[4985]: W0127 09:15:02.111417 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f7f4d89_0251_4299_b1e5_24f0f160ba5c.slice/crio-1c7ffc4e8d7c7dd1290d9f002ffeffc4406746a3e91d1d169aa53fda60b9c2f7 WatchSource:0}: Error finding container 1c7ffc4e8d7c7dd1290d9f002ffeffc4406746a3e91d1d169aa53fda60b9c2f7: Status 404 returned error can't find the container with id 1c7ffc4e8d7c7dd1290d9f002ffeffc4406746a3e91d1d169aa53fda60b9c2f7 Jan 27 09:15:02 crc kubenswrapper[4985]: I0127 09:15:02.865996 4985 generic.go:334] "Generic (PLEG): container finished" podID="7f7f4d89-0251-4299-b1e5-24f0f160ba5c" containerID="edb6a11fc2d4592aaa4b4c491383e589966351075164f9353370937e89ed2102" exitCode=0 Jan 27 09:15:02 crc 
kubenswrapper[4985]: I0127 09:15:02.866219 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcd6f8f8f-snltk" event={"ID":"7f7f4d89-0251-4299-b1e5-24f0f160ba5c","Type":"ContainerDied","Data":"edb6a11fc2d4592aaa4b4c491383e589966351075164f9353370937e89ed2102"} Jan 27 09:15:02 crc kubenswrapper[4985]: I0127 09:15:02.866670 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcd6f8f8f-snltk" event={"ID":"7f7f4d89-0251-4299-b1e5-24f0f160ba5c","Type":"ContainerStarted","Data":"1c7ffc4e8d7c7dd1290d9f002ffeffc4406746a3e91d1d169aa53fda60b9c2f7"} Jan 27 09:15:03 crc kubenswrapper[4985]: I0127 09:15:03.334808 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491755-bk482" Jan 27 09:15:03 crc kubenswrapper[4985]: I0127 09:15:03.439613 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f4x2w\" (UniqueName: \"kubernetes.io/projected/6d32ac8d-acc7-48b6-89cd-5b94e0d94718-kube-api-access-f4x2w\") pod \"6d32ac8d-acc7-48b6-89cd-5b94e0d94718\" (UID: \"6d32ac8d-acc7-48b6-89cd-5b94e0d94718\") " Jan 27 09:15:03 crc kubenswrapper[4985]: I0127 09:15:03.439734 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6d32ac8d-acc7-48b6-89cd-5b94e0d94718-secret-volume\") pod \"6d32ac8d-acc7-48b6-89cd-5b94e0d94718\" (UID: \"6d32ac8d-acc7-48b6-89cd-5b94e0d94718\") " Jan 27 09:15:03 crc kubenswrapper[4985]: I0127 09:15:03.439874 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6d32ac8d-acc7-48b6-89cd-5b94e0d94718-config-volume\") pod \"6d32ac8d-acc7-48b6-89cd-5b94e0d94718\" (UID: \"6d32ac8d-acc7-48b6-89cd-5b94e0d94718\") " Jan 27 09:15:03 crc kubenswrapper[4985]: I0127 09:15:03.440495 4985 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d32ac8d-acc7-48b6-89cd-5b94e0d94718-config-volume" (OuterVolumeSpecName: "config-volume") pod "6d32ac8d-acc7-48b6-89cd-5b94e0d94718" (UID: "6d32ac8d-acc7-48b6-89cd-5b94e0d94718"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:15:03 crc kubenswrapper[4985]: I0127 09:15:03.444210 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d32ac8d-acc7-48b6-89cd-5b94e0d94718-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6d32ac8d-acc7-48b6-89cd-5b94e0d94718" (UID: "6d32ac8d-acc7-48b6-89cd-5b94e0d94718"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:15:03 crc kubenswrapper[4985]: I0127 09:15:03.444490 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d32ac8d-acc7-48b6-89cd-5b94e0d94718-kube-api-access-f4x2w" (OuterVolumeSpecName: "kube-api-access-f4x2w") pod "6d32ac8d-acc7-48b6-89cd-5b94e0d94718" (UID: "6d32ac8d-acc7-48b6-89cd-5b94e0d94718"). InnerVolumeSpecName "kube-api-access-f4x2w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:15:03 crc kubenswrapper[4985]: I0127 09:15:03.471407 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:15:03 crc kubenswrapper[4985]: I0127 09:15:03.543633 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f4x2w\" (UniqueName: \"kubernetes.io/projected/6d32ac8d-acc7-48b6-89cd-5b94e0d94718-kube-api-access-f4x2w\") on node \"crc\" DevicePath \"\"" Jan 27 09:15:03 crc kubenswrapper[4985]: I0127 09:15:03.543672 4985 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6d32ac8d-acc7-48b6-89cd-5b94e0d94718-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 09:15:03 crc kubenswrapper[4985]: I0127 09:15:03.543685 4985 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6d32ac8d-acc7-48b6-89cd-5b94e0d94718-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 09:15:03 crc kubenswrapper[4985]: I0127 09:15:03.831199 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 09:15:03 crc kubenswrapper[4985]: I0127 09:15:03.831641 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1f02831a-c9ec-41ba-aabb-ac9557d82899" containerName="ceilometer-central-agent" containerID="cri-o://803ce15e6f511ee278d47a58b7973beb894bcfed820b20202797144ceb5c9002" gracePeriod=30 Jan 27 09:15:03 crc kubenswrapper[4985]: I0127 09:15:03.831769 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1f02831a-c9ec-41ba-aabb-ac9557d82899" containerName="sg-core" containerID="cri-o://4ef2b824f0ab3d1a98d5e133b32d72f98494e3386ccac764737e531cee42dc38" gracePeriod=30 Jan 27 09:15:03 crc kubenswrapper[4985]: I0127 09:15:03.831833 4985 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openstack/ceilometer-0" podUID="1f02831a-c9ec-41ba-aabb-ac9557d82899" containerName="ceilometer-notification-agent" containerID="cri-o://29ecb5f13cbff4464adc52fc4d89a3390ca4788c971de74be30de4377341c7e5" gracePeriod=30 Jan 27 09:15:03 crc kubenswrapper[4985]: I0127 09:15:03.832029 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1f02831a-c9ec-41ba-aabb-ac9557d82899" containerName="proxy-httpd" containerID="cri-o://c02e9bc8d11a5a3e8d3b660efb03e6edd0a78117da4d213f61d76a06b8090578" gracePeriod=30 Jan 27 09:15:03 crc kubenswrapper[4985]: I0127 09:15:03.875956 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcd6f8f8f-snltk" event={"ID":"7f7f4d89-0251-4299-b1e5-24f0f160ba5c","Type":"ContainerStarted","Data":"d696e52692aeff4b4cf49ea8443e8655b2e8d270dace94e86cc0537da252b2c1"} Jan 27 09:15:03 crc kubenswrapper[4985]: I0127 09:15:03.876223 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-fcd6f8f8f-snltk" Jan 27 09:15:03 crc kubenswrapper[4985]: I0127 09:15:03.881069 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491755-bk482" event={"ID":"6d32ac8d-acc7-48b6-89cd-5b94e0d94718","Type":"ContainerDied","Data":"ea256055197d3a25c3a1ce5d8b803490db416cbfd234ad20e560fd6af40829c8"} Jan 27 09:15:03 crc kubenswrapper[4985]: I0127 09:15:03.881148 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea256055197d3a25c3a1ce5d8b803490db416cbfd234ad20e560fd6af40829c8" Jan 27 09:15:03 crc kubenswrapper[4985]: I0127 09:15:03.881244 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491755-bk482" Jan 27 09:15:03 crc kubenswrapper[4985]: I0127 09:15:03.904708 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-fcd6f8f8f-snltk" podStartSLOduration=2.904675357 podStartE2EDuration="2.904675357s" podCreationTimestamp="2026-01-27 09:15:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:15:03.895417603 +0000 UTC m=+1288.186512454" watchObservedRunningTime="2026-01-27 09:15:03.904675357 +0000 UTC m=+1288.195770198" Jan 27 09:15:03 crc kubenswrapper[4985]: I0127 09:15:03.941811 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="1f02831a-c9ec-41ba-aabb-ac9557d82899" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.206:3000/\": read tcp 10.217.0.2:46012->10.217.0.206:3000: read: connection reset by peer" Jan 27 09:15:04 crc kubenswrapper[4985]: I0127 09:15:04.001656 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 09:15:04 crc kubenswrapper[4985]: I0127 09:15:04.002351 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="20d90afd-a50e-4295-b31b-7d9c05e358f6" containerName="nova-api-log" containerID="cri-o://545aa7cb487daa1963918ad170ed504a432e451c6de04b236dc6776a4fc256a5" gracePeriod=30 Jan 27 09:15:04 crc kubenswrapper[4985]: I0127 09:15:04.002565 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="20d90afd-a50e-4295-b31b-7d9c05e358f6" containerName="nova-api-api" containerID="cri-o://0882068f7f36ddc7a1d6c501da09975c4e3e48ea22c8290253027122f3fd03b9" gracePeriod=30 Jan 27 09:15:04 crc kubenswrapper[4985]: I0127 09:15:04.893650 4985 generic.go:334] "Generic (PLEG): container finished" 
podID="20d90afd-a50e-4295-b31b-7d9c05e358f6" containerID="545aa7cb487daa1963918ad170ed504a432e451c6de04b236dc6776a4fc256a5" exitCode=143 Jan 27 09:15:04 crc kubenswrapper[4985]: I0127 09:15:04.893723 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"20d90afd-a50e-4295-b31b-7d9c05e358f6","Type":"ContainerDied","Data":"545aa7cb487daa1963918ad170ed504a432e451c6de04b236dc6776a4fc256a5"} Jan 27 09:15:04 crc kubenswrapper[4985]: I0127 09:15:04.896614 4985 generic.go:334] "Generic (PLEG): container finished" podID="1f02831a-c9ec-41ba-aabb-ac9557d82899" containerID="c02e9bc8d11a5a3e8d3b660efb03e6edd0a78117da4d213f61d76a06b8090578" exitCode=0 Jan 27 09:15:04 crc kubenswrapper[4985]: I0127 09:15:04.896654 4985 generic.go:334] "Generic (PLEG): container finished" podID="1f02831a-c9ec-41ba-aabb-ac9557d82899" containerID="4ef2b824f0ab3d1a98d5e133b32d72f98494e3386ccac764737e531cee42dc38" exitCode=2 Jan 27 09:15:04 crc kubenswrapper[4985]: I0127 09:15:04.896663 4985 generic.go:334] "Generic (PLEG): container finished" podID="1f02831a-c9ec-41ba-aabb-ac9557d82899" containerID="803ce15e6f511ee278d47a58b7973beb894bcfed820b20202797144ceb5c9002" exitCode=0 Jan 27 09:15:04 crc kubenswrapper[4985]: I0127 09:15:04.897529 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1f02831a-c9ec-41ba-aabb-ac9557d82899","Type":"ContainerDied","Data":"c02e9bc8d11a5a3e8d3b660efb03e6edd0a78117da4d213f61d76a06b8090578"} Jan 27 09:15:04 crc kubenswrapper[4985]: I0127 09:15:04.897563 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1f02831a-c9ec-41ba-aabb-ac9557d82899","Type":"ContainerDied","Data":"4ef2b824f0ab3d1a98d5e133b32d72f98494e3386ccac764737e531cee42dc38"} Jan 27 09:15:04 crc kubenswrapper[4985]: I0127 09:15:04.897573 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"1f02831a-c9ec-41ba-aabb-ac9557d82899","Type":"ContainerDied","Data":"803ce15e6f511ee278d47a58b7973beb894bcfed820b20202797144ceb5c9002"} Jan 27 09:15:05 crc kubenswrapper[4985]: I0127 09:15:05.911576 4985 generic.go:334] "Generic (PLEG): container finished" podID="1f02831a-c9ec-41ba-aabb-ac9557d82899" containerID="29ecb5f13cbff4464adc52fc4d89a3390ca4788c971de74be30de4377341c7e5" exitCode=0 Jan 27 09:15:05 crc kubenswrapper[4985]: I0127 09:15:05.911673 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1f02831a-c9ec-41ba-aabb-ac9557d82899","Type":"ContainerDied","Data":"29ecb5f13cbff4464adc52fc4d89a3390ca4788c971de74be30de4377341c7e5"} Jan 27 09:15:05 crc kubenswrapper[4985]: I0127 09:15:05.912163 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1f02831a-c9ec-41ba-aabb-ac9557d82899","Type":"ContainerDied","Data":"4fe5cc291f2eb6395dcd949fcbceafb5bda67a42abb10fc78fecd81e9d993fbb"} Jan 27 09:15:05 crc kubenswrapper[4985]: I0127 09:15:05.912189 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4fe5cc291f2eb6395dcd949fcbceafb5bda67a42abb10fc78fecd81e9d993fbb" Jan 27 09:15:05 crc kubenswrapper[4985]: I0127 09:15:05.937293 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 09:15:06 crc kubenswrapper[4985]: I0127 09:15:06.009090 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f02831a-c9ec-41ba-aabb-ac9557d82899-config-data\") pod \"1f02831a-c9ec-41ba-aabb-ac9557d82899\" (UID: \"1f02831a-c9ec-41ba-aabb-ac9557d82899\") " Jan 27 09:15:06 crc kubenswrapper[4985]: I0127 09:15:06.009159 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1f02831a-c9ec-41ba-aabb-ac9557d82899-run-httpd\") pod \"1f02831a-c9ec-41ba-aabb-ac9557d82899\" (UID: \"1f02831a-c9ec-41ba-aabb-ac9557d82899\") " Jan 27 09:15:06 crc kubenswrapper[4985]: I0127 09:15:06.009250 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f02831a-c9ec-41ba-aabb-ac9557d82899-scripts\") pod \"1f02831a-c9ec-41ba-aabb-ac9557d82899\" (UID: \"1f02831a-c9ec-41ba-aabb-ac9557d82899\") " Jan 27 09:15:06 crc kubenswrapper[4985]: I0127 09:15:06.009321 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f02831a-c9ec-41ba-aabb-ac9557d82899-combined-ca-bundle\") pod \"1f02831a-c9ec-41ba-aabb-ac9557d82899\" (UID: \"1f02831a-c9ec-41ba-aabb-ac9557d82899\") " Jan 27 09:15:06 crc kubenswrapper[4985]: I0127 09:15:06.009355 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gnfqh\" (UniqueName: \"kubernetes.io/projected/1f02831a-c9ec-41ba-aabb-ac9557d82899-kube-api-access-gnfqh\") pod \"1f02831a-c9ec-41ba-aabb-ac9557d82899\" (UID: \"1f02831a-c9ec-41ba-aabb-ac9557d82899\") " Jan 27 09:15:06 crc kubenswrapper[4985]: I0127 09:15:06.009433 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/1f02831a-c9ec-41ba-aabb-ac9557d82899-sg-core-conf-yaml\") pod \"1f02831a-c9ec-41ba-aabb-ac9557d82899\" (UID: \"1f02831a-c9ec-41ba-aabb-ac9557d82899\") " Jan 27 09:15:06 crc kubenswrapper[4985]: I0127 09:15:06.009472 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1f02831a-c9ec-41ba-aabb-ac9557d82899-log-httpd\") pod \"1f02831a-c9ec-41ba-aabb-ac9557d82899\" (UID: \"1f02831a-c9ec-41ba-aabb-ac9557d82899\") " Jan 27 09:15:06 crc kubenswrapper[4985]: I0127 09:15:06.009997 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f02831a-c9ec-41ba-aabb-ac9557d82899-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "1f02831a-c9ec-41ba-aabb-ac9557d82899" (UID: "1f02831a-c9ec-41ba-aabb-ac9557d82899"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:15:06 crc kubenswrapper[4985]: I0127 09:15:06.010839 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f02831a-c9ec-41ba-aabb-ac9557d82899-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "1f02831a-c9ec-41ba-aabb-ac9557d82899" (UID: "1f02831a-c9ec-41ba-aabb-ac9557d82899"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:15:06 crc kubenswrapper[4985]: I0127 09:15:06.011451 4985 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1f02831a-c9ec-41ba-aabb-ac9557d82899-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 09:15:06 crc kubenswrapper[4985]: I0127 09:15:06.011477 4985 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1f02831a-c9ec-41ba-aabb-ac9557d82899-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 09:15:06 crc kubenswrapper[4985]: I0127 09:15:06.014861 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f02831a-c9ec-41ba-aabb-ac9557d82899-kube-api-access-gnfqh" (OuterVolumeSpecName: "kube-api-access-gnfqh") pod "1f02831a-c9ec-41ba-aabb-ac9557d82899" (UID: "1f02831a-c9ec-41ba-aabb-ac9557d82899"). InnerVolumeSpecName "kube-api-access-gnfqh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:15:06 crc kubenswrapper[4985]: I0127 09:15:06.015220 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f02831a-c9ec-41ba-aabb-ac9557d82899-scripts" (OuterVolumeSpecName: "scripts") pod "1f02831a-c9ec-41ba-aabb-ac9557d82899" (UID: "1f02831a-c9ec-41ba-aabb-ac9557d82899"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:15:06 crc kubenswrapper[4985]: I0127 09:15:06.040705 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f02831a-c9ec-41ba-aabb-ac9557d82899-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "1f02831a-c9ec-41ba-aabb-ac9557d82899" (UID: "1f02831a-c9ec-41ba-aabb-ac9557d82899"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:15:06 crc kubenswrapper[4985]: I0127 09:15:06.088466 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f02831a-c9ec-41ba-aabb-ac9557d82899-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1f02831a-c9ec-41ba-aabb-ac9557d82899" (UID: "1f02831a-c9ec-41ba-aabb-ac9557d82899"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:15:06 crc kubenswrapper[4985]: I0127 09:15:06.113881 4985 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f02831a-c9ec-41ba-aabb-ac9557d82899-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 09:15:06 crc kubenswrapper[4985]: I0127 09:15:06.113918 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f02831a-c9ec-41ba-aabb-ac9557d82899-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:15:06 crc kubenswrapper[4985]: I0127 09:15:06.113932 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gnfqh\" (UniqueName: \"kubernetes.io/projected/1f02831a-c9ec-41ba-aabb-ac9557d82899-kube-api-access-gnfqh\") on node \"crc\" DevicePath \"\"" Jan 27 09:15:06 crc kubenswrapper[4985]: I0127 09:15:06.113962 4985 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1f02831a-c9ec-41ba-aabb-ac9557d82899-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 09:15:06 crc kubenswrapper[4985]: I0127 09:15:06.117440 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f02831a-c9ec-41ba-aabb-ac9557d82899-config-data" (OuterVolumeSpecName: "config-data") pod "1f02831a-c9ec-41ba-aabb-ac9557d82899" (UID: "1f02831a-c9ec-41ba-aabb-ac9557d82899"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:15:06 crc kubenswrapper[4985]: I0127 09:15:06.215255 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f02831a-c9ec-41ba-aabb-ac9557d82899-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 09:15:06 crc kubenswrapper[4985]: I0127 09:15:06.918964 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 09:15:06 crc kubenswrapper[4985]: I0127 09:15:06.945974 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 09:15:06 crc kubenswrapper[4985]: I0127 09:15:06.965301 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 09:15:06 crc kubenswrapper[4985]: I0127 09:15:06.978356 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 09:15:06 crc kubenswrapper[4985]: E0127 09:15:06.979100 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f02831a-c9ec-41ba-aabb-ac9557d82899" containerName="ceilometer-notification-agent" Jan 27 09:15:06 crc kubenswrapper[4985]: I0127 09:15:06.979130 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f02831a-c9ec-41ba-aabb-ac9557d82899" containerName="ceilometer-notification-agent" Jan 27 09:15:06 crc kubenswrapper[4985]: E0127 09:15:06.979144 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f02831a-c9ec-41ba-aabb-ac9557d82899" containerName="proxy-httpd" Jan 27 09:15:06 crc kubenswrapper[4985]: I0127 09:15:06.979152 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f02831a-c9ec-41ba-aabb-ac9557d82899" containerName="proxy-httpd" Jan 27 09:15:06 crc kubenswrapper[4985]: E0127 09:15:06.979173 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f02831a-c9ec-41ba-aabb-ac9557d82899" containerName="sg-core" Jan 27 09:15:06 crc kubenswrapper[4985]: I0127 
09:15:06.979181 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f02831a-c9ec-41ba-aabb-ac9557d82899" containerName="sg-core" Jan 27 09:15:06 crc kubenswrapper[4985]: E0127 09:15:06.979192 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f02831a-c9ec-41ba-aabb-ac9557d82899" containerName="ceilometer-central-agent" Jan 27 09:15:06 crc kubenswrapper[4985]: I0127 09:15:06.979200 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f02831a-c9ec-41ba-aabb-ac9557d82899" containerName="ceilometer-central-agent" Jan 27 09:15:06 crc kubenswrapper[4985]: E0127 09:15:06.979244 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d32ac8d-acc7-48b6-89cd-5b94e0d94718" containerName="collect-profiles" Jan 27 09:15:06 crc kubenswrapper[4985]: I0127 09:15:06.979251 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d32ac8d-acc7-48b6-89cd-5b94e0d94718" containerName="collect-profiles" Jan 27 09:15:06 crc kubenswrapper[4985]: I0127 09:15:06.979593 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f02831a-c9ec-41ba-aabb-ac9557d82899" containerName="ceilometer-notification-agent" Jan 27 09:15:06 crc kubenswrapper[4985]: I0127 09:15:06.979617 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f02831a-c9ec-41ba-aabb-ac9557d82899" containerName="ceilometer-central-agent" Jan 27 09:15:06 crc kubenswrapper[4985]: I0127 09:15:06.979636 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f02831a-c9ec-41ba-aabb-ac9557d82899" containerName="proxy-httpd" Jan 27 09:15:06 crc kubenswrapper[4985]: I0127 09:15:06.979659 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d32ac8d-acc7-48b6-89cd-5b94e0d94718" containerName="collect-profiles" Jan 27 09:15:06 crc kubenswrapper[4985]: I0127 09:15:06.979669 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f02831a-c9ec-41ba-aabb-ac9557d82899" containerName="sg-core" Jan 27 09:15:06 crc 
kubenswrapper[4985]: I0127 09:15:06.985160 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 09:15:06 crc kubenswrapper[4985]: I0127 09:15:06.988162 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 09:15:06 crc kubenswrapper[4985]: I0127 09:15:06.989330 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 09:15:06 crc kubenswrapper[4985]: I0127 09:15:06.992233 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 09:15:07 crc kubenswrapper[4985]: I0127 09:15:07.031349 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a0afade-f6ae-47f6-9977-9f3f0201fd4c-scripts\") pod \"ceilometer-0\" (UID: \"7a0afade-f6ae-47f6-9977-9f3f0201fd4c\") " pod="openstack/ceilometer-0" Jan 27 09:15:07 crc kubenswrapper[4985]: I0127 09:15:07.031573 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a0afade-f6ae-47f6-9977-9f3f0201fd4c-config-data\") pod \"ceilometer-0\" (UID: \"7a0afade-f6ae-47f6-9977-9f3f0201fd4c\") " pod="openstack/ceilometer-0" Jan 27 09:15:07 crc kubenswrapper[4985]: I0127 09:15:07.031609 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7a0afade-f6ae-47f6-9977-9f3f0201fd4c-log-httpd\") pod \"ceilometer-0\" (UID: \"7a0afade-f6ae-47f6-9977-9f3f0201fd4c\") " pod="openstack/ceilometer-0" Jan 27 09:15:07 crc kubenswrapper[4985]: I0127 09:15:07.031634 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a0afade-f6ae-47f6-9977-9f3f0201fd4c-combined-ca-bundle\") 
pod \"ceilometer-0\" (UID: \"7a0afade-f6ae-47f6-9977-9f3f0201fd4c\") " pod="openstack/ceilometer-0" Jan 27 09:15:07 crc kubenswrapper[4985]: I0127 09:15:07.031662 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7a0afade-f6ae-47f6-9977-9f3f0201fd4c-run-httpd\") pod \"ceilometer-0\" (UID: \"7a0afade-f6ae-47f6-9977-9f3f0201fd4c\") " pod="openstack/ceilometer-0" Jan 27 09:15:07 crc kubenswrapper[4985]: I0127 09:15:07.031679 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tss2s\" (UniqueName: \"kubernetes.io/projected/7a0afade-f6ae-47f6-9977-9f3f0201fd4c-kube-api-access-tss2s\") pod \"ceilometer-0\" (UID: \"7a0afade-f6ae-47f6-9977-9f3f0201fd4c\") " pod="openstack/ceilometer-0" Jan 27 09:15:07 crc kubenswrapper[4985]: I0127 09:15:07.031696 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7a0afade-f6ae-47f6-9977-9f3f0201fd4c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7a0afade-f6ae-47f6-9977-9f3f0201fd4c\") " pod="openstack/ceilometer-0" Jan 27 09:15:07 crc kubenswrapper[4985]: I0127 09:15:07.134122 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a0afade-f6ae-47f6-9977-9f3f0201fd4c-config-data\") pod \"ceilometer-0\" (UID: \"7a0afade-f6ae-47f6-9977-9f3f0201fd4c\") " pod="openstack/ceilometer-0" Jan 27 09:15:07 crc kubenswrapper[4985]: I0127 09:15:07.134188 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7a0afade-f6ae-47f6-9977-9f3f0201fd4c-log-httpd\") pod \"ceilometer-0\" (UID: \"7a0afade-f6ae-47f6-9977-9f3f0201fd4c\") " pod="openstack/ceilometer-0" Jan 27 09:15:07 crc kubenswrapper[4985]: I0127 09:15:07.134225 4985 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a0afade-f6ae-47f6-9977-9f3f0201fd4c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7a0afade-f6ae-47f6-9977-9f3f0201fd4c\") " pod="openstack/ceilometer-0" Jan 27 09:15:07 crc kubenswrapper[4985]: I0127 09:15:07.134254 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7a0afade-f6ae-47f6-9977-9f3f0201fd4c-run-httpd\") pod \"ceilometer-0\" (UID: \"7a0afade-f6ae-47f6-9977-9f3f0201fd4c\") " pod="openstack/ceilometer-0" Jan 27 09:15:07 crc kubenswrapper[4985]: I0127 09:15:07.134275 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tss2s\" (UniqueName: \"kubernetes.io/projected/7a0afade-f6ae-47f6-9977-9f3f0201fd4c-kube-api-access-tss2s\") pod \"ceilometer-0\" (UID: \"7a0afade-f6ae-47f6-9977-9f3f0201fd4c\") " pod="openstack/ceilometer-0" Jan 27 09:15:07 crc kubenswrapper[4985]: I0127 09:15:07.134299 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7a0afade-f6ae-47f6-9977-9f3f0201fd4c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7a0afade-f6ae-47f6-9977-9f3f0201fd4c\") " pod="openstack/ceilometer-0" Jan 27 09:15:07 crc kubenswrapper[4985]: I0127 09:15:07.134421 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a0afade-f6ae-47f6-9977-9f3f0201fd4c-scripts\") pod \"ceilometer-0\" (UID: \"7a0afade-f6ae-47f6-9977-9f3f0201fd4c\") " pod="openstack/ceilometer-0" Jan 27 09:15:07 crc kubenswrapper[4985]: I0127 09:15:07.136165 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7a0afade-f6ae-47f6-9977-9f3f0201fd4c-run-httpd\") pod \"ceilometer-0\" (UID: 
\"7a0afade-f6ae-47f6-9977-9f3f0201fd4c\") " pod="openstack/ceilometer-0" Jan 27 09:15:07 crc kubenswrapper[4985]: I0127 09:15:07.136462 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7a0afade-f6ae-47f6-9977-9f3f0201fd4c-log-httpd\") pod \"ceilometer-0\" (UID: \"7a0afade-f6ae-47f6-9977-9f3f0201fd4c\") " pod="openstack/ceilometer-0" Jan 27 09:15:07 crc kubenswrapper[4985]: I0127 09:15:07.139939 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a0afade-f6ae-47f6-9977-9f3f0201fd4c-scripts\") pod \"ceilometer-0\" (UID: \"7a0afade-f6ae-47f6-9977-9f3f0201fd4c\") " pod="openstack/ceilometer-0" Jan 27 09:15:07 crc kubenswrapper[4985]: I0127 09:15:07.140329 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a0afade-f6ae-47f6-9977-9f3f0201fd4c-config-data\") pod \"ceilometer-0\" (UID: \"7a0afade-f6ae-47f6-9977-9f3f0201fd4c\") " pod="openstack/ceilometer-0" Jan 27 09:15:07 crc kubenswrapper[4985]: I0127 09:15:07.147398 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a0afade-f6ae-47f6-9977-9f3f0201fd4c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7a0afade-f6ae-47f6-9977-9f3f0201fd4c\") " pod="openstack/ceilometer-0" Jan 27 09:15:07 crc kubenswrapper[4985]: I0127 09:15:07.151361 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7a0afade-f6ae-47f6-9977-9f3f0201fd4c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7a0afade-f6ae-47f6-9977-9f3f0201fd4c\") " pod="openstack/ceilometer-0" Jan 27 09:15:07 crc kubenswrapper[4985]: I0127 09:15:07.156207 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tss2s\" (UniqueName: 
\"kubernetes.io/projected/7a0afade-f6ae-47f6-9977-9f3f0201fd4c-kube-api-access-tss2s\") pod \"ceilometer-0\" (UID: \"7a0afade-f6ae-47f6-9977-9f3f0201fd4c\") " pod="openstack/ceilometer-0" Jan 27 09:15:07 crc kubenswrapper[4985]: I0127 09:15:07.304612 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 09:15:07 crc kubenswrapper[4985]: I0127 09:15:07.538406 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 09:15:07 crc kubenswrapper[4985]: I0127 09:15:07.644839 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20d90afd-a50e-4295-b31b-7d9c05e358f6-logs\") pod \"20d90afd-a50e-4295-b31b-7d9c05e358f6\" (UID: \"20d90afd-a50e-4295-b31b-7d9c05e358f6\") " Jan 27 09:15:07 crc kubenswrapper[4985]: I0127 09:15:07.644899 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20d90afd-a50e-4295-b31b-7d9c05e358f6-config-data\") pod \"20d90afd-a50e-4295-b31b-7d9c05e358f6\" (UID: \"20d90afd-a50e-4295-b31b-7d9c05e358f6\") " Jan 27 09:15:07 crc kubenswrapper[4985]: I0127 09:15:07.645693 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5cqfg\" (UniqueName: \"kubernetes.io/projected/20d90afd-a50e-4295-b31b-7d9c05e358f6-kube-api-access-5cqfg\") pod \"20d90afd-a50e-4295-b31b-7d9c05e358f6\" (UID: \"20d90afd-a50e-4295-b31b-7d9c05e358f6\") " Jan 27 09:15:07 crc kubenswrapper[4985]: I0127 09:15:07.645816 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20d90afd-a50e-4295-b31b-7d9c05e358f6-combined-ca-bundle\") pod \"20d90afd-a50e-4295-b31b-7d9c05e358f6\" (UID: \"20d90afd-a50e-4295-b31b-7d9c05e358f6\") " Jan 27 09:15:07 crc kubenswrapper[4985]: I0127 09:15:07.646277 
4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20d90afd-a50e-4295-b31b-7d9c05e358f6-logs" (OuterVolumeSpecName: "logs") pod "20d90afd-a50e-4295-b31b-7d9c05e358f6" (UID: "20d90afd-a50e-4295-b31b-7d9c05e358f6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:15:07 crc kubenswrapper[4985]: I0127 09:15:07.653751 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20d90afd-a50e-4295-b31b-7d9c05e358f6-kube-api-access-5cqfg" (OuterVolumeSpecName: "kube-api-access-5cqfg") pod "20d90afd-a50e-4295-b31b-7d9c05e358f6" (UID: "20d90afd-a50e-4295-b31b-7d9c05e358f6"). InnerVolumeSpecName "kube-api-access-5cqfg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:15:07 crc kubenswrapper[4985]: I0127 09:15:07.708837 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20d90afd-a50e-4295-b31b-7d9c05e358f6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "20d90afd-a50e-4295-b31b-7d9c05e358f6" (UID: "20d90afd-a50e-4295-b31b-7d9c05e358f6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:15:07 crc kubenswrapper[4985]: I0127 09:15:07.708955 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20d90afd-a50e-4295-b31b-7d9c05e358f6-config-data" (OuterVolumeSpecName: "config-data") pod "20d90afd-a50e-4295-b31b-7d9c05e358f6" (UID: "20d90afd-a50e-4295-b31b-7d9c05e358f6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:15:07 crc kubenswrapper[4985]: I0127 09:15:07.748350 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20d90afd-a50e-4295-b31b-7d9c05e358f6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:15:07 crc kubenswrapper[4985]: I0127 09:15:07.748386 4985 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20d90afd-a50e-4295-b31b-7d9c05e358f6-logs\") on node \"crc\" DevicePath \"\"" Jan 27 09:15:07 crc kubenswrapper[4985]: I0127 09:15:07.748396 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20d90afd-a50e-4295-b31b-7d9c05e358f6-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 09:15:07 crc kubenswrapper[4985]: I0127 09:15:07.748409 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5cqfg\" (UniqueName: \"kubernetes.io/projected/20d90afd-a50e-4295-b31b-7d9c05e358f6-kube-api-access-5cqfg\") on node \"crc\" DevicePath \"\"" Jan 27 09:15:07 crc kubenswrapper[4985]: I0127 09:15:07.837698 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 09:15:07 crc kubenswrapper[4985]: W0127 09:15:07.863427 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7a0afade_f6ae_47f6_9977_9f3f0201fd4c.slice/crio-17b6706407c7f288621d28ad7331f68bab8428de8d405ea49543c999500b4a39 WatchSource:0}: Error finding container 17b6706407c7f288621d28ad7331f68bab8428de8d405ea49543c999500b4a39: Status 404 returned error can't find the container with id 17b6706407c7f288621d28ad7331f68bab8428de8d405ea49543c999500b4a39 Jan 27 09:15:07 crc kubenswrapper[4985]: I0127 09:15:07.930151 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"7a0afade-f6ae-47f6-9977-9f3f0201fd4c","Type":"ContainerStarted","Data":"17b6706407c7f288621d28ad7331f68bab8428de8d405ea49543c999500b4a39"} Jan 27 09:15:07 crc kubenswrapper[4985]: I0127 09:15:07.934284 4985 generic.go:334] "Generic (PLEG): container finished" podID="20d90afd-a50e-4295-b31b-7d9c05e358f6" containerID="0882068f7f36ddc7a1d6c501da09975c4e3e48ea22c8290253027122f3fd03b9" exitCode=0 Jan 27 09:15:07 crc kubenswrapper[4985]: I0127 09:15:07.934323 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"20d90afd-a50e-4295-b31b-7d9c05e358f6","Type":"ContainerDied","Data":"0882068f7f36ddc7a1d6c501da09975c4e3e48ea22c8290253027122f3fd03b9"} Jan 27 09:15:07 crc kubenswrapper[4985]: I0127 09:15:07.934351 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"20d90afd-a50e-4295-b31b-7d9c05e358f6","Type":"ContainerDied","Data":"3925991f1876e0bd3ce48ec06733193bd7a84c9fc7bda2dce2da1e9c212003f2"} Jan 27 09:15:07 crc kubenswrapper[4985]: I0127 09:15:07.934373 4985 scope.go:117] "RemoveContainer" containerID="0882068f7f36ddc7a1d6c501da09975c4e3e48ea22c8290253027122f3fd03b9" Jan 27 09:15:07 crc kubenswrapper[4985]: I0127 09:15:07.934574 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 27 09:15:07 crc kubenswrapper[4985]: I0127 09:15:07.980693 4985 scope.go:117] "RemoveContainer" containerID="545aa7cb487daa1963918ad170ed504a432e451c6de04b236dc6776a4fc256a5" Jan 27 09:15:07 crc kubenswrapper[4985]: I0127 09:15:07.985310 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 09:15:07 crc kubenswrapper[4985]: I0127 09:15:07.998706 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 27 09:15:08 crc kubenswrapper[4985]: I0127 09:15:08.011138 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 27 09:15:08 crc kubenswrapper[4985]: E0127 09:15:08.011686 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20d90afd-a50e-4295-b31b-7d9c05e358f6" containerName="nova-api-api" Jan 27 09:15:08 crc kubenswrapper[4985]: I0127 09:15:08.011705 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="20d90afd-a50e-4295-b31b-7d9c05e358f6" containerName="nova-api-api" Jan 27 09:15:08 crc kubenswrapper[4985]: E0127 09:15:08.011740 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20d90afd-a50e-4295-b31b-7d9c05e358f6" containerName="nova-api-log" Jan 27 09:15:08 crc kubenswrapper[4985]: I0127 09:15:08.011747 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="20d90afd-a50e-4295-b31b-7d9c05e358f6" containerName="nova-api-log" Jan 27 09:15:08 crc kubenswrapper[4985]: I0127 09:15:08.011924 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="20d90afd-a50e-4295-b31b-7d9c05e358f6" containerName="nova-api-log" Jan 27 09:15:08 crc kubenswrapper[4985]: I0127 09:15:08.011954 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="20d90afd-a50e-4295-b31b-7d9c05e358f6" containerName="nova-api-api" Jan 27 09:15:08 crc kubenswrapper[4985]: I0127 09:15:08.012893 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 27 09:15:08 crc kubenswrapper[4985]: I0127 09:15:08.019874 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 27 09:15:08 crc kubenswrapper[4985]: I0127 09:15:08.020094 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 27 09:15:08 crc kubenswrapper[4985]: I0127 09:15:08.020362 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 27 09:15:08 crc kubenswrapper[4985]: I0127 09:15:08.020563 4985 scope.go:117] "RemoveContainer" containerID="0882068f7f36ddc7a1d6c501da09975c4e3e48ea22c8290253027122f3fd03b9" Jan 27 09:15:08 crc kubenswrapper[4985]: E0127 09:15:08.021084 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0882068f7f36ddc7a1d6c501da09975c4e3e48ea22c8290253027122f3fd03b9\": container with ID starting with 0882068f7f36ddc7a1d6c501da09975c4e3e48ea22c8290253027122f3fd03b9 not found: ID does not exist" containerID="0882068f7f36ddc7a1d6c501da09975c4e3e48ea22c8290253027122f3fd03b9" Jan 27 09:15:08 crc kubenswrapper[4985]: I0127 09:15:08.021142 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0882068f7f36ddc7a1d6c501da09975c4e3e48ea22c8290253027122f3fd03b9"} err="failed to get container status \"0882068f7f36ddc7a1d6c501da09975c4e3e48ea22c8290253027122f3fd03b9\": rpc error: code = NotFound desc = could not find container \"0882068f7f36ddc7a1d6c501da09975c4e3e48ea22c8290253027122f3fd03b9\": container with ID starting with 0882068f7f36ddc7a1d6c501da09975c4e3e48ea22c8290253027122f3fd03b9 not found: ID does not exist" Jan 27 09:15:08 crc kubenswrapper[4985]: I0127 09:15:08.021176 4985 scope.go:117] "RemoveContainer" containerID="545aa7cb487daa1963918ad170ed504a432e451c6de04b236dc6776a4fc256a5" Jan 27 09:15:08 crc 
kubenswrapper[4985]: E0127 09:15:08.021766 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"545aa7cb487daa1963918ad170ed504a432e451c6de04b236dc6776a4fc256a5\": container with ID starting with 545aa7cb487daa1963918ad170ed504a432e451c6de04b236dc6776a4fc256a5 not found: ID does not exist" containerID="545aa7cb487daa1963918ad170ed504a432e451c6de04b236dc6776a4fc256a5" Jan 27 09:15:08 crc kubenswrapper[4985]: I0127 09:15:08.021810 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"545aa7cb487daa1963918ad170ed504a432e451c6de04b236dc6776a4fc256a5"} err="failed to get container status \"545aa7cb487daa1963918ad170ed504a432e451c6de04b236dc6776a4fc256a5\": rpc error: code = NotFound desc = could not find container \"545aa7cb487daa1963918ad170ed504a432e451c6de04b236dc6776a4fc256a5\": container with ID starting with 545aa7cb487daa1963918ad170ed504a432e451c6de04b236dc6776a4fc256a5 not found: ID does not exist" Jan 27 09:15:08 crc kubenswrapper[4985]: I0127 09:15:08.032058 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 09:15:08 crc kubenswrapper[4985]: I0127 09:15:08.065033 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9fe72d4a-d8bf-40ef-a7a6-45b6134c4ef9-logs\") pod \"nova-api-0\" (UID: \"9fe72d4a-d8bf-40ef-a7a6-45b6134c4ef9\") " pod="openstack/nova-api-0" Jan 27 09:15:08 crc kubenswrapper[4985]: I0127 09:15:08.065152 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fe72d4a-d8bf-40ef-a7a6-45b6134c4ef9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9fe72d4a-d8bf-40ef-a7a6-45b6134c4ef9\") " pod="openstack/nova-api-0" Jan 27 09:15:08 crc kubenswrapper[4985]: I0127 09:15:08.065191 4985 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fe72d4a-d8bf-40ef-a7a6-45b6134c4ef9-config-data\") pod \"nova-api-0\" (UID: \"9fe72d4a-d8bf-40ef-a7a6-45b6134c4ef9\") " pod="openstack/nova-api-0" Jan 27 09:15:08 crc kubenswrapper[4985]: I0127 09:15:08.065247 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fe72d4a-d8bf-40ef-a7a6-45b6134c4ef9-public-tls-certs\") pod \"nova-api-0\" (UID: \"9fe72d4a-d8bf-40ef-a7a6-45b6134c4ef9\") " pod="openstack/nova-api-0" Jan 27 09:15:08 crc kubenswrapper[4985]: I0127 09:15:08.065329 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fe72d4a-d8bf-40ef-a7a6-45b6134c4ef9-internal-tls-certs\") pod \"nova-api-0\" (UID: \"9fe72d4a-d8bf-40ef-a7a6-45b6134c4ef9\") " pod="openstack/nova-api-0" Jan 27 09:15:08 crc kubenswrapper[4985]: I0127 09:15:08.065447 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlgn9\" (UniqueName: \"kubernetes.io/projected/9fe72d4a-d8bf-40ef-a7a6-45b6134c4ef9-kube-api-access-zlgn9\") pod \"nova-api-0\" (UID: \"9fe72d4a-d8bf-40ef-a7a6-45b6134c4ef9\") " pod="openstack/nova-api-0" Jan 27 09:15:08 crc kubenswrapper[4985]: I0127 09:15:08.167680 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fe72d4a-d8bf-40ef-a7a6-45b6134c4ef9-config-data\") pod \"nova-api-0\" (UID: \"9fe72d4a-d8bf-40ef-a7a6-45b6134c4ef9\") " pod="openstack/nova-api-0" Jan 27 09:15:08 crc kubenswrapper[4985]: I0127 09:15:08.167807 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/9fe72d4a-d8bf-40ef-a7a6-45b6134c4ef9-public-tls-certs\") pod \"nova-api-0\" (UID: \"9fe72d4a-d8bf-40ef-a7a6-45b6134c4ef9\") " pod="openstack/nova-api-0" Jan 27 09:15:08 crc kubenswrapper[4985]: I0127 09:15:08.167921 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fe72d4a-d8bf-40ef-a7a6-45b6134c4ef9-internal-tls-certs\") pod \"nova-api-0\" (UID: \"9fe72d4a-d8bf-40ef-a7a6-45b6134c4ef9\") " pod="openstack/nova-api-0" Jan 27 09:15:08 crc kubenswrapper[4985]: I0127 09:15:08.167998 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlgn9\" (UniqueName: \"kubernetes.io/projected/9fe72d4a-d8bf-40ef-a7a6-45b6134c4ef9-kube-api-access-zlgn9\") pod \"nova-api-0\" (UID: \"9fe72d4a-d8bf-40ef-a7a6-45b6134c4ef9\") " pod="openstack/nova-api-0" Jan 27 09:15:08 crc kubenswrapper[4985]: I0127 09:15:08.168097 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9fe72d4a-d8bf-40ef-a7a6-45b6134c4ef9-logs\") pod \"nova-api-0\" (UID: \"9fe72d4a-d8bf-40ef-a7a6-45b6134c4ef9\") " pod="openstack/nova-api-0" Jan 27 09:15:08 crc kubenswrapper[4985]: I0127 09:15:08.168194 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fe72d4a-d8bf-40ef-a7a6-45b6134c4ef9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9fe72d4a-d8bf-40ef-a7a6-45b6134c4ef9\") " pod="openstack/nova-api-0" Jan 27 09:15:08 crc kubenswrapper[4985]: I0127 09:15:08.169307 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9fe72d4a-d8bf-40ef-a7a6-45b6134c4ef9-logs\") pod \"nova-api-0\" (UID: \"9fe72d4a-d8bf-40ef-a7a6-45b6134c4ef9\") " pod="openstack/nova-api-0" Jan 27 09:15:08 crc kubenswrapper[4985]: I0127 09:15:08.173689 4985 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fe72d4a-d8bf-40ef-a7a6-45b6134c4ef9-config-data\") pod \"nova-api-0\" (UID: \"9fe72d4a-d8bf-40ef-a7a6-45b6134c4ef9\") " pod="openstack/nova-api-0" Jan 27 09:15:08 crc kubenswrapper[4985]: I0127 09:15:08.174177 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fe72d4a-d8bf-40ef-a7a6-45b6134c4ef9-internal-tls-certs\") pod \"nova-api-0\" (UID: \"9fe72d4a-d8bf-40ef-a7a6-45b6134c4ef9\") " pod="openstack/nova-api-0" Jan 27 09:15:08 crc kubenswrapper[4985]: I0127 09:15:08.174269 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fe72d4a-d8bf-40ef-a7a6-45b6134c4ef9-public-tls-certs\") pod \"nova-api-0\" (UID: \"9fe72d4a-d8bf-40ef-a7a6-45b6134c4ef9\") " pod="openstack/nova-api-0" Jan 27 09:15:08 crc kubenswrapper[4985]: I0127 09:15:08.175049 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fe72d4a-d8bf-40ef-a7a6-45b6134c4ef9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9fe72d4a-d8bf-40ef-a7a6-45b6134c4ef9\") " pod="openstack/nova-api-0" Jan 27 09:15:08 crc kubenswrapper[4985]: I0127 09:15:08.189011 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlgn9\" (UniqueName: \"kubernetes.io/projected/9fe72d4a-d8bf-40ef-a7a6-45b6134c4ef9-kube-api-access-zlgn9\") pod \"nova-api-0\" (UID: \"9fe72d4a-d8bf-40ef-a7a6-45b6134c4ef9\") " pod="openstack/nova-api-0" Jan 27 09:15:08 crc kubenswrapper[4985]: I0127 09:15:08.345173 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 27 09:15:08 crc kubenswrapper[4985]: I0127 09:15:08.486284 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f02831a-c9ec-41ba-aabb-ac9557d82899" path="/var/lib/kubelet/pods/1f02831a-c9ec-41ba-aabb-ac9557d82899/volumes" Jan 27 09:15:08 crc kubenswrapper[4985]: I0127 09:15:08.487949 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20d90afd-a50e-4295-b31b-7d9c05e358f6" path="/var/lib/kubelet/pods/20d90afd-a50e-4295-b31b-7d9c05e358f6/volumes" Jan 27 09:15:08 crc kubenswrapper[4985]: I0127 09:15:08.488923 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:15:08 crc kubenswrapper[4985]: I0127 09:15:08.499084 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:15:09 crc kubenswrapper[4985]: I0127 09:15:08.908643 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 09:15:09 crc kubenswrapper[4985]: I0127 09:15:08.949751 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9fe72d4a-d8bf-40ef-a7a6-45b6134c4ef9","Type":"ContainerStarted","Data":"88c4084f5edd5eb91f9efb49dad6723a5d4cfffe6ab0a8c7d3a744f7abee2c8a"} Jan 27 09:15:09 crc kubenswrapper[4985]: I0127 09:15:08.953539 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7a0afade-f6ae-47f6-9977-9f3f0201fd4c","Type":"ContainerStarted","Data":"6a14b4381dc228dea9f8f98b33354e91f937940018760fd6415d7dfc231561bb"} Jan 27 09:15:09 crc kubenswrapper[4985]: I0127 09:15:08.975809 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:15:09 crc kubenswrapper[4985]: I0127 09:15:09.238764 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-crzqt"] Jan 27 
09:15:09 crc kubenswrapper[4985]: I0127 09:15:09.240169 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-crzqt" Jan 27 09:15:09 crc kubenswrapper[4985]: I0127 09:15:09.242730 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 27 09:15:09 crc kubenswrapper[4985]: I0127 09:15:09.243118 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 27 09:15:09 crc kubenswrapper[4985]: I0127 09:15:09.258096 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-crzqt"] Jan 27 09:15:09 crc kubenswrapper[4985]: I0127 09:15:09.412627 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f83a7b31-1947-4c74-8770-86b7a6906c1b-config-data\") pod \"nova-cell1-cell-mapping-crzqt\" (UID: \"f83a7b31-1947-4c74-8770-86b7a6906c1b\") " pod="openstack/nova-cell1-cell-mapping-crzqt" Jan 27 09:15:09 crc kubenswrapper[4985]: I0127 09:15:09.412711 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whnk2\" (UniqueName: \"kubernetes.io/projected/f83a7b31-1947-4c74-8770-86b7a6906c1b-kube-api-access-whnk2\") pod \"nova-cell1-cell-mapping-crzqt\" (UID: \"f83a7b31-1947-4c74-8770-86b7a6906c1b\") " pod="openstack/nova-cell1-cell-mapping-crzqt" Jan 27 09:15:09 crc kubenswrapper[4985]: I0127 09:15:09.412757 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f83a7b31-1947-4c74-8770-86b7a6906c1b-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-crzqt\" (UID: \"f83a7b31-1947-4c74-8770-86b7a6906c1b\") " pod="openstack/nova-cell1-cell-mapping-crzqt" Jan 27 09:15:09 crc kubenswrapper[4985]: I0127 09:15:09.413367 4985 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f83a7b31-1947-4c74-8770-86b7a6906c1b-scripts\") pod \"nova-cell1-cell-mapping-crzqt\" (UID: \"f83a7b31-1947-4c74-8770-86b7a6906c1b\") " pod="openstack/nova-cell1-cell-mapping-crzqt" Jan 27 09:15:09 crc kubenswrapper[4985]: I0127 09:15:09.515847 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f83a7b31-1947-4c74-8770-86b7a6906c1b-scripts\") pod \"nova-cell1-cell-mapping-crzqt\" (UID: \"f83a7b31-1947-4c74-8770-86b7a6906c1b\") " pod="openstack/nova-cell1-cell-mapping-crzqt" Jan 27 09:15:09 crc kubenswrapper[4985]: I0127 09:15:09.516710 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f83a7b31-1947-4c74-8770-86b7a6906c1b-config-data\") pod \"nova-cell1-cell-mapping-crzqt\" (UID: \"f83a7b31-1947-4c74-8770-86b7a6906c1b\") " pod="openstack/nova-cell1-cell-mapping-crzqt" Jan 27 09:15:09 crc kubenswrapper[4985]: I0127 09:15:09.516782 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whnk2\" (UniqueName: \"kubernetes.io/projected/f83a7b31-1947-4c74-8770-86b7a6906c1b-kube-api-access-whnk2\") pod \"nova-cell1-cell-mapping-crzqt\" (UID: \"f83a7b31-1947-4c74-8770-86b7a6906c1b\") " pod="openstack/nova-cell1-cell-mapping-crzqt" Jan 27 09:15:09 crc kubenswrapper[4985]: I0127 09:15:09.516809 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f83a7b31-1947-4c74-8770-86b7a6906c1b-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-crzqt\" (UID: \"f83a7b31-1947-4c74-8770-86b7a6906c1b\") " pod="openstack/nova-cell1-cell-mapping-crzqt" Jan 27 09:15:09 crc kubenswrapper[4985]: I0127 09:15:09.522477 4985 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f83a7b31-1947-4c74-8770-86b7a6906c1b-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-crzqt\" (UID: \"f83a7b31-1947-4c74-8770-86b7a6906c1b\") " pod="openstack/nova-cell1-cell-mapping-crzqt" Jan 27 09:15:09 crc kubenswrapper[4985]: I0127 09:15:09.527409 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f83a7b31-1947-4c74-8770-86b7a6906c1b-config-data\") pod \"nova-cell1-cell-mapping-crzqt\" (UID: \"f83a7b31-1947-4c74-8770-86b7a6906c1b\") " pod="openstack/nova-cell1-cell-mapping-crzqt" Jan 27 09:15:09 crc kubenswrapper[4985]: I0127 09:15:09.530255 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f83a7b31-1947-4c74-8770-86b7a6906c1b-scripts\") pod \"nova-cell1-cell-mapping-crzqt\" (UID: \"f83a7b31-1947-4c74-8770-86b7a6906c1b\") " pod="openstack/nova-cell1-cell-mapping-crzqt" Jan 27 09:15:09 crc kubenswrapper[4985]: I0127 09:15:09.540329 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whnk2\" (UniqueName: \"kubernetes.io/projected/f83a7b31-1947-4c74-8770-86b7a6906c1b-kube-api-access-whnk2\") pod \"nova-cell1-cell-mapping-crzqt\" (UID: \"f83a7b31-1947-4c74-8770-86b7a6906c1b\") " pod="openstack/nova-cell1-cell-mapping-crzqt" Jan 27 09:15:09 crc kubenswrapper[4985]: I0127 09:15:09.606106 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-crzqt" Jan 27 09:15:09 crc kubenswrapper[4985]: I0127 09:15:09.977483 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9fe72d4a-d8bf-40ef-a7a6-45b6134c4ef9","Type":"ContainerStarted","Data":"1fd2eca80ebb650590ae4b5d9fe41917ad24c37d671567d88a55b65e2a9d98c8"} Jan 27 09:15:09 crc kubenswrapper[4985]: I0127 09:15:09.978092 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9fe72d4a-d8bf-40ef-a7a6-45b6134c4ef9","Type":"ContainerStarted","Data":"7fab34aa9c4d9a5813fb43d34972fd3cb3721f2d7b7dbd4193a1de27d87848cc"} Jan 27 09:15:09 crc kubenswrapper[4985]: I0127 09:15:09.987039 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7a0afade-f6ae-47f6-9977-9f3f0201fd4c","Type":"ContainerStarted","Data":"ad818b5377de9e76bbf8b14e4afd4fef17dff12debbdd626be2c17ef8927a968"} Jan 27 09:15:10 crc kubenswrapper[4985]: I0127 09:15:10.017893 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.017868405 podStartE2EDuration="3.017868405s" podCreationTimestamp="2026-01-27 09:15:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:15:10.008673973 +0000 UTC m=+1294.299768814" watchObservedRunningTime="2026-01-27 09:15:10.017868405 +0000 UTC m=+1294.308963246" Jan 27 09:15:10 crc kubenswrapper[4985]: I0127 09:15:10.102882 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-crzqt"] Jan 27 09:15:10 crc kubenswrapper[4985]: W0127 09:15:10.107751 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf83a7b31_1947_4c74_8770_86b7a6906c1b.slice/crio-cb874c0909b091017a48cc761d1a303a17850d6bfd0c877ac4385032f0aa0040 
WatchSource:0}: Error finding container cb874c0909b091017a48cc761d1a303a17850d6bfd0c877ac4385032f0aa0040: Status 404 returned error can't find the container with id cb874c0909b091017a48cc761d1a303a17850d6bfd0c877ac4385032f0aa0040 Jan 27 09:15:10 crc kubenswrapper[4985]: I0127 09:15:10.995059 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7a0afade-f6ae-47f6-9977-9f3f0201fd4c","Type":"ContainerStarted","Data":"0029b4e35cbe9caa5c8447771a7dcc18a34aa34a0ff560129c887e77bc061407"} Jan 27 09:15:10 crc kubenswrapper[4985]: I0127 09:15:10.998636 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-crzqt" event={"ID":"f83a7b31-1947-4c74-8770-86b7a6906c1b","Type":"ContainerStarted","Data":"2586e2ec8d0c89c8c74495be9ac91f7f21d31e3e75107fcecb56959458091589"} Jan 27 09:15:10 crc kubenswrapper[4985]: I0127 09:15:10.998673 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-crzqt" event={"ID":"f83a7b31-1947-4c74-8770-86b7a6906c1b","Type":"ContainerStarted","Data":"cb874c0909b091017a48cc761d1a303a17850d6bfd0c877ac4385032f0aa0040"} Jan 27 09:15:11 crc kubenswrapper[4985]: I0127 09:15:11.586524 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-fcd6f8f8f-snltk" Jan 27 09:15:11 crc kubenswrapper[4985]: I0127 09:15:11.621963 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-crzqt" podStartSLOduration=2.621918327 podStartE2EDuration="2.621918327s" podCreationTimestamp="2026-01-27 09:15:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:15:11.026960961 +0000 UTC m=+1295.318055802" watchObservedRunningTime="2026-01-27 09:15:11.621918327 +0000 UTC m=+1295.913013158" Jan 27 09:15:11 crc kubenswrapper[4985]: I0127 09:15:11.646148 4985 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-647df7b8c5-hdgcj"] Jan 27 09:15:11 crc kubenswrapper[4985]: I0127 09:15:11.646404 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-647df7b8c5-hdgcj" podUID="d5758db5-8df4-4e50-a1b0-71ea5996f09a" containerName="dnsmasq-dns" containerID="cri-o://3457662f6fff19e6fc4a64cc3c356cc797c57d34968aaac15edcb4be4a5e0e99" gracePeriod=10 Jan 27 09:15:12 crc kubenswrapper[4985]: I0127 09:15:12.019022 4985 generic.go:334] "Generic (PLEG): container finished" podID="d5758db5-8df4-4e50-a1b0-71ea5996f09a" containerID="3457662f6fff19e6fc4a64cc3c356cc797c57d34968aaac15edcb4be4a5e0e99" exitCode=0 Jan 27 09:15:12 crc kubenswrapper[4985]: I0127 09:15:12.019226 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-647df7b8c5-hdgcj" event={"ID":"d5758db5-8df4-4e50-a1b0-71ea5996f09a","Type":"ContainerDied","Data":"3457662f6fff19e6fc4a64cc3c356cc797c57d34968aaac15edcb4be4a5e0e99"} Jan 27 09:15:12 crc kubenswrapper[4985]: I0127 09:15:12.024457 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7a0afade-f6ae-47f6-9977-9f3f0201fd4c","Type":"ContainerStarted","Data":"07b7c34e9eb829e92d4806ac184bdc62e7a8dd935693ffb25bffdf51ae1499c8"} Jan 27 09:15:12 crc kubenswrapper[4985]: I0127 09:15:12.024540 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 09:15:12 crc kubenswrapper[4985]: I0127 09:15:12.064651 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.61538526 podStartE2EDuration="6.064632359s" podCreationTimestamp="2026-01-27 09:15:06 +0000 UTC" firstStartedPulling="2026-01-27 09:15:07.866759509 +0000 UTC m=+1292.157854350" lastFinishedPulling="2026-01-27 09:15:11.316006608 +0000 UTC m=+1295.607101449" observedRunningTime="2026-01-27 09:15:12.050116431 +0000 UTC m=+1296.341211262" 
watchObservedRunningTime="2026-01-27 09:15:12.064632359 +0000 UTC m=+1296.355727200" Jan 27 09:15:12 crc kubenswrapper[4985]: I0127 09:15:12.215504 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-647df7b8c5-hdgcj" Jan 27 09:15:12 crc kubenswrapper[4985]: I0127 09:15:12.379483 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d5758db5-8df4-4e50-a1b0-71ea5996f09a-ovsdbserver-sb\") pod \"d5758db5-8df4-4e50-a1b0-71ea5996f09a\" (UID: \"d5758db5-8df4-4e50-a1b0-71ea5996f09a\") " Jan 27 09:15:12 crc kubenswrapper[4985]: I0127 09:15:12.379602 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d5758db5-8df4-4e50-a1b0-71ea5996f09a-dns-swift-storage-0\") pod \"d5758db5-8df4-4e50-a1b0-71ea5996f09a\" (UID: \"d5758db5-8df4-4e50-a1b0-71ea5996f09a\") " Jan 27 09:15:12 crc kubenswrapper[4985]: I0127 09:15:12.379772 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5758db5-8df4-4e50-a1b0-71ea5996f09a-config\") pod \"d5758db5-8df4-4e50-a1b0-71ea5996f09a\" (UID: \"d5758db5-8df4-4e50-a1b0-71ea5996f09a\") " Jan 27 09:15:12 crc kubenswrapper[4985]: I0127 09:15:12.379848 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d5758db5-8df4-4e50-a1b0-71ea5996f09a-dns-svc\") pod \"d5758db5-8df4-4e50-a1b0-71ea5996f09a\" (UID: \"d5758db5-8df4-4e50-a1b0-71ea5996f09a\") " Jan 27 09:15:12 crc kubenswrapper[4985]: I0127 09:15:12.380018 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cw4jq\" (UniqueName: \"kubernetes.io/projected/d5758db5-8df4-4e50-a1b0-71ea5996f09a-kube-api-access-cw4jq\") pod \"d5758db5-8df4-4e50-a1b0-71ea5996f09a\" (UID: 
\"d5758db5-8df4-4e50-a1b0-71ea5996f09a\") " Jan 27 09:15:12 crc kubenswrapper[4985]: I0127 09:15:12.380094 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d5758db5-8df4-4e50-a1b0-71ea5996f09a-ovsdbserver-nb\") pod \"d5758db5-8df4-4e50-a1b0-71ea5996f09a\" (UID: \"d5758db5-8df4-4e50-a1b0-71ea5996f09a\") " Jan 27 09:15:12 crc kubenswrapper[4985]: I0127 09:15:12.388082 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5758db5-8df4-4e50-a1b0-71ea5996f09a-kube-api-access-cw4jq" (OuterVolumeSpecName: "kube-api-access-cw4jq") pod "d5758db5-8df4-4e50-a1b0-71ea5996f09a" (UID: "d5758db5-8df4-4e50-a1b0-71ea5996f09a"). InnerVolumeSpecName "kube-api-access-cw4jq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:15:12 crc kubenswrapper[4985]: I0127 09:15:12.444182 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5758db5-8df4-4e50-a1b0-71ea5996f09a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d5758db5-8df4-4e50-a1b0-71ea5996f09a" (UID: "d5758db5-8df4-4e50-a1b0-71ea5996f09a"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:15:12 crc kubenswrapper[4985]: I0127 09:15:12.448902 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5758db5-8df4-4e50-a1b0-71ea5996f09a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d5758db5-8df4-4e50-a1b0-71ea5996f09a" (UID: "d5758db5-8df4-4e50-a1b0-71ea5996f09a"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:15:12 crc kubenswrapper[4985]: I0127 09:15:12.457538 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5758db5-8df4-4e50-a1b0-71ea5996f09a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d5758db5-8df4-4e50-a1b0-71ea5996f09a" (UID: "d5758db5-8df4-4e50-a1b0-71ea5996f09a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:15:12 crc kubenswrapper[4985]: I0127 09:15:12.461163 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5758db5-8df4-4e50-a1b0-71ea5996f09a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d5758db5-8df4-4e50-a1b0-71ea5996f09a" (UID: "d5758db5-8df4-4e50-a1b0-71ea5996f09a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:15:12 crc kubenswrapper[4985]: I0127 09:15:12.462446 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5758db5-8df4-4e50-a1b0-71ea5996f09a-config" (OuterVolumeSpecName: "config") pod "d5758db5-8df4-4e50-a1b0-71ea5996f09a" (UID: "d5758db5-8df4-4e50-a1b0-71ea5996f09a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:15:12 crc kubenswrapper[4985]: I0127 09:15:12.483283 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5758db5-8df4-4e50-a1b0-71ea5996f09a-config\") on node \"crc\" DevicePath \"\"" Jan 27 09:15:12 crc kubenswrapper[4985]: I0127 09:15:12.483351 4985 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d5758db5-8df4-4e50-a1b0-71ea5996f09a-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 09:15:12 crc kubenswrapper[4985]: I0127 09:15:12.483365 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cw4jq\" (UniqueName: \"kubernetes.io/projected/d5758db5-8df4-4e50-a1b0-71ea5996f09a-kube-api-access-cw4jq\") on node \"crc\" DevicePath \"\"" Jan 27 09:15:12 crc kubenswrapper[4985]: I0127 09:15:12.483381 4985 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d5758db5-8df4-4e50-a1b0-71ea5996f09a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 09:15:12 crc kubenswrapper[4985]: I0127 09:15:12.483390 4985 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d5758db5-8df4-4e50-a1b0-71ea5996f09a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 09:15:12 crc kubenswrapper[4985]: I0127 09:15:12.483399 4985 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d5758db5-8df4-4e50-a1b0-71ea5996f09a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 09:15:13 crc kubenswrapper[4985]: I0127 09:15:13.038539 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-647df7b8c5-hdgcj" event={"ID":"d5758db5-8df4-4e50-a1b0-71ea5996f09a","Type":"ContainerDied","Data":"3d19929d43f77ca97fdea73590bf0718dc7eda8e0f399a9935c6401ba7abcd25"} Jan 27 09:15:13 crc 
kubenswrapper[4985]: I0127 09:15:13.038593 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-647df7b8c5-hdgcj" Jan 27 09:15:13 crc kubenswrapper[4985]: I0127 09:15:13.039252 4985 scope.go:117] "RemoveContainer" containerID="3457662f6fff19e6fc4a64cc3c356cc797c57d34968aaac15edcb4be4a5e0e99" Jan 27 09:15:13 crc kubenswrapper[4985]: I0127 09:15:13.070583 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-647df7b8c5-hdgcj"] Jan 27 09:15:13 crc kubenswrapper[4985]: I0127 09:15:13.071911 4985 scope.go:117] "RemoveContainer" containerID="f2b7ab185b897b0e9da8210682a09566387e587c9d1b2294e8e1840ac2039731" Jan 27 09:15:13 crc kubenswrapper[4985]: I0127 09:15:13.082646 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-647df7b8c5-hdgcj"] Jan 27 09:15:14 crc kubenswrapper[4985]: I0127 09:15:14.471927 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5758db5-8df4-4e50-a1b0-71ea5996f09a" path="/var/lib/kubelet/pods/d5758db5-8df4-4e50-a1b0-71ea5996f09a/volumes" Jan 27 09:15:16 crc kubenswrapper[4985]: I0127 09:15:16.087688 4985 generic.go:334] "Generic (PLEG): container finished" podID="f83a7b31-1947-4c74-8770-86b7a6906c1b" containerID="2586e2ec8d0c89c8c74495be9ac91f7f21d31e3e75107fcecb56959458091589" exitCode=0 Jan 27 09:15:16 crc kubenswrapper[4985]: I0127 09:15:16.087885 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-crzqt" event={"ID":"f83a7b31-1947-4c74-8770-86b7a6906c1b","Type":"ContainerDied","Data":"2586e2ec8d0c89c8c74495be9ac91f7f21d31e3e75107fcecb56959458091589"} Jan 27 09:15:17 crc kubenswrapper[4985]: I0127 09:15:17.518333 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-crzqt" Jan 27 09:15:17 crc kubenswrapper[4985]: I0127 09:15:17.618050 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f83a7b31-1947-4c74-8770-86b7a6906c1b-config-data\") pod \"f83a7b31-1947-4c74-8770-86b7a6906c1b\" (UID: \"f83a7b31-1947-4c74-8770-86b7a6906c1b\") " Jan 27 09:15:17 crc kubenswrapper[4985]: I0127 09:15:17.618304 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f83a7b31-1947-4c74-8770-86b7a6906c1b-scripts\") pod \"f83a7b31-1947-4c74-8770-86b7a6906c1b\" (UID: \"f83a7b31-1947-4c74-8770-86b7a6906c1b\") " Jan 27 09:15:17 crc kubenswrapper[4985]: I0127 09:15:17.618432 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f83a7b31-1947-4c74-8770-86b7a6906c1b-combined-ca-bundle\") pod \"f83a7b31-1947-4c74-8770-86b7a6906c1b\" (UID: \"f83a7b31-1947-4c74-8770-86b7a6906c1b\") " Jan 27 09:15:17 crc kubenswrapper[4985]: I0127 09:15:17.618458 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-whnk2\" (UniqueName: \"kubernetes.io/projected/f83a7b31-1947-4c74-8770-86b7a6906c1b-kube-api-access-whnk2\") pod \"f83a7b31-1947-4c74-8770-86b7a6906c1b\" (UID: \"f83a7b31-1947-4c74-8770-86b7a6906c1b\") " Jan 27 09:15:17 crc kubenswrapper[4985]: I0127 09:15:17.626390 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f83a7b31-1947-4c74-8770-86b7a6906c1b-scripts" (OuterVolumeSpecName: "scripts") pod "f83a7b31-1947-4c74-8770-86b7a6906c1b" (UID: "f83a7b31-1947-4c74-8770-86b7a6906c1b"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:15:17 crc kubenswrapper[4985]: I0127 09:15:17.627167 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f83a7b31-1947-4c74-8770-86b7a6906c1b-kube-api-access-whnk2" (OuterVolumeSpecName: "kube-api-access-whnk2") pod "f83a7b31-1947-4c74-8770-86b7a6906c1b" (UID: "f83a7b31-1947-4c74-8770-86b7a6906c1b"). InnerVolumeSpecName "kube-api-access-whnk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:15:17 crc kubenswrapper[4985]: I0127 09:15:17.652492 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f83a7b31-1947-4c74-8770-86b7a6906c1b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f83a7b31-1947-4c74-8770-86b7a6906c1b" (UID: "f83a7b31-1947-4c74-8770-86b7a6906c1b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:15:17 crc kubenswrapper[4985]: I0127 09:15:17.671550 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f83a7b31-1947-4c74-8770-86b7a6906c1b-config-data" (OuterVolumeSpecName: "config-data") pod "f83a7b31-1947-4c74-8770-86b7a6906c1b" (UID: "f83a7b31-1947-4c74-8770-86b7a6906c1b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:15:17 crc kubenswrapper[4985]: I0127 09:15:17.721190 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f83a7b31-1947-4c74-8770-86b7a6906c1b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:15:17 crc kubenswrapper[4985]: I0127 09:15:17.721230 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-whnk2\" (UniqueName: \"kubernetes.io/projected/f83a7b31-1947-4c74-8770-86b7a6906c1b-kube-api-access-whnk2\") on node \"crc\" DevicePath \"\"" Jan 27 09:15:17 crc kubenswrapper[4985]: I0127 09:15:17.721245 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f83a7b31-1947-4c74-8770-86b7a6906c1b-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 09:15:17 crc kubenswrapper[4985]: I0127 09:15:17.721255 4985 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f83a7b31-1947-4c74-8770-86b7a6906c1b-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 09:15:18 crc kubenswrapper[4985]: I0127 09:15:18.108704 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-crzqt" event={"ID":"f83a7b31-1947-4c74-8770-86b7a6906c1b","Type":"ContainerDied","Data":"cb874c0909b091017a48cc761d1a303a17850d6bfd0c877ac4385032f0aa0040"} Jan 27 09:15:18 crc kubenswrapper[4985]: I0127 09:15:18.108989 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb874c0909b091017a48cc761d1a303a17850d6bfd0c877ac4385032f0aa0040" Jan 27 09:15:18 crc kubenswrapper[4985]: I0127 09:15:18.108733 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-crzqt" Jan 27 09:15:18 crc kubenswrapper[4985]: E0127 09:15:18.261537 4985 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf83a7b31_1947_4c74_8770_86b7a6906c1b.slice\": RecentStats: unable to find data in memory cache]" Jan 27 09:15:18 crc kubenswrapper[4985]: I0127 09:15:18.319668 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 09:15:18 crc kubenswrapper[4985]: I0127 09:15:18.320039 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="9fe72d4a-d8bf-40ef-a7a6-45b6134c4ef9" containerName="nova-api-log" containerID="cri-o://7fab34aa9c4d9a5813fb43d34972fd3cb3721f2d7b7dbd4193a1de27d87848cc" gracePeriod=30 Jan 27 09:15:18 crc kubenswrapper[4985]: I0127 09:15:18.320480 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="9fe72d4a-d8bf-40ef-a7a6-45b6134c4ef9" containerName="nova-api-api" containerID="cri-o://1fd2eca80ebb650590ae4b5d9fe41917ad24c37d671567d88a55b65e2a9d98c8" gracePeriod=30 Jan 27 09:15:18 crc kubenswrapper[4985]: I0127 09:15:18.327944 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 09:15:18 crc kubenswrapper[4985]: I0127 09:15:18.328190 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="fb0837a9-681b-4ed4-bd39-df8ee55a7037" containerName="nova-scheduler-scheduler" containerID="cri-o://dc7ec5699374b7aabf91f5135d1dd779018a1fe423e190bdba885ab8eae71c35" gracePeriod=30 Jan 27 09:15:18 crc kubenswrapper[4985]: I0127 09:15:18.381486 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 09:15:18 crc kubenswrapper[4985]: I0127 09:15:18.382103 4985 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e8169c55-2bdc-44a1-b0ea-6ceef864c34e" containerName="nova-metadata-log" containerID="cri-o://62ed8d0635a8c5b12a14d3725da2f76ff74c2727c96fcf406f16fbda0558324c" gracePeriod=30 Jan 27 09:15:18 crc kubenswrapper[4985]: I0127 09:15:18.382721 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e8169c55-2bdc-44a1-b0ea-6ceef864c34e" containerName="nova-metadata-metadata" containerID="cri-o://114ffa12ce2a48d84c44d861e3f843b7acd8e064826aa7703202242e39a5abbc" gracePeriod=30 Jan 27 09:15:18 crc kubenswrapper[4985]: I0127 09:15:18.923404 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 09:15:19 crc kubenswrapper[4985]: I0127 09:15:19.049988 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9fe72d4a-d8bf-40ef-a7a6-45b6134c4ef9-logs\") pod \"9fe72d4a-d8bf-40ef-a7a6-45b6134c4ef9\" (UID: \"9fe72d4a-d8bf-40ef-a7a6-45b6134c4ef9\") " Jan 27 09:15:19 crc kubenswrapper[4985]: I0127 09:15:19.050138 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fe72d4a-d8bf-40ef-a7a6-45b6134c4ef9-combined-ca-bundle\") pod \"9fe72d4a-d8bf-40ef-a7a6-45b6134c4ef9\" (UID: \"9fe72d4a-d8bf-40ef-a7a6-45b6134c4ef9\") " Jan 27 09:15:19 crc kubenswrapper[4985]: I0127 09:15:19.050208 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fe72d4a-d8bf-40ef-a7a6-45b6134c4ef9-public-tls-certs\") pod \"9fe72d4a-d8bf-40ef-a7a6-45b6134c4ef9\" (UID: \"9fe72d4a-d8bf-40ef-a7a6-45b6134c4ef9\") " Jan 27 09:15:19 crc kubenswrapper[4985]: I0127 09:15:19.050274 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fe72d4a-d8bf-40ef-a7a6-45b6134c4ef9-internal-tls-certs\") pod \"9fe72d4a-d8bf-40ef-a7a6-45b6134c4ef9\" (UID: \"9fe72d4a-d8bf-40ef-a7a6-45b6134c4ef9\") " Jan 27 09:15:19 crc kubenswrapper[4985]: I0127 09:15:19.050425 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9fe72d4a-d8bf-40ef-a7a6-45b6134c4ef9-logs" (OuterVolumeSpecName: "logs") pod "9fe72d4a-d8bf-40ef-a7a6-45b6134c4ef9" (UID: "9fe72d4a-d8bf-40ef-a7a6-45b6134c4ef9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:15:19 crc kubenswrapper[4985]: I0127 09:15:19.051234 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fe72d4a-d8bf-40ef-a7a6-45b6134c4ef9-config-data\") pod \"9fe72d4a-d8bf-40ef-a7a6-45b6134c4ef9\" (UID: \"9fe72d4a-d8bf-40ef-a7a6-45b6134c4ef9\") " Jan 27 09:15:19 crc kubenswrapper[4985]: I0127 09:15:19.053123 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zlgn9\" (UniqueName: \"kubernetes.io/projected/9fe72d4a-d8bf-40ef-a7a6-45b6134c4ef9-kube-api-access-zlgn9\") pod \"9fe72d4a-d8bf-40ef-a7a6-45b6134c4ef9\" (UID: \"9fe72d4a-d8bf-40ef-a7a6-45b6134c4ef9\") " Jan 27 09:15:19 crc kubenswrapper[4985]: I0127 09:15:19.054931 4985 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9fe72d4a-d8bf-40ef-a7a6-45b6134c4ef9-logs\") on node \"crc\" DevicePath \"\"" Jan 27 09:15:19 crc kubenswrapper[4985]: I0127 09:15:19.057995 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fe72d4a-d8bf-40ef-a7a6-45b6134c4ef9-kube-api-access-zlgn9" (OuterVolumeSpecName: "kube-api-access-zlgn9") pod "9fe72d4a-d8bf-40ef-a7a6-45b6134c4ef9" (UID: "9fe72d4a-d8bf-40ef-a7a6-45b6134c4ef9"). InnerVolumeSpecName "kube-api-access-zlgn9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:15:19 crc kubenswrapper[4985]: I0127 09:15:19.092267 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fe72d4a-d8bf-40ef-a7a6-45b6134c4ef9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9fe72d4a-d8bf-40ef-a7a6-45b6134c4ef9" (UID: "9fe72d4a-d8bf-40ef-a7a6-45b6134c4ef9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:15:19 crc kubenswrapper[4985]: I0127 09:15:19.093229 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fe72d4a-d8bf-40ef-a7a6-45b6134c4ef9-config-data" (OuterVolumeSpecName: "config-data") pod "9fe72d4a-d8bf-40ef-a7a6-45b6134c4ef9" (UID: "9fe72d4a-d8bf-40ef-a7a6-45b6134c4ef9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:15:19 crc kubenswrapper[4985]: I0127 09:15:19.122482 4985 generic.go:334] "Generic (PLEG): container finished" podID="e8169c55-2bdc-44a1-b0ea-6ceef864c34e" containerID="62ed8d0635a8c5b12a14d3725da2f76ff74c2727c96fcf406f16fbda0558324c" exitCode=143 Jan 27 09:15:19 crc kubenswrapper[4985]: I0127 09:15:19.122647 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e8169c55-2bdc-44a1-b0ea-6ceef864c34e","Type":"ContainerDied","Data":"62ed8d0635a8c5b12a14d3725da2f76ff74c2727c96fcf406f16fbda0558324c"} Jan 27 09:15:19 crc kubenswrapper[4985]: I0127 09:15:19.125531 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fe72d4a-d8bf-40ef-a7a6-45b6134c4ef9-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "9fe72d4a-d8bf-40ef-a7a6-45b6134c4ef9" (UID: "9fe72d4a-d8bf-40ef-a7a6-45b6134c4ef9"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:15:19 crc kubenswrapper[4985]: I0127 09:15:19.125863 4985 generic.go:334] "Generic (PLEG): container finished" podID="9fe72d4a-d8bf-40ef-a7a6-45b6134c4ef9" containerID="1fd2eca80ebb650590ae4b5d9fe41917ad24c37d671567d88a55b65e2a9d98c8" exitCode=0 Jan 27 09:15:19 crc kubenswrapper[4985]: I0127 09:15:19.125924 4985 generic.go:334] "Generic (PLEG): container finished" podID="9fe72d4a-d8bf-40ef-a7a6-45b6134c4ef9" containerID="7fab34aa9c4d9a5813fb43d34972fd3cb3721f2d7b7dbd4193a1de27d87848cc" exitCode=143 Jan 27 09:15:19 crc kubenswrapper[4985]: I0127 09:15:19.125958 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9fe72d4a-d8bf-40ef-a7a6-45b6134c4ef9","Type":"ContainerDied","Data":"1fd2eca80ebb650590ae4b5d9fe41917ad24c37d671567d88a55b65e2a9d98c8"} Jan 27 09:15:19 crc kubenswrapper[4985]: I0127 09:15:19.126008 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9fe72d4a-d8bf-40ef-a7a6-45b6134c4ef9","Type":"ContainerDied","Data":"7fab34aa9c4d9a5813fb43d34972fd3cb3721f2d7b7dbd4193a1de27d87848cc"} Jan 27 09:15:19 crc kubenswrapper[4985]: I0127 09:15:19.126022 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9fe72d4a-d8bf-40ef-a7a6-45b6134c4ef9","Type":"ContainerDied","Data":"88c4084f5edd5eb91f9efb49dad6723a5d4cfffe6ab0a8c7d3a744f7abee2c8a"} Jan 27 09:15:19 crc kubenswrapper[4985]: I0127 09:15:19.126047 4985 scope.go:117] "RemoveContainer" containerID="1fd2eca80ebb650590ae4b5d9fe41917ad24c37d671567d88a55b65e2a9d98c8" Jan 27 09:15:19 crc kubenswrapper[4985]: I0127 09:15:19.126127 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 27 09:15:19 crc kubenswrapper[4985]: I0127 09:15:19.127188 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fe72d4a-d8bf-40ef-a7a6-45b6134c4ef9-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "9fe72d4a-d8bf-40ef-a7a6-45b6134c4ef9" (UID: "9fe72d4a-d8bf-40ef-a7a6-45b6134c4ef9"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:15:19 crc kubenswrapper[4985]: I0127 09:15:19.157453 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fe72d4a-d8bf-40ef-a7a6-45b6134c4ef9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:15:19 crc kubenswrapper[4985]: I0127 09:15:19.157496 4985 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fe72d4a-d8bf-40ef-a7a6-45b6134c4ef9-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 09:15:19 crc kubenswrapper[4985]: I0127 09:15:19.157505 4985 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fe72d4a-d8bf-40ef-a7a6-45b6134c4ef9-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 09:15:19 crc kubenswrapper[4985]: I0127 09:15:19.157736 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fe72d4a-d8bf-40ef-a7a6-45b6134c4ef9-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 09:15:19 crc kubenswrapper[4985]: I0127 09:15:19.157796 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zlgn9\" (UniqueName: \"kubernetes.io/projected/9fe72d4a-d8bf-40ef-a7a6-45b6134c4ef9-kube-api-access-zlgn9\") on node \"crc\" DevicePath \"\"" Jan 27 09:15:19 crc kubenswrapper[4985]: I0127 09:15:19.160846 4985 scope.go:117] "RemoveContainer" 
containerID="7fab34aa9c4d9a5813fb43d34972fd3cb3721f2d7b7dbd4193a1de27d87848cc" Jan 27 09:15:19 crc kubenswrapper[4985]: I0127 09:15:19.183186 4985 scope.go:117] "RemoveContainer" containerID="1fd2eca80ebb650590ae4b5d9fe41917ad24c37d671567d88a55b65e2a9d98c8" Jan 27 09:15:19 crc kubenswrapper[4985]: E0127 09:15:19.183680 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1fd2eca80ebb650590ae4b5d9fe41917ad24c37d671567d88a55b65e2a9d98c8\": container with ID starting with 1fd2eca80ebb650590ae4b5d9fe41917ad24c37d671567d88a55b65e2a9d98c8 not found: ID does not exist" containerID="1fd2eca80ebb650590ae4b5d9fe41917ad24c37d671567d88a55b65e2a9d98c8" Jan 27 09:15:19 crc kubenswrapper[4985]: I0127 09:15:19.183712 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1fd2eca80ebb650590ae4b5d9fe41917ad24c37d671567d88a55b65e2a9d98c8"} err="failed to get container status \"1fd2eca80ebb650590ae4b5d9fe41917ad24c37d671567d88a55b65e2a9d98c8\": rpc error: code = NotFound desc = could not find container \"1fd2eca80ebb650590ae4b5d9fe41917ad24c37d671567d88a55b65e2a9d98c8\": container with ID starting with 1fd2eca80ebb650590ae4b5d9fe41917ad24c37d671567d88a55b65e2a9d98c8 not found: ID does not exist" Jan 27 09:15:19 crc kubenswrapper[4985]: I0127 09:15:19.183733 4985 scope.go:117] "RemoveContainer" containerID="7fab34aa9c4d9a5813fb43d34972fd3cb3721f2d7b7dbd4193a1de27d87848cc" Jan 27 09:15:19 crc kubenswrapper[4985]: E0127 09:15:19.184080 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7fab34aa9c4d9a5813fb43d34972fd3cb3721f2d7b7dbd4193a1de27d87848cc\": container with ID starting with 7fab34aa9c4d9a5813fb43d34972fd3cb3721f2d7b7dbd4193a1de27d87848cc not found: ID does not exist" containerID="7fab34aa9c4d9a5813fb43d34972fd3cb3721f2d7b7dbd4193a1de27d87848cc" Jan 27 09:15:19 crc 
kubenswrapper[4985]: I0127 09:15:19.184104 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fab34aa9c4d9a5813fb43d34972fd3cb3721f2d7b7dbd4193a1de27d87848cc"} err="failed to get container status \"7fab34aa9c4d9a5813fb43d34972fd3cb3721f2d7b7dbd4193a1de27d87848cc\": rpc error: code = NotFound desc = could not find container \"7fab34aa9c4d9a5813fb43d34972fd3cb3721f2d7b7dbd4193a1de27d87848cc\": container with ID starting with 7fab34aa9c4d9a5813fb43d34972fd3cb3721f2d7b7dbd4193a1de27d87848cc not found: ID does not exist" Jan 27 09:15:19 crc kubenswrapper[4985]: I0127 09:15:19.184121 4985 scope.go:117] "RemoveContainer" containerID="1fd2eca80ebb650590ae4b5d9fe41917ad24c37d671567d88a55b65e2a9d98c8" Jan 27 09:15:19 crc kubenswrapper[4985]: I0127 09:15:19.184349 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1fd2eca80ebb650590ae4b5d9fe41917ad24c37d671567d88a55b65e2a9d98c8"} err="failed to get container status \"1fd2eca80ebb650590ae4b5d9fe41917ad24c37d671567d88a55b65e2a9d98c8\": rpc error: code = NotFound desc = could not find container \"1fd2eca80ebb650590ae4b5d9fe41917ad24c37d671567d88a55b65e2a9d98c8\": container with ID starting with 1fd2eca80ebb650590ae4b5d9fe41917ad24c37d671567d88a55b65e2a9d98c8 not found: ID does not exist" Jan 27 09:15:19 crc kubenswrapper[4985]: I0127 09:15:19.184376 4985 scope.go:117] "RemoveContainer" containerID="7fab34aa9c4d9a5813fb43d34972fd3cb3721f2d7b7dbd4193a1de27d87848cc" Jan 27 09:15:19 crc kubenswrapper[4985]: I0127 09:15:19.184675 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fab34aa9c4d9a5813fb43d34972fd3cb3721f2d7b7dbd4193a1de27d87848cc"} err="failed to get container status \"7fab34aa9c4d9a5813fb43d34972fd3cb3721f2d7b7dbd4193a1de27d87848cc\": rpc error: code = NotFound desc = could not find container \"7fab34aa9c4d9a5813fb43d34972fd3cb3721f2d7b7dbd4193a1de27d87848cc\": container 
with ID starting with 7fab34aa9c4d9a5813fb43d34972fd3cb3721f2d7b7dbd4193a1de27d87848cc not found: ID does not exist" Jan 27 09:15:19 crc kubenswrapper[4985]: I0127 09:15:19.569693 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 09:15:19 crc kubenswrapper[4985]: I0127 09:15:19.584475 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 27 09:15:19 crc kubenswrapper[4985]: I0127 09:15:19.599059 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 27 09:15:19 crc kubenswrapper[4985]: E0127 09:15:19.599674 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5758db5-8df4-4e50-a1b0-71ea5996f09a" containerName="dnsmasq-dns" Jan 27 09:15:19 crc kubenswrapper[4985]: I0127 09:15:19.599694 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5758db5-8df4-4e50-a1b0-71ea5996f09a" containerName="dnsmasq-dns" Jan 27 09:15:19 crc kubenswrapper[4985]: E0127 09:15:19.599718 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f83a7b31-1947-4c74-8770-86b7a6906c1b" containerName="nova-manage" Jan 27 09:15:19 crc kubenswrapper[4985]: I0127 09:15:19.599726 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="f83a7b31-1947-4c74-8770-86b7a6906c1b" containerName="nova-manage" Jan 27 09:15:19 crc kubenswrapper[4985]: E0127 09:15:19.599751 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5758db5-8df4-4e50-a1b0-71ea5996f09a" containerName="init" Jan 27 09:15:19 crc kubenswrapper[4985]: I0127 09:15:19.599758 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5758db5-8df4-4e50-a1b0-71ea5996f09a" containerName="init" Jan 27 09:15:19 crc kubenswrapper[4985]: E0127 09:15:19.599781 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fe72d4a-d8bf-40ef-a7a6-45b6134c4ef9" containerName="nova-api-api" Jan 27 09:15:19 crc kubenswrapper[4985]: I0127 09:15:19.599788 4985 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="9fe72d4a-d8bf-40ef-a7a6-45b6134c4ef9" containerName="nova-api-api" Jan 27 09:15:19 crc kubenswrapper[4985]: E0127 09:15:19.599803 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fe72d4a-d8bf-40ef-a7a6-45b6134c4ef9" containerName="nova-api-log" Jan 27 09:15:19 crc kubenswrapper[4985]: I0127 09:15:19.599811 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fe72d4a-d8bf-40ef-a7a6-45b6134c4ef9" containerName="nova-api-log" Jan 27 09:15:19 crc kubenswrapper[4985]: I0127 09:15:19.600031 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fe72d4a-d8bf-40ef-a7a6-45b6134c4ef9" containerName="nova-api-api" Jan 27 09:15:19 crc kubenswrapper[4985]: I0127 09:15:19.600052 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5758db5-8df4-4e50-a1b0-71ea5996f09a" containerName="dnsmasq-dns" Jan 27 09:15:19 crc kubenswrapper[4985]: I0127 09:15:19.600071 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="f83a7b31-1947-4c74-8770-86b7a6906c1b" containerName="nova-manage" Jan 27 09:15:19 crc kubenswrapper[4985]: I0127 09:15:19.600099 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fe72d4a-d8bf-40ef-a7a6-45b6134c4ef9" containerName="nova-api-log" Jan 27 09:15:19 crc kubenswrapper[4985]: I0127 09:15:19.601342 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 27 09:15:19 crc kubenswrapper[4985]: I0127 09:15:19.605260 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 27 09:15:19 crc kubenswrapper[4985]: I0127 09:15:19.605792 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 27 09:15:19 crc kubenswrapper[4985]: I0127 09:15:19.606007 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 27 09:15:19 crc kubenswrapper[4985]: I0127 09:15:19.609761 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 09:15:19 crc kubenswrapper[4985]: I0127 09:15:19.671247 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cklfk\" (UniqueName: \"kubernetes.io/projected/7e2b17df-10df-4a9d-b026-3bf7f1517776-kube-api-access-cklfk\") pod \"nova-api-0\" (UID: \"7e2b17df-10df-4a9d-b026-3bf7f1517776\") " pod="openstack/nova-api-0" Jan 27 09:15:19 crc kubenswrapper[4985]: I0127 09:15:19.671376 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e2b17df-10df-4a9d-b026-3bf7f1517776-logs\") pod \"nova-api-0\" (UID: \"7e2b17df-10df-4a9d-b026-3bf7f1517776\") " pod="openstack/nova-api-0" Jan 27 09:15:19 crc kubenswrapper[4985]: I0127 09:15:19.671415 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e2b17df-10df-4a9d-b026-3bf7f1517776-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7e2b17df-10df-4a9d-b026-3bf7f1517776\") " pod="openstack/nova-api-0" Jan 27 09:15:19 crc kubenswrapper[4985]: I0127 09:15:19.671503 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/7e2b17df-10df-4a9d-b026-3bf7f1517776-config-data\") pod \"nova-api-0\" (UID: \"7e2b17df-10df-4a9d-b026-3bf7f1517776\") " pod="openstack/nova-api-0" Jan 27 09:15:19 crc kubenswrapper[4985]: I0127 09:15:19.671554 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e2b17df-10df-4a9d-b026-3bf7f1517776-public-tls-certs\") pod \"nova-api-0\" (UID: \"7e2b17df-10df-4a9d-b026-3bf7f1517776\") " pod="openstack/nova-api-0" Jan 27 09:15:19 crc kubenswrapper[4985]: I0127 09:15:19.671600 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e2b17df-10df-4a9d-b026-3bf7f1517776-internal-tls-certs\") pod \"nova-api-0\" (UID: \"7e2b17df-10df-4a9d-b026-3bf7f1517776\") " pod="openstack/nova-api-0" Jan 27 09:15:19 crc kubenswrapper[4985]: I0127 09:15:19.775397 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cklfk\" (UniqueName: \"kubernetes.io/projected/7e2b17df-10df-4a9d-b026-3bf7f1517776-kube-api-access-cklfk\") pod \"nova-api-0\" (UID: \"7e2b17df-10df-4a9d-b026-3bf7f1517776\") " pod="openstack/nova-api-0" Jan 27 09:15:19 crc kubenswrapper[4985]: I0127 09:15:19.775849 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e2b17df-10df-4a9d-b026-3bf7f1517776-logs\") pod \"nova-api-0\" (UID: \"7e2b17df-10df-4a9d-b026-3bf7f1517776\") " pod="openstack/nova-api-0" Jan 27 09:15:19 crc kubenswrapper[4985]: I0127 09:15:19.775900 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e2b17df-10df-4a9d-b026-3bf7f1517776-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7e2b17df-10df-4a9d-b026-3bf7f1517776\") " pod="openstack/nova-api-0" Jan 27 
09:15:19 crc kubenswrapper[4985]: I0127 09:15:19.776039 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e2b17df-10df-4a9d-b026-3bf7f1517776-config-data\") pod \"nova-api-0\" (UID: \"7e2b17df-10df-4a9d-b026-3bf7f1517776\") " pod="openstack/nova-api-0" Jan 27 09:15:19 crc kubenswrapper[4985]: I0127 09:15:19.776075 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e2b17df-10df-4a9d-b026-3bf7f1517776-public-tls-certs\") pod \"nova-api-0\" (UID: \"7e2b17df-10df-4a9d-b026-3bf7f1517776\") " pod="openstack/nova-api-0" Jan 27 09:15:19 crc kubenswrapper[4985]: I0127 09:15:19.776129 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e2b17df-10df-4a9d-b026-3bf7f1517776-internal-tls-certs\") pod \"nova-api-0\" (UID: \"7e2b17df-10df-4a9d-b026-3bf7f1517776\") " pod="openstack/nova-api-0" Jan 27 09:15:19 crc kubenswrapper[4985]: I0127 09:15:19.776453 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e2b17df-10df-4a9d-b026-3bf7f1517776-logs\") pod \"nova-api-0\" (UID: \"7e2b17df-10df-4a9d-b026-3bf7f1517776\") " pod="openstack/nova-api-0" Jan 27 09:15:19 crc kubenswrapper[4985]: I0127 09:15:19.781425 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e2b17df-10df-4a9d-b026-3bf7f1517776-internal-tls-certs\") pod \"nova-api-0\" (UID: \"7e2b17df-10df-4a9d-b026-3bf7f1517776\") " pod="openstack/nova-api-0" Jan 27 09:15:19 crc kubenswrapper[4985]: I0127 09:15:19.781776 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e2b17df-10df-4a9d-b026-3bf7f1517776-combined-ca-bundle\") pod \"nova-api-0\" 
(UID: \"7e2b17df-10df-4a9d-b026-3bf7f1517776\") " pod="openstack/nova-api-0" Jan 27 09:15:19 crc kubenswrapper[4985]: I0127 09:15:19.783537 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e2b17df-10df-4a9d-b026-3bf7f1517776-public-tls-certs\") pod \"nova-api-0\" (UID: \"7e2b17df-10df-4a9d-b026-3bf7f1517776\") " pod="openstack/nova-api-0" Jan 27 09:15:19 crc kubenswrapper[4985]: I0127 09:15:19.783760 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e2b17df-10df-4a9d-b026-3bf7f1517776-config-data\") pod \"nova-api-0\" (UID: \"7e2b17df-10df-4a9d-b026-3bf7f1517776\") " pod="openstack/nova-api-0" Jan 27 09:15:19 crc kubenswrapper[4985]: I0127 09:15:19.795474 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cklfk\" (UniqueName: \"kubernetes.io/projected/7e2b17df-10df-4a9d-b026-3bf7f1517776-kube-api-access-cklfk\") pod \"nova-api-0\" (UID: \"7e2b17df-10df-4a9d-b026-3bf7f1517776\") " pod="openstack/nova-api-0" Jan 27 09:15:19 crc kubenswrapper[4985]: I0127 09:15:19.882591 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 09:15:19 crc kubenswrapper[4985]: I0127 09:15:19.931895 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 27 09:15:19 crc kubenswrapper[4985]: I0127 09:15:19.981180 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb0837a9-681b-4ed4-bd39-df8ee55a7037-combined-ca-bundle\") pod \"fb0837a9-681b-4ed4-bd39-df8ee55a7037\" (UID: \"fb0837a9-681b-4ed4-bd39-df8ee55a7037\") " Jan 27 09:15:19 crc kubenswrapper[4985]: I0127 09:15:19.981307 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb0837a9-681b-4ed4-bd39-df8ee55a7037-config-data\") pod \"fb0837a9-681b-4ed4-bd39-df8ee55a7037\" (UID: \"fb0837a9-681b-4ed4-bd39-df8ee55a7037\") " Jan 27 09:15:19 crc kubenswrapper[4985]: I0127 09:15:19.981557 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vnl97\" (UniqueName: \"kubernetes.io/projected/fb0837a9-681b-4ed4-bd39-df8ee55a7037-kube-api-access-vnl97\") pod \"fb0837a9-681b-4ed4-bd39-df8ee55a7037\" (UID: \"fb0837a9-681b-4ed4-bd39-df8ee55a7037\") " Jan 27 09:15:19 crc kubenswrapper[4985]: I0127 09:15:19.987246 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb0837a9-681b-4ed4-bd39-df8ee55a7037-kube-api-access-vnl97" (OuterVolumeSpecName: "kube-api-access-vnl97") pod "fb0837a9-681b-4ed4-bd39-df8ee55a7037" (UID: "fb0837a9-681b-4ed4-bd39-df8ee55a7037"). InnerVolumeSpecName "kube-api-access-vnl97". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:15:20 crc kubenswrapper[4985]: I0127 09:15:20.013056 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb0837a9-681b-4ed4-bd39-df8ee55a7037-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fb0837a9-681b-4ed4-bd39-df8ee55a7037" (UID: "fb0837a9-681b-4ed4-bd39-df8ee55a7037"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:15:20 crc kubenswrapper[4985]: I0127 09:15:20.014907 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb0837a9-681b-4ed4-bd39-df8ee55a7037-config-data" (OuterVolumeSpecName: "config-data") pod "fb0837a9-681b-4ed4-bd39-df8ee55a7037" (UID: "fb0837a9-681b-4ed4-bd39-df8ee55a7037"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:15:20 crc kubenswrapper[4985]: I0127 09:15:20.085036 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vnl97\" (UniqueName: \"kubernetes.io/projected/fb0837a9-681b-4ed4-bd39-df8ee55a7037-kube-api-access-vnl97\") on node \"crc\" DevicePath \"\"" Jan 27 09:15:20 crc kubenswrapper[4985]: I0127 09:15:20.085099 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb0837a9-681b-4ed4-bd39-df8ee55a7037-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:15:20 crc kubenswrapper[4985]: I0127 09:15:20.085118 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb0837a9-681b-4ed4-bd39-df8ee55a7037-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 09:15:20 crc kubenswrapper[4985]: I0127 09:15:20.146929 4985 generic.go:334] "Generic (PLEG): container finished" podID="fb0837a9-681b-4ed4-bd39-df8ee55a7037" containerID="dc7ec5699374b7aabf91f5135d1dd779018a1fe423e190bdba885ab8eae71c35" exitCode=0 Jan 27 09:15:20 crc kubenswrapper[4985]: I0127 09:15:20.147025 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"fb0837a9-681b-4ed4-bd39-df8ee55a7037","Type":"ContainerDied","Data":"dc7ec5699374b7aabf91f5135d1dd779018a1fe423e190bdba885ab8eae71c35"} Jan 27 09:15:20 crc kubenswrapper[4985]: I0127 09:15:20.147066 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-scheduler-0" event={"ID":"fb0837a9-681b-4ed4-bd39-df8ee55a7037","Type":"ContainerDied","Data":"1d94eb7fd34f97ba24914bf3463f7cfc3b8e4f1b8d13b3429b6ad37a208206e2"} Jan 27 09:15:20 crc kubenswrapper[4985]: I0127 09:15:20.147088 4985 scope.go:117] "RemoveContainer" containerID="dc7ec5699374b7aabf91f5135d1dd779018a1fe423e190bdba885ab8eae71c35" Jan 27 09:15:20 crc kubenswrapper[4985]: I0127 09:15:20.147243 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 09:15:20 crc kubenswrapper[4985]: I0127 09:15:20.200421 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 09:15:20 crc kubenswrapper[4985]: I0127 09:15:20.201282 4985 scope.go:117] "RemoveContainer" containerID="dc7ec5699374b7aabf91f5135d1dd779018a1fe423e190bdba885ab8eae71c35" Jan 27 09:15:20 crc kubenswrapper[4985]: E0127 09:15:20.202080 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc7ec5699374b7aabf91f5135d1dd779018a1fe423e190bdba885ab8eae71c35\": container with ID starting with dc7ec5699374b7aabf91f5135d1dd779018a1fe423e190bdba885ab8eae71c35 not found: ID does not exist" containerID="dc7ec5699374b7aabf91f5135d1dd779018a1fe423e190bdba885ab8eae71c35" Jan 27 09:15:20 crc kubenswrapper[4985]: I0127 09:15:20.202145 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc7ec5699374b7aabf91f5135d1dd779018a1fe423e190bdba885ab8eae71c35"} err="failed to get container status \"dc7ec5699374b7aabf91f5135d1dd779018a1fe423e190bdba885ab8eae71c35\": rpc error: code = NotFound desc = could not find container \"dc7ec5699374b7aabf91f5135d1dd779018a1fe423e190bdba885ab8eae71c35\": container with ID starting with dc7ec5699374b7aabf91f5135d1dd779018a1fe423e190bdba885ab8eae71c35 not found: ID does not exist" Jan 27 09:15:20 crc kubenswrapper[4985]: I0127 09:15:20.219978 4985 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 09:15:20 crc kubenswrapper[4985]: I0127 09:15:20.251778 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 09:15:20 crc kubenswrapper[4985]: E0127 09:15:20.252259 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb0837a9-681b-4ed4-bd39-df8ee55a7037" containerName="nova-scheduler-scheduler" Jan 27 09:15:20 crc kubenswrapper[4985]: I0127 09:15:20.252281 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb0837a9-681b-4ed4-bd39-df8ee55a7037" containerName="nova-scheduler-scheduler" Jan 27 09:15:20 crc kubenswrapper[4985]: I0127 09:15:20.252477 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb0837a9-681b-4ed4-bd39-df8ee55a7037" containerName="nova-scheduler-scheduler" Jan 27 09:15:20 crc kubenswrapper[4985]: I0127 09:15:20.253554 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 09:15:20 crc kubenswrapper[4985]: I0127 09:15:20.256153 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 27 09:15:20 crc kubenswrapper[4985]: I0127 09:15:20.265816 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 09:15:20 crc kubenswrapper[4985]: I0127 09:15:20.395825 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcf1c027-3aed-4213-b1ce-2f9fcab702aa-config-data\") pod \"nova-scheduler-0\" (UID: \"dcf1c027-3aed-4213-b1ce-2f9fcab702aa\") " pod="openstack/nova-scheduler-0" Jan 27 09:15:20 crc kubenswrapper[4985]: I0127 09:15:20.395932 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4tt6\" (UniqueName: 
\"kubernetes.io/projected/dcf1c027-3aed-4213-b1ce-2f9fcab702aa-kube-api-access-t4tt6\") pod \"nova-scheduler-0\" (UID: \"dcf1c027-3aed-4213-b1ce-2f9fcab702aa\") " pod="openstack/nova-scheduler-0" Jan 27 09:15:20 crc kubenswrapper[4985]: I0127 09:15:20.396114 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcf1c027-3aed-4213-b1ce-2f9fcab702aa-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"dcf1c027-3aed-4213-b1ce-2f9fcab702aa\") " pod="openstack/nova-scheduler-0" Jan 27 09:15:20 crc kubenswrapper[4985]: I0127 09:15:20.472125 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fe72d4a-d8bf-40ef-a7a6-45b6134c4ef9" path="/var/lib/kubelet/pods/9fe72d4a-d8bf-40ef-a7a6-45b6134c4ef9/volumes" Jan 27 09:15:20 crc kubenswrapper[4985]: I0127 09:15:20.473731 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb0837a9-681b-4ed4-bd39-df8ee55a7037" path="/var/lib/kubelet/pods/fb0837a9-681b-4ed4-bd39-df8ee55a7037/volumes" Jan 27 09:15:20 crc kubenswrapper[4985]: I0127 09:15:20.474619 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 09:15:20 crc kubenswrapper[4985]: I0127 09:15:20.499695 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcf1c027-3aed-4213-b1ce-2f9fcab702aa-config-data\") pod \"nova-scheduler-0\" (UID: \"dcf1c027-3aed-4213-b1ce-2f9fcab702aa\") " pod="openstack/nova-scheduler-0" Jan 27 09:15:20 crc kubenswrapper[4985]: I0127 09:15:20.499791 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4tt6\" (UniqueName: \"kubernetes.io/projected/dcf1c027-3aed-4213-b1ce-2f9fcab702aa-kube-api-access-t4tt6\") pod \"nova-scheduler-0\" (UID: \"dcf1c027-3aed-4213-b1ce-2f9fcab702aa\") " pod="openstack/nova-scheduler-0" Jan 27 09:15:20 crc 
kubenswrapper[4985]: I0127 09:15:20.499850 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcf1c027-3aed-4213-b1ce-2f9fcab702aa-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"dcf1c027-3aed-4213-b1ce-2f9fcab702aa\") " pod="openstack/nova-scheduler-0" Jan 27 09:15:20 crc kubenswrapper[4985]: I0127 09:15:20.508694 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcf1c027-3aed-4213-b1ce-2f9fcab702aa-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"dcf1c027-3aed-4213-b1ce-2f9fcab702aa\") " pod="openstack/nova-scheduler-0" Jan 27 09:15:20 crc kubenswrapper[4985]: I0127 09:15:20.509574 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcf1c027-3aed-4213-b1ce-2f9fcab702aa-config-data\") pod \"nova-scheduler-0\" (UID: \"dcf1c027-3aed-4213-b1ce-2f9fcab702aa\") " pod="openstack/nova-scheduler-0" Jan 27 09:15:20 crc kubenswrapper[4985]: I0127 09:15:20.520182 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4tt6\" (UniqueName: \"kubernetes.io/projected/dcf1c027-3aed-4213-b1ce-2f9fcab702aa-kube-api-access-t4tt6\") pod \"nova-scheduler-0\" (UID: \"dcf1c027-3aed-4213-b1ce-2f9fcab702aa\") " pod="openstack/nova-scheduler-0" Jan 27 09:15:20 crc kubenswrapper[4985]: I0127 09:15:20.580576 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 09:15:21 crc kubenswrapper[4985]: I0127 09:15:21.089974 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 09:15:21 crc kubenswrapper[4985]: W0127 09:15:21.092296 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddcf1c027_3aed_4213_b1ce_2f9fcab702aa.slice/crio-61a4d9f07378186d764cd8ba37bfd00fcdd5427b907cb1b7721f82c3ba0cd3b9 WatchSource:0}: Error finding container 61a4d9f07378186d764cd8ba37bfd00fcdd5427b907cb1b7721f82c3ba0cd3b9: Status 404 returned error can't find the container with id 61a4d9f07378186d764cd8ba37bfd00fcdd5427b907cb1b7721f82c3ba0cd3b9 Jan 27 09:15:21 crc kubenswrapper[4985]: I0127 09:15:21.168014 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"dcf1c027-3aed-4213-b1ce-2f9fcab702aa","Type":"ContainerStarted","Data":"61a4d9f07378186d764cd8ba37bfd00fcdd5427b907cb1b7721f82c3ba0cd3b9"} Jan 27 09:15:21 crc kubenswrapper[4985]: I0127 09:15:21.172416 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7e2b17df-10df-4a9d-b026-3bf7f1517776","Type":"ContainerStarted","Data":"286ec3c817da6030269a7145b36a679536d740ebdf5c890add4145d061780a7d"} Jan 27 09:15:21 crc kubenswrapper[4985]: I0127 09:15:21.172477 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7e2b17df-10df-4a9d-b026-3bf7f1517776","Type":"ContainerStarted","Data":"17ec1d244b3e94b9ebdf61b9b8b73fcdc29f8f5dbdf33b39aa94f9dd834e1f2a"} Jan 27 09:15:21 crc kubenswrapper[4985]: I0127 09:15:21.172497 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7e2b17df-10df-4a9d-b026-3bf7f1517776","Type":"ContainerStarted","Data":"cc8449e94045bbfec6f3124281689711dcbba69b90a0f122dde0db1a8e59a7e1"} Jan 27 09:15:22 crc kubenswrapper[4985]: I0127 
09:15:22.031102 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 09:15:22 crc kubenswrapper[4985]: I0127 09:15:22.074197 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.074142226 podStartE2EDuration="3.074142226s" podCreationTimestamp="2026-01-27 09:15:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:15:21.19699405 +0000 UTC m=+1305.488088901" watchObservedRunningTime="2026-01-27 09:15:22.074142226 +0000 UTC m=+1306.365237067" Jan 27 09:15:22 crc kubenswrapper[4985]: I0127 09:15:22.138764 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8169c55-2bdc-44a1-b0ea-6ceef864c34e-config-data\") pod \"e8169c55-2bdc-44a1-b0ea-6ceef864c34e\" (UID: \"e8169c55-2bdc-44a1-b0ea-6ceef864c34e\") " Jan 27 09:15:22 crc kubenswrapper[4985]: I0127 09:15:22.138991 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8169c55-2bdc-44a1-b0ea-6ceef864c34e-nova-metadata-tls-certs\") pod \"e8169c55-2bdc-44a1-b0ea-6ceef864c34e\" (UID: \"e8169c55-2bdc-44a1-b0ea-6ceef864c34e\") " Jan 27 09:15:22 crc kubenswrapper[4985]: I0127 09:15:22.139107 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r5s5j\" (UniqueName: \"kubernetes.io/projected/e8169c55-2bdc-44a1-b0ea-6ceef864c34e-kube-api-access-r5s5j\") pod \"e8169c55-2bdc-44a1-b0ea-6ceef864c34e\" (UID: \"e8169c55-2bdc-44a1-b0ea-6ceef864c34e\") " Jan 27 09:15:22 crc kubenswrapper[4985]: I0127 09:15:22.139223 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/e8169c55-2bdc-44a1-b0ea-6ceef864c34e-logs\") pod \"e8169c55-2bdc-44a1-b0ea-6ceef864c34e\" (UID: \"e8169c55-2bdc-44a1-b0ea-6ceef864c34e\") " Jan 27 09:15:22 crc kubenswrapper[4985]: I0127 09:15:22.139295 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8169c55-2bdc-44a1-b0ea-6ceef864c34e-combined-ca-bundle\") pod \"e8169c55-2bdc-44a1-b0ea-6ceef864c34e\" (UID: \"e8169c55-2bdc-44a1-b0ea-6ceef864c34e\") " Jan 27 09:15:22 crc kubenswrapper[4985]: I0127 09:15:22.142491 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8169c55-2bdc-44a1-b0ea-6ceef864c34e-logs" (OuterVolumeSpecName: "logs") pod "e8169c55-2bdc-44a1-b0ea-6ceef864c34e" (UID: "e8169c55-2bdc-44a1-b0ea-6ceef864c34e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:15:22 crc kubenswrapper[4985]: I0127 09:15:22.151495 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8169c55-2bdc-44a1-b0ea-6ceef864c34e-kube-api-access-r5s5j" (OuterVolumeSpecName: "kube-api-access-r5s5j") pod "e8169c55-2bdc-44a1-b0ea-6ceef864c34e" (UID: "e8169c55-2bdc-44a1-b0ea-6ceef864c34e"). InnerVolumeSpecName "kube-api-access-r5s5j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:15:22 crc kubenswrapper[4985]: I0127 09:15:22.181077 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8169c55-2bdc-44a1-b0ea-6ceef864c34e-config-data" (OuterVolumeSpecName: "config-data") pod "e8169c55-2bdc-44a1-b0ea-6ceef864c34e" (UID: "e8169c55-2bdc-44a1-b0ea-6ceef864c34e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:15:22 crc kubenswrapper[4985]: I0127 09:15:22.186904 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8169c55-2bdc-44a1-b0ea-6ceef864c34e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e8169c55-2bdc-44a1-b0ea-6ceef864c34e" (UID: "e8169c55-2bdc-44a1-b0ea-6ceef864c34e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:15:22 crc kubenswrapper[4985]: I0127 09:15:22.196269 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 09:15:22 crc kubenswrapper[4985]: I0127 09:15:22.196293 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e8169c55-2bdc-44a1-b0ea-6ceef864c34e","Type":"ContainerDied","Data":"114ffa12ce2a48d84c44d861e3f843b7acd8e064826aa7703202242e39a5abbc"} Jan 27 09:15:22 crc kubenswrapper[4985]: I0127 09:15:22.196348 4985 scope.go:117] "RemoveContainer" containerID="114ffa12ce2a48d84c44d861e3f843b7acd8e064826aa7703202242e39a5abbc" Jan 27 09:15:22 crc kubenswrapper[4985]: I0127 09:15:22.196145 4985 generic.go:334] "Generic (PLEG): container finished" podID="e8169c55-2bdc-44a1-b0ea-6ceef864c34e" containerID="114ffa12ce2a48d84c44d861e3f843b7acd8e064826aa7703202242e39a5abbc" exitCode=0 Jan 27 09:15:22 crc kubenswrapper[4985]: I0127 09:15:22.196679 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e8169c55-2bdc-44a1-b0ea-6ceef864c34e","Type":"ContainerDied","Data":"77c544da084cc426cf03ce5edf9c5585481f58cb21105aae9ebb4e6ec3ab3b02"} Jan 27 09:15:22 crc kubenswrapper[4985]: I0127 09:15:22.199692 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"dcf1c027-3aed-4213-b1ce-2f9fcab702aa","Type":"ContainerStarted","Data":"589cb8d0328c30493a415d367f844167c6192ded6bd85f36ec938643fc6fe7e4"} Jan 27 09:15:22 crc kubenswrapper[4985]: I0127 09:15:22.227965 4985 scope.go:117] "RemoveContainer" containerID="62ed8d0635a8c5b12a14d3725da2f76ff74c2727c96fcf406f16fbda0558324c" Jan 27 09:15:22 crc kubenswrapper[4985]: I0127 09:15:22.230648 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.230626238 podStartE2EDuration="2.230626238s" podCreationTimestamp="2026-01-27 09:15:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:15:22.226180686 +0000 UTC m=+1306.517275527" watchObservedRunningTime="2026-01-27 09:15:22.230626238 +0000 UTC m=+1306.521721079" Jan 27 09:15:22 crc kubenswrapper[4985]: I0127 09:15:22.242902 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8169c55-2bdc-44a1-b0ea-6ceef864c34e-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 09:15:22 crc kubenswrapper[4985]: I0127 09:15:22.242944 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r5s5j\" (UniqueName: \"kubernetes.io/projected/e8169c55-2bdc-44a1-b0ea-6ceef864c34e-kube-api-access-r5s5j\") on node \"crc\" DevicePath \"\"" Jan 27 09:15:22 crc kubenswrapper[4985]: I0127 09:15:22.242957 4985 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e8169c55-2bdc-44a1-b0ea-6ceef864c34e-logs\") on node \"crc\" DevicePath \"\"" Jan 27 09:15:22 crc kubenswrapper[4985]: I0127 09:15:22.242968 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8169c55-2bdc-44a1-b0ea-6ceef864c34e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:15:22 crc kubenswrapper[4985]: I0127 
09:15:22.248146 4985 scope.go:117] "RemoveContainer" containerID="114ffa12ce2a48d84c44d861e3f843b7acd8e064826aa7703202242e39a5abbc" Jan 27 09:15:22 crc kubenswrapper[4985]: E0127 09:15:22.249615 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"114ffa12ce2a48d84c44d861e3f843b7acd8e064826aa7703202242e39a5abbc\": container with ID starting with 114ffa12ce2a48d84c44d861e3f843b7acd8e064826aa7703202242e39a5abbc not found: ID does not exist" containerID="114ffa12ce2a48d84c44d861e3f843b7acd8e064826aa7703202242e39a5abbc" Jan 27 09:15:22 crc kubenswrapper[4985]: I0127 09:15:22.249660 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"114ffa12ce2a48d84c44d861e3f843b7acd8e064826aa7703202242e39a5abbc"} err="failed to get container status \"114ffa12ce2a48d84c44d861e3f843b7acd8e064826aa7703202242e39a5abbc\": rpc error: code = NotFound desc = could not find container \"114ffa12ce2a48d84c44d861e3f843b7acd8e064826aa7703202242e39a5abbc\": container with ID starting with 114ffa12ce2a48d84c44d861e3f843b7acd8e064826aa7703202242e39a5abbc not found: ID does not exist" Jan 27 09:15:22 crc kubenswrapper[4985]: I0127 09:15:22.249693 4985 scope.go:117] "RemoveContainer" containerID="62ed8d0635a8c5b12a14d3725da2f76ff74c2727c96fcf406f16fbda0558324c" Jan 27 09:15:22 crc kubenswrapper[4985]: E0127 09:15:22.251774 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62ed8d0635a8c5b12a14d3725da2f76ff74c2727c96fcf406f16fbda0558324c\": container with ID starting with 62ed8d0635a8c5b12a14d3725da2f76ff74c2727c96fcf406f16fbda0558324c not found: ID does not exist" containerID="62ed8d0635a8c5b12a14d3725da2f76ff74c2727c96fcf406f16fbda0558324c" Jan 27 09:15:22 crc kubenswrapper[4985]: I0127 09:15:22.251846 4985 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"62ed8d0635a8c5b12a14d3725da2f76ff74c2727c96fcf406f16fbda0558324c"} err="failed to get container status \"62ed8d0635a8c5b12a14d3725da2f76ff74c2727c96fcf406f16fbda0558324c\": rpc error: code = NotFound desc = could not find container \"62ed8d0635a8c5b12a14d3725da2f76ff74c2727c96fcf406f16fbda0558324c\": container with ID starting with 62ed8d0635a8c5b12a14d3725da2f76ff74c2727c96fcf406f16fbda0558324c not found: ID does not exist" Jan 27 09:15:22 crc kubenswrapper[4985]: I0127 09:15:22.255693 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8169c55-2bdc-44a1-b0ea-6ceef864c34e-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "e8169c55-2bdc-44a1-b0ea-6ceef864c34e" (UID: "e8169c55-2bdc-44a1-b0ea-6ceef864c34e"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:15:22 crc kubenswrapper[4985]: I0127 09:15:22.349633 4985 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8169c55-2bdc-44a1-b0ea-6ceef864c34e-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 09:15:22 crc kubenswrapper[4985]: I0127 09:15:22.523966 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 09:15:22 crc kubenswrapper[4985]: I0127 09:15:22.534215 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 09:15:22 crc kubenswrapper[4985]: I0127 09:15:22.548841 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 27 09:15:22 crc kubenswrapper[4985]: E0127 09:15:22.549431 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8169c55-2bdc-44a1-b0ea-6ceef864c34e" containerName="nova-metadata-log" Jan 27 09:15:22 crc kubenswrapper[4985]: I0127 09:15:22.549520 4985 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e8169c55-2bdc-44a1-b0ea-6ceef864c34e" containerName="nova-metadata-log" Jan 27 09:15:22 crc kubenswrapper[4985]: E0127 09:15:22.549564 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8169c55-2bdc-44a1-b0ea-6ceef864c34e" containerName="nova-metadata-metadata" Jan 27 09:15:22 crc kubenswrapper[4985]: I0127 09:15:22.549572 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8169c55-2bdc-44a1-b0ea-6ceef864c34e" containerName="nova-metadata-metadata" Jan 27 09:15:22 crc kubenswrapper[4985]: I0127 09:15:22.549749 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8169c55-2bdc-44a1-b0ea-6ceef864c34e" containerName="nova-metadata-log" Jan 27 09:15:22 crc kubenswrapper[4985]: I0127 09:15:22.549787 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8169c55-2bdc-44a1-b0ea-6ceef864c34e" containerName="nova-metadata-metadata" Jan 27 09:15:22 crc kubenswrapper[4985]: I0127 09:15:22.557139 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 09:15:22 crc kubenswrapper[4985]: I0127 09:15:22.561722 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 27 09:15:22 crc kubenswrapper[4985]: I0127 09:15:22.561930 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 27 09:15:22 crc kubenswrapper[4985]: I0127 09:15:22.608769 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 09:15:22 crc kubenswrapper[4985]: I0127 09:15:22.660906 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7bb7cf54-ef5f-41f6-b383-48b387842365-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7bb7cf54-ef5f-41f6-b383-48b387842365\") " pod="openstack/nova-metadata-0" Jan 27 09:15:22 crc kubenswrapper[4985]: I0127 09:15:22.660972 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bb7cf54-ef5f-41f6-b383-48b387842365-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7bb7cf54-ef5f-41f6-b383-48b387842365\") " pod="openstack/nova-metadata-0" Jan 27 09:15:22 crc kubenswrapper[4985]: I0127 09:15:22.661148 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7bb7cf54-ef5f-41f6-b383-48b387842365-logs\") pod \"nova-metadata-0\" (UID: \"7bb7cf54-ef5f-41f6-b383-48b387842365\") " pod="openstack/nova-metadata-0" Jan 27 09:15:22 crc kubenswrapper[4985]: I0127 09:15:22.661181 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hljcj\" (UniqueName: \"kubernetes.io/projected/7bb7cf54-ef5f-41f6-b383-48b387842365-kube-api-access-hljcj\") pod 
\"nova-metadata-0\" (UID: \"7bb7cf54-ef5f-41f6-b383-48b387842365\") " pod="openstack/nova-metadata-0" Jan 27 09:15:22 crc kubenswrapper[4985]: I0127 09:15:22.661199 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bb7cf54-ef5f-41f6-b383-48b387842365-config-data\") pod \"nova-metadata-0\" (UID: \"7bb7cf54-ef5f-41f6-b383-48b387842365\") " pod="openstack/nova-metadata-0" Jan 27 09:15:22 crc kubenswrapper[4985]: I0127 09:15:22.763587 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7bb7cf54-ef5f-41f6-b383-48b387842365-logs\") pod \"nova-metadata-0\" (UID: \"7bb7cf54-ef5f-41f6-b383-48b387842365\") " pod="openstack/nova-metadata-0" Jan 27 09:15:22 crc kubenswrapper[4985]: I0127 09:15:22.763649 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hljcj\" (UniqueName: \"kubernetes.io/projected/7bb7cf54-ef5f-41f6-b383-48b387842365-kube-api-access-hljcj\") pod \"nova-metadata-0\" (UID: \"7bb7cf54-ef5f-41f6-b383-48b387842365\") " pod="openstack/nova-metadata-0" Jan 27 09:15:22 crc kubenswrapper[4985]: I0127 09:15:22.763672 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bb7cf54-ef5f-41f6-b383-48b387842365-config-data\") pod \"nova-metadata-0\" (UID: \"7bb7cf54-ef5f-41f6-b383-48b387842365\") " pod="openstack/nova-metadata-0" Jan 27 09:15:22 crc kubenswrapper[4985]: I0127 09:15:22.763715 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7bb7cf54-ef5f-41f6-b383-48b387842365-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7bb7cf54-ef5f-41f6-b383-48b387842365\") " pod="openstack/nova-metadata-0" Jan 27 09:15:22 crc kubenswrapper[4985]: I0127 09:15:22.763737 4985 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bb7cf54-ef5f-41f6-b383-48b387842365-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7bb7cf54-ef5f-41f6-b383-48b387842365\") " pod="openstack/nova-metadata-0" Jan 27 09:15:22 crc kubenswrapper[4985]: I0127 09:15:22.764203 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7bb7cf54-ef5f-41f6-b383-48b387842365-logs\") pod \"nova-metadata-0\" (UID: \"7bb7cf54-ef5f-41f6-b383-48b387842365\") " pod="openstack/nova-metadata-0" Jan 27 09:15:22 crc kubenswrapper[4985]: I0127 09:15:22.771637 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7bb7cf54-ef5f-41f6-b383-48b387842365-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7bb7cf54-ef5f-41f6-b383-48b387842365\") " pod="openstack/nova-metadata-0" Jan 27 09:15:22 crc kubenswrapper[4985]: I0127 09:15:22.771651 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bb7cf54-ef5f-41f6-b383-48b387842365-config-data\") pod \"nova-metadata-0\" (UID: \"7bb7cf54-ef5f-41f6-b383-48b387842365\") " pod="openstack/nova-metadata-0" Jan 27 09:15:22 crc kubenswrapper[4985]: I0127 09:15:22.773235 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bb7cf54-ef5f-41f6-b383-48b387842365-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7bb7cf54-ef5f-41f6-b383-48b387842365\") " pod="openstack/nova-metadata-0" Jan 27 09:15:22 crc kubenswrapper[4985]: I0127 09:15:22.789325 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hljcj\" (UniqueName: \"kubernetes.io/projected/7bb7cf54-ef5f-41f6-b383-48b387842365-kube-api-access-hljcj\") pod \"nova-metadata-0\" 
(UID: \"7bb7cf54-ef5f-41f6-b383-48b387842365\") " pod="openstack/nova-metadata-0" Jan 27 09:15:22 crc kubenswrapper[4985]: I0127 09:15:22.886884 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 09:15:23 crc kubenswrapper[4985]: W0127 09:15:23.211327 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7bb7cf54_ef5f_41f6_b383_48b387842365.slice/crio-50fc992c07f5e46a017cf30bcf0a32ae01fb85ae9acc88ea02a16e5b8f88424d WatchSource:0}: Error finding container 50fc992c07f5e46a017cf30bcf0a32ae01fb85ae9acc88ea02a16e5b8f88424d: Status 404 returned error can't find the container with id 50fc992c07f5e46a017cf30bcf0a32ae01fb85ae9acc88ea02a16e5b8f88424d Jan 27 09:15:23 crc kubenswrapper[4985]: I0127 09:15:23.223332 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 09:15:24 crc kubenswrapper[4985]: I0127 09:15:24.229174 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7bb7cf54-ef5f-41f6-b383-48b387842365","Type":"ContainerStarted","Data":"6a64930be053f4740ae023cf563d68a8fc417713508136c09939541b90d48145"} Jan 27 09:15:24 crc kubenswrapper[4985]: I0127 09:15:24.229687 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7bb7cf54-ef5f-41f6-b383-48b387842365","Type":"ContainerStarted","Data":"5095f9f2579bc30191d9f57462b678288c95db301c72ed04c7d7513266e9158f"} Jan 27 09:15:24 crc kubenswrapper[4985]: I0127 09:15:24.229699 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7bb7cf54-ef5f-41f6-b383-48b387842365","Type":"ContainerStarted","Data":"50fc992c07f5e46a017cf30bcf0a32ae01fb85ae9acc88ea02a16e5b8f88424d"} Jan 27 09:15:24 crc kubenswrapper[4985]: I0127 09:15:24.262794 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-metadata-0" podStartSLOduration=2.26277004 podStartE2EDuration="2.26277004s" podCreationTimestamp="2026-01-27 09:15:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:15:24.252281603 +0000 UTC m=+1308.543376454" watchObservedRunningTime="2026-01-27 09:15:24.26277004 +0000 UTC m=+1308.553864901" Jan 27 09:15:24 crc kubenswrapper[4985]: I0127 09:15:24.464409 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8169c55-2bdc-44a1-b0ea-6ceef864c34e" path="/var/lib/kubelet/pods/e8169c55-2bdc-44a1-b0ea-6ceef864c34e/volumes" Jan 27 09:15:25 crc kubenswrapper[4985]: I0127 09:15:25.581878 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 27 09:15:26 crc kubenswrapper[4985]: I0127 09:15:26.944037 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="e8169c55-2bdc-44a1-b0ea-6ceef864c34e" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.205:8775/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 09:15:26 crc kubenswrapper[4985]: I0127 09:15:26.944095 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="e8169c55-2bdc-44a1-b0ea-6ceef864c34e" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.205:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 09:15:27 crc kubenswrapper[4985]: I0127 09:15:27.887897 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 09:15:27 crc kubenswrapper[4985]: I0127 09:15:27.888216 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 09:15:29 crc kubenswrapper[4985]: I0127 
09:15:29.933066 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 09:15:29 crc kubenswrapper[4985]: I0127 09:15:29.933424 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 09:15:30 crc kubenswrapper[4985]: I0127 09:15:30.582133 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 27 09:15:30 crc kubenswrapper[4985]: I0127 09:15:30.608454 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 27 09:15:30 crc kubenswrapper[4985]: I0127 09:15:30.947784 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="7e2b17df-10df-4a9d-b026-3bf7f1517776" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.215:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 09:15:30 crc kubenswrapper[4985]: I0127 09:15:30.947851 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="7e2b17df-10df-4a9d-b026-3bf7f1517776" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.215:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 09:15:31 crc kubenswrapper[4985]: I0127 09:15:31.322822 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 27 09:15:32 crc kubenswrapper[4985]: I0127 09:15:32.887706 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 27 09:15:32 crc kubenswrapper[4985]: I0127 09:15:32.887825 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 27 09:15:33 crc kubenswrapper[4985]: I0127 09:15:33.905728 4985 prober.go:107] "Probe failed" probeType="Startup" 
pod="openstack/nova-metadata-0" podUID="7bb7cf54-ef5f-41f6-b383-48b387842365" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.217:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 09:15:33 crc kubenswrapper[4985]: I0127 09:15:33.905724 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="7bb7cf54-ef5f-41f6-b383-48b387842365" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.217:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 09:15:37 crc kubenswrapper[4985]: I0127 09:15:37.314634 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 27 09:15:39 crc kubenswrapper[4985]: I0127 09:15:39.952568 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 27 09:15:39 crc kubenswrapper[4985]: I0127 09:15:39.954418 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 27 09:15:39 crc kubenswrapper[4985]: I0127 09:15:39.954769 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 27 09:15:39 crc kubenswrapper[4985]: I0127 09:15:39.954801 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 27 09:15:39 crc kubenswrapper[4985]: I0127 09:15:39.962154 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 27 09:15:39 crc kubenswrapper[4985]: I0127 09:15:39.964044 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 27 09:15:41 crc kubenswrapper[4985]: I0127 09:15:41.732123 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 09:15:41 crc kubenswrapper[4985]: I0127 09:15:41.732673 4985 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="92c780bc-e214-4b55-9c3e-2a09b962ac83" containerName="kube-state-metrics" containerID="cri-o://8f5ced909e7f8f926c31ebaa5d6bb345bfb35177b7c6748db74dfcbb60700961" gracePeriod=30 Jan 27 09:15:42 crc kubenswrapper[4985]: I0127 09:15:42.271182 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 27 09:15:42 crc kubenswrapper[4985]: I0127 09:15:42.360733 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfr28\" (UniqueName: \"kubernetes.io/projected/92c780bc-e214-4b55-9c3e-2a09b962ac83-kube-api-access-cfr28\") pod \"92c780bc-e214-4b55-9c3e-2a09b962ac83\" (UID: \"92c780bc-e214-4b55-9c3e-2a09b962ac83\") " Jan 27 09:15:42 crc kubenswrapper[4985]: I0127 09:15:42.389537 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92c780bc-e214-4b55-9c3e-2a09b962ac83-kube-api-access-cfr28" (OuterVolumeSpecName: "kube-api-access-cfr28") pod "92c780bc-e214-4b55-9c3e-2a09b962ac83" (UID: "92c780bc-e214-4b55-9c3e-2a09b962ac83"). InnerVolumeSpecName "kube-api-access-cfr28". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:15:42 crc kubenswrapper[4985]: I0127 09:15:42.412771 4985 generic.go:334] "Generic (PLEG): container finished" podID="92c780bc-e214-4b55-9c3e-2a09b962ac83" containerID="8f5ced909e7f8f926c31ebaa5d6bb345bfb35177b7c6748db74dfcbb60700961" exitCode=2 Jan 27 09:15:42 crc kubenswrapper[4985]: I0127 09:15:42.412824 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"92c780bc-e214-4b55-9c3e-2a09b962ac83","Type":"ContainerDied","Data":"8f5ced909e7f8f926c31ebaa5d6bb345bfb35177b7c6748db74dfcbb60700961"} Jan 27 09:15:42 crc kubenswrapper[4985]: I0127 09:15:42.412846 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"92c780bc-e214-4b55-9c3e-2a09b962ac83","Type":"ContainerDied","Data":"f38f14bacd59f039efbb7f85eddbe0923646ee9c98e9a9275433ea633025fb8e"} Jan 27 09:15:42 crc kubenswrapper[4985]: I0127 09:15:42.412865 4985 scope.go:117] "RemoveContainer" containerID="8f5ced909e7f8f926c31ebaa5d6bb345bfb35177b7c6748db74dfcbb60700961" Jan 27 09:15:42 crc kubenswrapper[4985]: I0127 09:15:42.413027 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 27 09:15:42 crc kubenswrapper[4985]: I0127 09:15:42.449690 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 09:15:42 crc kubenswrapper[4985]: I0127 09:15:42.467937 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfr28\" (UniqueName: \"kubernetes.io/projected/92c780bc-e214-4b55-9c3e-2a09b962ac83-kube-api-access-cfr28\") on node \"crc\" DevicePath \"\"" Jan 27 09:15:42 crc kubenswrapper[4985]: I0127 09:15:42.475957 4985 scope.go:117] "RemoveContainer" containerID="8f5ced909e7f8f926c31ebaa5d6bb345bfb35177b7c6748db74dfcbb60700961" Jan 27 09:15:42 crc kubenswrapper[4985]: I0127 09:15:42.476338 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 09:15:42 crc kubenswrapper[4985]: E0127 09:15:42.476401 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f5ced909e7f8f926c31ebaa5d6bb345bfb35177b7c6748db74dfcbb60700961\": container with ID starting with 8f5ced909e7f8f926c31ebaa5d6bb345bfb35177b7c6748db74dfcbb60700961 not found: ID does not exist" containerID="8f5ced909e7f8f926c31ebaa5d6bb345bfb35177b7c6748db74dfcbb60700961" Jan 27 09:15:42 crc kubenswrapper[4985]: I0127 09:15:42.476430 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f5ced909e7f8f926c31ebaa5d6bb345bfb35177b7c6748db74dfcbb60700961"} err="failed to get container status \"8f5ced909e7f8f926c31ebaa5d6bb345bfb35177b7c6748db74dfcbb60700961\": rpc error: code = NotFound desc = could not find container \"8f5ced909e7f8f926c31ebaa5d6bb345bfb35177b7c6748db74dfcbb60700961\": container with ID starting with 8f5ced909e7f8f926c31ebaa5d6bb345bfb35177b7c6748db74dfcbb60700961 not found: ID does not exist" Jan 27 09:15:42 crc kubenswrapper[4985]: I0127 09:15:42.477901 4985 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 09:15:42 crc kubenswrapper[4985]: E0127 09:15:42.478384 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92c780bc-e214-4b55-9c3e-2a09b962ac83" containerName="kube-state-metrics" Jan 27 09:15:42 crc kubenswrapper[4985]: I0127 09:15:42.478473 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="92c780bc-e214-4b55-9c3e-2a09b962ac83" containerName="kube-state-metrics" Jan 27 09:15:42 crc kubenswrapper[4985]: I0127 09:15:42.478790 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="92c780bc-e214-4b55-9c3e-2a09b962ac83" containerName="kube-state-metrics" Jan 27 09:15:42 crc kubenswrapper[4985]: I0127 09:15:42.480106 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 27 09:15:42 crc kubenswrapper[4985]: I0127 09:15:42.485004 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 27 09:15:42 crc kubenswrapper[4985]: I0127 09:15:42.485078 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 27 09:15:42 crc kubenswrapper[4985]: I0127 09:15:42.492702 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 09:15:42 crc kubenswrapper[4985]: I0127 09:15:42.569661 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/14d915c5-e7d5-4925-9f52-faf1b7f03716-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"14d915c5-e7d5-4925-9f52-faf1b7f03716\") " pod="openstack/kube-state-metrics-0" Jan 27 09:15:42 crc kubenswrapper[4985]: I0127 09:15:42.569859 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/14d915c5-e7d5-4925-9f52-faf1b7f03716-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"14d915c5-e7d5-4925-9f52-faf1b7f03716\") " pod="openstack/kube-state-metrics-0" Jan 27 09:15:42 crc kubenswrapper[4985]: I0127 09:15:42.569963 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rb279\" (UniqueName: \"kubernetes.io/projected/14d915c5-e7d5-4925-9f52-faf1b7f03716-kube-api-access-rb279\") pod \"kube-state-metrics-0\" (UID: \"14d915c5-e7d5-4925-9f52-faf1b7f03716\") " pod="openstack/kube-state-metrics-0" Jan 27 09:15:42 crc kubenswrapper[4985]: I0127 09:15:42.570056 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/14d915c5-e7d5-4925-9f52-faf1b7f03716-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"14d915c5-e7d5-4925-9f52-faf1b7f03716\") " pod="openstack/kube-state-metrics-0" Jan 27 09:15:42 crc kubenswrapper[4985]: I0127 09:15:42.671818 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/14d915c5-e7d5-4925-9f52-faf1b7f03716-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"14d915c5-e7d5-4925-9f52-faf1b7f03716\") " pod="openstack/kube-state-metrics-0" Jan 27 09:15:42 crc kubenswrapper[4985]: I0127 09:15:42.671923 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14d915c5-e7d5-4925-9f52-faf1b7f03716-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"14d915c5-e7d5-4925-9f52-faf1b7f03716\") " pod="openstack/kube-state-metrics-0" Jan 27 09:15:42 crc kubenswrapper[4985]: I0127 09:15:42.671979 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rb279\" (UniqueName: 
\"kubernetes.io/projected/14d915c5-e7d5-4925-9f52-faf1b7f03716-kube-api-access-rb279\") pod \"kube-state-metrics-0\" (UID: \"14d915c5-e7d5-4925-9f52-faf1b7f03716\") " pod="openstack/kube-state-metrics-0" Jan 27 09:15:42 crc kubenswrapper[4985]: I0127 09:15:42.672053 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/14d915c5-e7d5-4925-9f52-faf1b7f03716-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"14d915c5-e7d5-4925-9f52-faf1b7f03716\") " pod="openstack/kube-state-metrics-0" Jan 27 09:15:42 crc kubenswrapper[4985]: I0127 09:15:42.676432 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/14d915c5-e7d5-4925-9f52-faf1b7f03716-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"14d915c5-e7d5-4925-9f52-faf1b7f03716\") " pod="openstack/kube-state-metrics-0" Jan 27 09:15:42 crc kubenswrapper[4985]: I0127 09:15:42.677645 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/14d915c5-e7d5-4925-9f52-faf1b7f03716-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"14d915c5-e7d5-4925-9f52-faf1b7f03716\") " pod="openstack/kube-state-metrics-0" Jan 27 09:15:42 crc kubenswrapper[4985]: I0127 09:15:42.678258 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14d915c5-e7d5-4925-9f52-faf1b7f03716-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"14d915c5-e7d5-4925-9f52-faf1b7f03716\") " pod="openstack/kube-state-metrics-0" Jan 27 09:15:42 crc kubenswrapper[4985]: I0127 09:15:42.699112 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rb279\" (UniqueName: 
\"kubernetes.io/projected/14d915c5-e7d5-4925-9f52-faf1b7f03716-kube-api-access-rb279\") pod \"kube-state-metrics-0\" (UID: \"14d915c5-e7d5-4925-9f52-faf1b7f03716\") " pod="openstack/kube-state-metrics-0" Jan 27 09:15:42 crc kubenswrapper[4985]: I0127 09:15:42.811905 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 27 09:15:42 crc kubenswrapper[4985]: I0127 09:15:42.917687 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 27 09:15:42 crc kubenswrapper[4985]: I0127 09:15:42.917854 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 27 09:15:42 crc kubenswrapper[4985]: I0127 09:15:42.939137 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 27 09:15:42 crc kubenswrapper[4985]: I0127 09:15:42.947530 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 27 09:15:43 crc kubenswrapper[4985]: I0127 09:15:43.309758 4985 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 09:15:43 crc kubenswrapper[4985]: I0127 09:15:43.312274 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 09:15:43 crc kubenswrapper[4985]: I0127 09:15:43.439722 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"14d915c5-e7d5-4925-9f52-faf1b7f03716","Type":"ContainerStarted","Data":"528c84732a487719f031f97482e7f807316d886134e286301b90442e47e36c43"} Jan 27 09:15:43 crc kubenswrapper[4985]: I0127 09:15:43.706572 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 09:15:43 crc kubenswrapper[4985]: I0127 09:15:43.706846 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="7a0afade-f6ae-47f6-9977-9f3f0201fd4c" containerName="ceilometer-central-agent" containerID="cri-o://6a14b4381dc228dea9f8f98b33354e91f937940018760fd6415d7dfc231561bb" gracePeriod=30 Jan 27 09:15:43 crc kubenswrapper[4985]: I0127 09:15:43.706901 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7a0afade-f6ae-47f6-9977-9f3f0201fd4c" containerName="sg-core" containerID="cri-o://0029b4e35cbe9caa5c8447771a7dcc18a34aa34a0ff560129c887e77bc061407" gracePeriod=30 Jan 27 09:15:43 crc kubenswrapper[4985]: I0127 09:15:43.706929 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7a0afade-f6ae-47f6-9977-9f3f0201fd4c" containerName="ceilometer-notification-agent" containerID="cri-o://ad818b5377de9e76bbf8b14e4afd4fef17dff12debbdd626be2c17ef8927a968" gracePeriod=30 Jan 27 09:15:43 crc kubenswrapper[4985]: I0127 09:15:43.706940 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7a0afade-f6ae-47f6-9977-9f3f0201fd4c" containerName="proxy-httpd" containerID="cri-o://07b7c34e9eb829e92d4806ac184bdc62e7a8dd935693ffb25bffdf51ae1499c8" gracePeriod=30 Jan 27 09:15:44 crc kubenswrapper[4985]: I0127 09:15:44.331111 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 27 09:15:44 crc kubenswrapper[4985]: I0127 09:15:44.331415 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="e004edc1-a270-47e3-a299-3f798588eb34" containerName="nova-cell0-conductor-conductor" containerID="cri-o://f76d4cab022328560c37b13e931f450bb87fd57d67e2ba446fb15ea0f2b44e6f" gracePeriod=30 Jan 27 09:15:44 crc kubenswrapper[4985]: I0127 09:15:44.430769 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 09:15:44 crc kubenswrapper[4985]: I0127 09:15:44.430989 4985 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="dcf1c027-3aed-4213-b1ce-2f9fcab702aa" containerName="nova-scheduler-scheduler" containerID="cri-o://589cb8d0328c30493a415d367f844167c6192ded6bd85f36ec938643fc6fe7e4" gracePeriod=30 Jan 27 09:15:44 crc kubenswrapper[4985]: I0127 09:15:44.442178 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 09:15:44 crc kubenswrapper[4985]: I0127 09:15:44.442469 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="3e76b26e-1299-417d-8f51-f4c1bef4da0c" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://44a6bb5270191cf8c9e47ea67ab69486f31cd3774fe742386777c2616ab19764" gracePeriod=30 Jan 27 09:15:44 crc kubenswrapper[4985]: I0127 09:15:44.458087 4985 generic.go:334] "Generic (PLEG): container finished" podID="7a0afade-f6ae-47f6-9977-9f3f0201fd4c" containerID="07b7c34e9eb829e92d4806ac184bdc62e7a8dd935693ffb25bffdf51ae1499c8" exitCode=0 Jan 27 09:15:44 crc kubenswrapper[4985]: I0127 09:15:44.458127 4985 generic.go:334] "Generic (PLEG): container finished" podID="7a0afade-f6ae-47f6-9977-9f3f0201fd4c" containerID="0029b4e35cbe9caa5c8447771a7dcc18a34aa34a0ff560129c887e77bc061407" exitCode=2 Jan 27 09:15:44 crc kubenswrapper[4985]: I0127 09:15:44.458140 4985 generic.go:334] "Generic (PLEG): container finished" podID="7a0afade-f6ae-47f6-9977-9f3f0201fd4c" containerID="6a14b4381dc228dea9f8f98b33354e91f937940018760fd6415d7dfc231561bb" exitCode=0 Jan 27 09:15:44 crc kubenswrapper[4985]: I0127 09:15:44.468535 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92c780bc-e214-4b55-9c3e-2a09b962ac83" path="/var/lib/kubelet/pods/92c780bc-e214-4b55-9c3e-2a09b962ac83/volumes" Jan 27 09:15:44 crc kubenswrapper[4985]: I0127 09:15:44.470779 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/kube-state-metrics-0" Jan 27 09:15:44 crc kubenswrapper[4985]: I0127 09:15:44.470806 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7a0afade-f6ae-47f6-9977-9f3f0201fd4c","Type":"ContainerDied","Data":"07b7c34e9eb829e92d4806ac184bdc62e7a8dd935693ffb25bffdf51ae1499c8"} Jan 27 09:15:44 crc kubenswrapper[4985]: I0127 09:15:44.470827 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 09:15:44 crc kubenswrapper[4985]: I0127 09:15:44.470846 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7a0afade-f6ae-47f6-9977-9f3f0201fd4c","Type":"ContainerDied","Data":"0029b4e35cbe9caa5c8447771a7dcc18a34aa34a0ff560129c887e77bc061407"} Jan 27 09:15:44 crc kubenswrapper[4985]: I0127 09:15:44.470860 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7a0afade-f6ae-47f6-9977-9f3f0201fd4c","Type":"ContainerDied","Data":"6a14b4381dc228dea9f8f98b33354e91f937940018760fd6415d7dfc231561bb"} Jan 27 09:15:44 crc kubenswrapper[4985]: I0127 09:15:44.470873 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"14d915c5-e7d5-4925-9f52-faf1b7f03716","Type":"ContainerStarted","Data":"1e11efe4b1e45a10c341a16305e79678cea202335693b87a9b5f598680275a76"} Jan 27 09:15:44 crc kubenswrapper[4985]: I0127 09:15:44.471086 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="7e2b17df-10df-4a9d-b026-3bf7f1517776" containerName="nova-api-log" containerID="cri-o://17ec1d244b3e94b9ebdf61b9b8b73fcdc29f8f5dbdf33b39aa94f9dd834e1f2a" gracePeriod=30 Jan 27 09:15:44 crc kubenswrapper[4985]: I0127 09:15:44.471242 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="7e2b17df-10df-4a9d-b026-3bf7f1517776" containerName="nova-api-api" 
containerID="cri-o://286ec3c817da6030269a7145b36a679536d740ebdf5c890add4145d061780a7d" gracePeriod=30 Jan 27 09:15:44 crc kubenswrapper[4985]: I0127 09:15:44.482301 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 09:15:44 crc kubenswrapper[4985]: I0127 09:15:44.489703 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.042650716 podStartE2EDuration="2.489685307s" podCreationTimestamp="2026-01-27 09:15:42 +0000 UTC" firstStartedPulling="2026-01-27 09:15:43.309382666 +0000 UTC m=+1327.600477507" lastFinishedPulling="2026-01-27 09:15:43.756417257 +0000 UTC m=+1328.047512098" observedRunningTime="2026-01-27 09:15:44.48432686 +0000 UTC m=+1328.775421701" watchObservedRunningTime="2026-01-27 09:15:44.489685307 +0000 UTC m=+1328.780780148" Jan 27 09:15:44 crc kubenswrapper[4985]: I0127 09:15:44.563906 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 27 09:15:44 crc kubenswrapper[4985]: I0127 09:15:44.564270 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="116ef18b-261c-457e-a687-782db009b9de" containerName="nova-cell1-conductor-conductor" containerID="cri-o://d98f7170e8ba3c4bfdcae8597c61d70d73bca409cc420c41664686ac83d6e6b4" gracePeriod=30 Jan 27 09:15:45 crc kubenswrapper[4985]: I0127 09:15:45.312961 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:15:45 crc kubenswrapper[4985]: I0127 09:15:45.432895 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e76b26e-1299-417d-8f51-f4c1bef4da0c-config-data\") pod \"3e76b26e-1299-417d-8f51-f4c1bef4da0c\" (UID: \"3e76b26e-1299-417d-8f51-f4c1bef4da0c\") " Jan 27 09:15:45 crc kubenswrapper[4985]: I0127 09:15:45.433541 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e76b26e-1299-417d-8f51-f4c1bef4da0c-combined-ca-bundle\") pod \"3e76b26e-1299-417d-8f51-f4c1bef4da0c\" (UID: \"3e76b26e-1299-417d-8f51-f4c1bef4da0c\") " Jan 27 09:15:45 crc kubenswrapper[4985]: I0127 09:15:45.433605 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e76b26e-1299-417d-8f51-f4c1bef4da0c-nova-novncproxy-tls-certs\") pod \"3e76b26e-1299-417d-8f51-f4c1bef4da0c\" (UID: \"3e76b26e-1299-417d-8f51-f4c1bef4da0c\") " Jan 27 09:15:45 crc kubenswrapper[4985]: I0127 09:15:45.433732 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e76b26e-1299-417d-8f51-f4c1bef4da0c-vencrypt-tls-certs\") pod \"3e76b26e-1299-417d-8f51-f4c1bef4da0c\" (UID: \"3e76b26e-1299-417d-8f51-f4c1bef4da0c\") " Jan 27 09:15:45 crc kubenswrapper[4985]: I0127 09:15:45.433838 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qmtcm\" (UniqueName: \"kubernetes.io/projected/3e76b26e-1299-417d-8f51-f4c1bef4da0c-kube-api-access-qmtcm\") pod \"3e76b26e-1299-417d-8f51-f4c1bef4da0c\" (UID: \"3e76b26e-1299-417d-8f51-f4c1bef4da0c\") " Jan 27 09:15:45 crc kubenswrapper[4985]: I0127 09:15:45.440350 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/projected/3e76b26e-1299-417d-8f51-f4c1bef4da0c-kube-api-access-qmtcm" (OuterVolumeSpecName: "kube-api-access-qmtcm") pod "3e76b26e-1299-417d-8f51-f4c1bef4da0c" (UID: "3e76b26e-1299-417d-8f51-f4c1bef4da0c"). InnerVolumeSpecName "kube-api-access-qmtcm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:15:45 crc kubenswrapper[4985]: I0127 09:15:45.467667 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e76b26e-1299-417d-8f51-f4c1bef4da0c-config-data" (OuterVolumeSpecName: "config-data") pod "3e76b26e-1299-417d-8f51-f4c1bef4da0c" (UID: "3e76b26e-1299-417d-8f51-f4c1bef4da0c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:15:45 crc kubenswrapper[4985]: I0127 09:15:45.468064 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e76b26e-1299-417d-8f51-f4c1bef4da0c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3e76b26e-1299-417d-8f51-f4c1bef4da0c" (UID: "3e76b26e-1299-417d-8f51-f4c1bef4da0c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:15:45 crc kubenswrapper[4985]: I0127 09:15:45.474637 4985 generic.go:334] "Generic (PLEG): container finished" podID="7e2b17df-10df-4a9d-b026-3bf7f1517776" containerID="17ec1d244b3e94b9ebdf61b9b8b73fcdc29f8f5dbdf33b39aa94f9dd834e1f2a" exitCode=143 Jan 27 09:15:45 crc kubenswrapper[4985]: I0127 09:15:45.474727 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7e2b17df-10df-4a9d-b026-3bf7f1517776","Type":"ContainerDied","Data":"17ec1d244b3e94b9ebdf61b9b8b73fcdc29f8f5dbdf33b39aa94f9dd834e1f2a"} Jan 27 09:15:45 crc kubenswrapper[4985]: I0127 09:15:45.478740 4985 generic.go:334] "Generic (PLEG): container finished" podID="3e76b26e-1299-417d-8f51-f4c1bef4da0c" containerID="44a6bb5270191cf8c9e47ea67ab69486f31cd3774fe742386777c2616ab19764" exitCode=0 Jan 27 09:15:45 crc kubenswrapper[4985]: I0127 09:15:45.480672 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:15:45 crc kubenswrapper[4985]: I0127 09:15:45.481379 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3e76b26e-1299-417d-8f51-f4c1bef4da0c","Type":"ContainerDied","Data":"44a6bb5270191cf8c9e47ea67ab69486f31cd3774fe742386777c2616ab19764"} Jan 27 09:15:45 crc kubenswrapper[4985]: I0127 09:15:45.481413 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3e76b26e-1299-417d-8f51-f4c1bef4da0c","Type":"ContainerDied","Data":"68dd8a8e84bde5f83244ee021ac905ddba9a04bc8541f8b6005820bc1ac8e60d"} Jan 27 09:15:45 crc kubenswrapper[4985]: I0127 09:15:45.481433 4985 scope.go:117] "RemoveContainer" containerID="44a6bb5270191cf8c9e47ea67ab69486f31cd3774fe742386777c2616ab19764" Jan 27 09:15:45 crc kubenswrapper[4985]: I0127 09:15:45.481781 4985 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/nova-metadata-0" podUID="7bb7cf54-ef5f-41f6-b383-48b387842365" containerName="nova-metadata-log" containerID="cri-o://5095f9f2579bc30191d9f57462b678288c95db301c72ed04c7d7513266e9158f" gracePeriod=30 Jan 27 09:15:45 crc kubenswrapper[4985]: I0127 09:15:45.482891 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="7bb7cf54-ef5f-41f6-b383-48b387842365" containerName="nova-metadata-metadata" containerID="cri-o://6a64930be053f4740ae023cf563d68a8fc417713508136c09939541b90d48145" gracePeriod=30 Jan 27 09:15:45 crc kubenswrapper[4985]: I0127 09:15:45.500343 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e76b26e-1299-417d-8f51-f4c1bef4da0c-nova-novncproxy-tls-certs" (OuterVolumeSpecName: "nova-novncproxy-tls-certs") pod "3e76b26e-1299-417d-8f51-f4c1bef4da0c" (UID: "3e76b26e-1299-417d-8f51-f4c1bef4da0c"). InnerVolumeSpecName "nova-novncproxy-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:15:45 crc kubenswrapper[4985]: I0127 09:15:45.517199 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e76b26e-1299-417d-8f51-f4c1bef4da0c-vencrypt-tls-certs" (OuterVolumeSpecName: "vencrypt-tls-certs") pod "3e76b26e-1299-417d-8f51-f4c1bef4da0c" (UID: "3e76b26e-1299-417d-8f51-f4c1bef4da0c"). InnerVolumeSpecName "vencrypt-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:15:45 crc kubenswrapper[4985]: I0127 09:15:45.536027 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e76b26e-1299-417d-8f51-f4c1bef4da0c-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 09:15:45 crc kubenswrapper[4985]: I0127 09:15:45.536264 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e76b26e-1299-417d-8f51-f4c1bef4da0c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:15:45 crc kubenswrapper[4985]: I0127 09:15:45.536331 4985 reconciler_common.go:293] "Volume detached for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e76b26e-1299-417d-8f51-f4c1bef4da0c-nova-novncproxy-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 09:15:45 crc kubenswrapper[4985]: I0127 09:15:45.536402 4985 reconciler_common.go:293] "Volume detached for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e76b26e-1299-417d-8f51-f4c1bef4da0c-vencrypt-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 09:15:45 crc kubenswrapper[4985]: I0127 09:15:45.536463 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qmtcm\" (UniqueName: \"kubernetes.io/projected/3e76b26e-1299-417d-8f51-f4c1bef4da0c-kube-api-access-qmtcm\") on node \"crc\" DevicePath \"\"" Jan 27 09:15:45 crc kubenswrapper[4985]: I0127 09:15:45.559214 4985 scope.go:117] "RemoveContainer" containerID="44a6bb5270191cf8c9e47ea67ab69486f31cd3774fe742386777c2616ab19764" Jan 27 09:15:45 crc kubenswrapper[4985]: E0127 09:15:45.559671 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44a6bb5270191cf8c9e47ea67ab69486f31cd3774fe742386777c2616ab19764\": container with ID starting with 44a6bb5270191cf8c9e47ea67ab69486f31cd3774fe742386777c2616ab19764 not found: ID does not exist" 
containerID="44a6bb5270191cf8c9e47ea67ab69486f31cd3774fe742386777c2616ab19764" Jan 27 09:15:45 crc kubenswrapper[4985]: I0127 09:15:45.559713 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44a6bb5270191cf8c9e47ea67ab69486f31cd3774fe742386777c2616ab19764"} err="failed to get container status \"44a6bb5270191cf8c9e47ea67ab69486f31cd3774fe742386777c2616ab19764\": rpc error: code = NotFound desc = could not find container \"44a6bb5270191cf8c9e47ea67ab69486f31cd3774fe742386777c2616ab19764\": container with ID starting with 44a6bb5270191cf8c9e47ea67ab69486f31cd3774fe742386777c2616ab19764 not found: ID does not exist" Jan 27 09:15:45 crc kubenswrapper[4985]: E0127 09:15:45.585075 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="589cb8d0328c30493a415d367f844167c6192ded6bd85f36ec938643fc6fe7e4" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 27 09:15:45 crc kubenswrapper[4985]: E0127 09:15:45.586790 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="589cb8d0328c30493a415d367f844167c6192ded6bd85f36ec938643fc6fe7e4" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 27 09:15:45 crc kubenswrapper[4985]: E0127 09:15:45.590150 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="589cb8d0328c30493a415d367f844167c6192ded6bd85f36ec938643fc6fe7e4" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 27 09:15:45 crc kubenswrapper[4985]: E0127 09:15:45.590208 4985 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: 
cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="dcf1c027-3aed-4213-b1ce-2f9fcab702aa" containerName="nova-scheduler-scheduler" Jan 27 09:15:45 crc kubenswrapper[4985]: I0127 09:15:45.814283 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 09:15:45 crc kubenswrapper[4985]: I0127 09:15:45.825440 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 09:15:45 crc kubenswrapper[4985]: I0127 09:15:45.836037 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 09:15:45 crc kubenswrapper[4985]: E0127 09:15:45.836548 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e76b26e-1299-417d-8f51-f4c1bef4da0c" containerName="nova-cell1-novncproxy-novncproxy" Jan 27 09:15:45 crc kubenswrapper[4985]: I0127 09:15:45.836572 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e76b26e-1299-417d-8f51-f4c1bef4da0c" containerName="nova-cell1-novncproxy-novncproxy" Jan 27 09:15:45 crc kubenswrapper[4985]: I0127 09:15:45.836820 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e76b26e-1299-417d-8f51-f4c1bef4da0c" containerName="nova-cell1-novncproxy-novncproxy" Jan 27 09:15:45 crc kubenswrapper[4985]: I0127 09:15:45.838024 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:15:45 crc kubenswrapper[4985]: I0127 09:15:45.840832 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 27 09:15:45 crc kubenswrapper[4985]: I0127 09:15:45.841145 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 27 09:15:45 crc kubenswrapper[4985]: I0127 09:15:45.842738 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 27 09:15:45 crc kubenswrapper[4985]: I0127 09:15:45.850492 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 09:15:45 crc kubenswrapper[4985]: I0127 09:15:45.943319 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac2e67dd-037d-4a4e-bd63-18c0f2a46096-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ac2e67dd-037d-4a4e-bd63-18c0f2a46096\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:15:45 crc kubenswrapper[4985]: I0127 09:15:45.943864 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac2e67dd-037d-4a4e-bd63-18c0f2a46096-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ac2e67dd-037d-4a4e-bd63-18c0f2a46096\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:15:45 crc kubenswrapper[4985]: I0127 09:15:45.944027 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac2e67dd-037d-4a4e-bd63-18c0f2a46096-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ac2e67dd-037d-4a4e-bd63-18c0f2a46096\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:15:45 crc kubenswrapper[4985]: 
I0127 09:15:45.944270 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac2e67dd-037d-4a4e-bd63-18c0f2a46096-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ac2e67dd-037d-4a4e-bd63-18c0f2a46096\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:15:45 crc kubenswrapper[4985]: I0127 09:15:45.944359 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7772x\" (UniqueName: \"kubernetes.io/projected/ac2e67dd-037d-4a4e-bd63-18c0f2a46096-kube-api-access-7772x\") pod \"nova-cell1-novncproxy-0\" (UID: \"ac2e67dd-037d-4a4e-bd63-18c0f2a46096\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:15:46 crc kubenswrapper[4985]: I0127 09:15:46.047674 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac2e67dd-037d-4a4e-bd63-18c0f2a46096-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ac2e67dd-037d-4a4e-bd63-18c0f2a46096\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:15:46 crc kubenswrapper[4985]: I0127 09:15:46.047810 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac2e67dd-037d-4a4e-bd63-18c0f2a46096-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ac2e67dd-037d-4a4e-bd63-18c0f2a46096\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:15:46 crc kubenswrapper[4985]: I0127 09:15:46.047875 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac2e67dd-037d-4a4e-bd63-18c0f2a46096-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ac2e67dd-037d-4a4e-bd63-18c0f2a46096\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:15:46 crc kubenswrapper[4985]: I0127 09:15:46.047914 4985 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac2e67dd-037d-4a4e-bd63-18c0f2a46096-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ac2e67dd-037d-4a4e-bd63-18c0f2a46096\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:15:46 crc kubenswrapper[4985]: I0127 09:15:46.047949 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7772x\" (UniqueName: \"kubernetes.io/projected/ac2e67dd-037d-4a4e-bd63-18c0f2a46096-kube-api-access-7772x\") pod \"nova-cell1-novncproxy-0\" (UID: \"ac2e67dd-037d-4a4e-bd63-18c0f2a46096\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:15:46 crc kubenswrapper[4985]: I0127 09:15:46.052483 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac2e67dd-037d-4a4e-bd63-18c0f2a46096-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ac2e67dd-037d-4a4e-bd63-18c0f2a46096\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:15:46 crc kubenswrapper[4985]: I0127 09:15:46.052921 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac2e67dd-037d-4a4e-bd63-18c0f2a46096-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ac2e67dd-037d-4a4e-bd63-18c0f2a46096\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:15:46 crc kubenswrapper[4985]: I0127 09:15:46.053437 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac2e67dd-037d-4a4e-bd63-18c0f2a46096-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ac2e67dd-037d-4a4e-bd63-18c0f2a46096\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:15:46 crc kubenswrapper[4985]: I0127 09:15:46.054368 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac2e67dd-037d-4a4e-bd63-18c0f2a46096-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ac2e67dd-037d-4a4e-bd63-18c0f2a46096\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:15:46 crc kubenswrapper[4985]: I0127 09:15:46.071281 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7772x\" (UniqueName: \"kubernetes.io/projected/ac2e67dd-037d-4a4e-bd63-18c0f2a46096-kube-api-access-7772x\") pod \"nova-cell1-novncproxy-0\" (UID: \"ac2e67dd-037d-4a4e-bd63-18c0f2a46096\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:15:46 crc kubenswrapper[4985]: I0127 09:15:46.155340 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:15:46 crc kubenswrapper[4985]: I0127 09:15:46.465454 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e76b26e-1299-417d-8f51-f4c1bef4da0c" path="/var/lib/kubelet/pods/3e76b26e-1299-417d-8f51-f4c1bef4da0c/volumes" Jan 27 09:15:46 crc kubenswrapper[4985]: E0127 09:15:46.471273 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f76d4cab022328560c37b13e931f450bb87fd57d67e2ba446fb15ea0f2b44e6f" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 27 09:15:46 crc kubenswrapper[4985]: E0127 09:15:46.476219 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f76d4cab022328560c37b13e931f450bb87fd57d67e2ba446fb15ea0f2b44e6f" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 27 09:15:46 crc kubenswrapper[4985]: E0127 09:15:46.478289 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command 
error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f76d4cab022328560c37b13e931f450bb87fd57d67e2ba446fb15ea0f2b44e6f" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 27 09:15:46 crc kubenswrapper[4985]: E0127 09:15:46.478363 4985 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="e004edc1-a270-47e3-a299-3f798588eb34" containerName="nova-cell0-conductor-conductor" Jan 27 09:15:46 crc kubenswrapper[4985]: I0127 09:15:46.507282 4985 generic.go:334] "Generic (PLEG): container finished" podID="7bb7cf54-ef5f-41f6-b383-48b387842365" containerID="5095f9f2579bc30191d9f57462b678288c95db301c72ed04c7d7513266e9158f" exitCode=143 Jan 27 09:15:46 crc kubenswrapper[4985]: I0127 09:15:46.507344 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7bb7cf54-ef5f-41f6-b383-48b387842365","Type":"ContainerDied","Data":"5095f9f2579bc30191d9f57462b678288c95db301c72ed04c7d7513266e9158f"} Jan 27 09:15:46 crc kubenswrapper[4985]: I0127 09:15:46.655586 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 09:15:46 crc kubenswrapper[4985]: W0127 09:15:46.656763 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac2e67dd_037d_4a4e_bd63_18c0f2a46096.slice/crio-595fcdd02c5084ce5ddc49264e4957fdfa63a5888184740afd98e2ed134a3cc3 WatchSource:0}: Error finding container 595fcdd02c5084ce5ddc49264e4957fdfa63a5888184740afd98e2ed134a3cc3: Status 404 returned error can't find the container with id 595fcdd02c5084ce5ddc49264e4957fdfa63a5888184740afd98e2ed134a3cc3 Jan 27 09:15:46 crc kubenswrapper[4985]: I0127 09:15:46.791707 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 09:15:46 crc kubenswrapper[4985]: I0127 09:15:46.874811 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7a0afade-f6ae-47f6-9977-9f3f0201fd4c-log-httpd\") pod \"7a0afade-f6ae-47f6-9977-9f3f0201fd4c\" (UID: \"7a0afade-f6ae-47f6-9977-9f3f0201fd4c\") " Jan 27 09:15:46 crc kubenswrapper[4985]: I0127 09:15:46.874903 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7a0afade-f6ae-47f6-9977-9f3f0201fd4c-run-httpd\") pod \"7a0afade-f6ae-47f6-9977-9f3f0201fd4c\" (UID: \"7a0afade-f6ae-47f6-9977-9f3f0201fd4c\") " Jan 27 09:15:46 crc kubenswrapper[4985]: I0127 09:15:46.874963 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a0afade-f6ae-47f6-9977-9f3f0201fd4c-scripts\") pod \"7a0afade-f6ae-47f6-9977-9f3f0201fd4c\" (UID: \"7a0afade-f6ae-47f6-9977-9f3f0201fd4c\") " Jan 27 09:15:46 crc kubenswrapper[4985]: I0127 09:15:46.875016 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a0afade-f6ae-47f6-9977-9f3f0201fd4c-config-data\") pod \"7a0afade-f6ae-47f6-9977-9f3f0201fd4c\" (UID: \"7a0afade-f6ae-47f6-9977-9f3f0201fd4c\") " Jan 27 09:15:46 crc kubenswrapper[4985]: I0127 09:15:46.875072 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a0afade-f6ae-47f6-9977-9f3f0201fd4c-combined-ca-bundle\") pod \"7a0afade-f6ae-47f6-9977-9f3f0201fd4c\" (UID: \"7a0afade-f6ae-47f6-9977-9f3f0201fd4c\") " Jan 27 09:15:46 crc kubenswrapper[4985]: I0127 09:15:46.875117 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/7a0afade-f6ae-47f6-9977-9f3f0201fd4c-sg-core-conf-yaml\") pod \"7a0afade-f6ae-47f6-9977-9f3f0201fd4c\" (UID: \"7a0afade-f6ae-47f6-9977-9f3f0201fd4c\") " Jan 27 09:15:46 crc kubenswrapper[4985]: I0127 09:15:46.875140 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tss2s\" (UniqueName: \"kubernetes.io/projected/7a0afade-f6ae-47f6-9977-9f3f0201fd4c-kube-api-access-tss2s\") pod \"7a0afade-f6ae-47f6-9977-9f3f0201fd4c\" (UID: \"7a0afade-f6ae-47f6-9977-9f3f0201fd4c\") " Jan 27 09:15:46 crc kubenswrapper[4985]: I0127 09:15:46.875848 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a0afade-f6ae-47f6-9977-9f3f0201fd4c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "7a0afade-f6ae-47f6-9977-9f3f0201fd4c" (UID: "7a0afade-f6ae-47f6-9977-9f3f0201fd4c"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:15:46 crc kubenswrapper[4985]: I0127 09:15:46.876287 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a0afade-f6ae-47f6-9977-9f3f0201fd4c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "7a0afade-f6ae-47f6-9977-9f3f0201fd4c" (UID: "7a0afade-f6ae-47f6-9977-9f3f0201fd4c"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:15:46 crc kubenswrapper[4985]: I0127 09:15:46.888347 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a0afade-f6ae-47f6-9977-9f3f0201fd4c-scripts" (OuterVolumeSpecName: "scripts") pod "7a0afade-f6ae-47f6-9977-9f3f0201fd4c" (UID: "7a0afade-f6ae-47f6-9977-9f3f0201fd4c"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:15:46 crc kubenswrapper[4985]: I0127 09:15:46.890009 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a0afade-f6ae-47f6-9977-9f3f0201fd4c-kube-api-access-tss2s" (OuterVolumeSpecName: "kube-api-access-tss2s") pod "7a0afade-f6ae-47f6-9977-9f3f0201fd4c" (UID: "7a0afade-f6ae-47f6-9977-9f3f0201fd4c"). InnerVolumeSpecName "kube-api-access-tss2s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:15:46 crc kubenswrapper[4985]: I0127 09:15:46.930336 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a0afade-f6ae-47f6-9977-9f3f0201fd4c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "7a0afade-f6ae-47f6-9977-9f3f0201fd4c" (UID: "7a0afade-f6ae-47f6-9977-9f3f0201fd4c"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:15:46 crc kubenswrapper[4985]: I0127 09:15:46.978243 4985 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7a0afade-f6ae-47f6-9977-9f3f0201fd4c-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 09:15:46 crc kubenswrapper[4985]: I0127 09:15:46.978306 4985 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a0afade-f6ae-47f6-9977-9f3f0201fd4c-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 09:15:46 crc kubenswrapper[4985]: I0127 09:15:46.978403 4985 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7a0afade-f6ae-47f6-9977-9f3f0201fd4c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 09:15:46 crc kubenswrapper[4985]: I0127 09:15:46.978425 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tss2s\" (UniqueName: \"kubernetes.io/projected/7a0afade-f6ae-47f6-9977-9f3f0201fd4c-kube-api-access-tss2s\") on node 
\"crc\" DevicePath \"\"" Jan 27 09:15:46 crc kubenswrapper[4985]: I0127 09:15:46.978440 4985 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7a0afade-f6ae-47f6-9977-9f3f0201fd4c-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 09:15:46 crc kubenswrapper[4985]: I0127 09:15:46.989112 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a0afade-f6ae-47f6-9977-9f3f0201fd4c-config-data" (OuterVolumeSpecName: "config-data") pod "7a0afade-f6ae-47f6-9977-9f3f0201fd4c" (UID: "7a0afade-f6ae-47f6-9977-9f3f0201fd4c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:15:46 crc kubenswrapper[4985]: I0127 09:15:46.994661 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a0afade-f6ae-47f6-9977-9f3f0201fd4c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7a0afade-f6ae-47f6-9977-9f3f0201fd4c" (UID: "7a0afade-f6ae-47f6-9977-9f3f0201fd4c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:15:47 crc kubenswrapper[4985]: I0127 09:15:47.080058 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a0afade-f6ae-47f6-9977-9f3f0201fd4c-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 09:15:47 crc kubenswrapper[4985]: I0127 09:15:47.080096 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a0afade-f6ae-47f6-9977-9f3f0201fd4c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:15:47 crc kubenswrapper[4985]: I0127 09:15:47.520227 4985 generic.go:334] "Generic (PLEG): container finished" podID="7a0afade-f6ae-47f6-9977-9f3f0201fd4c" containerID="ad818b5377de9e76bbf8b14e4afd4fef17dff12debbdd626be2c17ef8927a968" exitCode=0 Jan 27 09:15:47 crc kubenswrapper[4985]: I0127 09:15:47.520352 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7a0afade-f6ae-47f6-9977-9f3f0201fd4c","Type":"ContainerDied","Data":"ad818b5377de9e76bbf8b14e4afd4fef17dff12debbdd626be2c17ef8927a968"} Jan 27 09:15:47 crc kubenswrapper[4985]: I0127 09:15:47.520758 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7a0afade-f6ae-47f6-9977-9f3f0201fd4c","Type":"ContainerDied","Data":"17b6706407c7f288621d28ad7331f68bab8428de8d405ea49543c999500b4a39"} Jan 27 09:15:47 crc kubenswrapper[4985]: I0127 09:15:47.520387 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 09:15:47 crc kubenswrapper[4985]: I0127 09:15:47.520804 4985 scope.go:117] "RemoveContainer" containerID="07b7c34e9eb829e92d4806ac184bdc62e7a8dd935693ffb25bffdf51ae1499c8" Jan 27 09:15:47 crc kubenswrapper[4985]: I0127 09:15:47.524045 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ac2e67dd-037d-4a4e-bd63-18c0f2a46096","Type":"ContainerStarted","Data":"dae13ef99a27ab4fd3b2d024cd425b68f982b6a96d54a7e63848bc340e1b3e47"} Jan 27 09:15:47 crc kubenswrapper[4985]: I0127 09:15:47.524096 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ac2e67dd-037d-4a4e-bd63-18c0f2a46096","Type":"ContainerStarted","Data":"595fcdd02c5084ce5ddc49264e4957fdfa63a5888184740afd98e2ed134a3cc3"} Jan 27 09:15:47 crc kubenswrapper[4985]: I0127 09:15:47.556748 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.556716982 podStartE2EDuration="2.556716982s" podCreationTimestamp="2026-01-27 09:15:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:15:47.544926989 +0000 UTC m=+1331.836021840" watchObservedRunningTime="2026-01-27 09:15:47.556716982 +0000 UTC m=+1331.847811823" Jan 27 09:15:47 crc kubenswrapper[4985]: I0127 09:15:47.559671 4985 scope.go:117] "RemoveContainer" containerID="0029b4e35cbe9caa5c8447771a7dcc18a34aa34a0ff560129c887e77bc061407" Jan 27 09:15:47 crc kubenswrapper[4985]: I0127 09:15:47.574950 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 09:15:47 crc kubenswrapper[4985]: I0127 09:15:47.577841 4985 scope.go:117] "RemoveContainer" containerID="ad818b5377de9e76bbf8b14e4afd4fef17dff12debbdd626be2c17ef8927a968" Jan 27 09:15:47 crc kubenswrapper[4985]: I0127 09:15:47.612210 4985 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 09:15:47 crc kubenswrapper[4985]: I0127 09:15:47.623886 4985 scope.go:117] "RemoveContainer" containerID="6a14b4381dc228dea9f8f98b33354e91f937940018760fd6415d7dfc231561bb" Jan 27 09:15:47 crc kubenswrapper[4985]: I0127 09:15:47.665187 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 09:15:47 crc kubenswrapper[4985]: E0127 09:15:47.665648 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a0afade-f6ae-47f6-9977-9f3f0201fd4c" containerName="ceilometer-central-agent" Jan 27 09:15:47 crc kubenswrapper[4985]: I0127 09:15:47.665664 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a0afade-f6ae-47f6-9977-9f3f0201fd4c" containerName="ceilometer-central-agent" Jan 27 09:15:47 crc kubenswrapper[4985]: E0127 09:15:47.665684 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a0afade-f6ae-47f6-9977-9f3f0201fd4c" containerName="sg-core" Jan 27 09:15:47 crc kubenswrapper[4985]: I0127 09:15:47.665690 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a0afade-f6ae-47f6-9977-9f3f0201fd4c" containerName="sg-core" Jan 27 09:15:47 crc kubenswrapper[4985]: E0127 09:15:47.665703 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a0afade-f6ae-47f6-9977-9f3f0201fd4c" containerName="ceilometer-notification-agent" Jan 27 09:15:47 crc kubenswrapper[4985]: I0127 09:15:47.665708 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a0afade-f6ae-47f6-9977-9f3f0201fd4c" containerName="ceilometer-notification-agent" Jan 27 09:15:47 crc kubenswrapper[4985]: E0127 09:15:47.665719 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a0afade-f6ae-47f6-9977-9f3f0201fd4c" containerName="proxy-httpd" Jan 27 09:15:47 crc kubenswrapper[4985]: I0127 09:15:47.665725 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a0afade-f6ae-47f6-9977-9f3f0201fd4c" 
containerName="proxy-httpd" Jan 27 09:15:47 crc kubenswrapper[4985]: I0127 09:15:47.665917 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a0afade-f6ae-47f6-9977-9f3f0201fd4c" containerName="proxy-httpd" Jan 27 09:15:47 crc kubenswrapper[4985]: I0127 09:15:47.665928 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a0afade-f6ae-47f6-9977-9f3f0201fd4c" containerName="ceilometer-central-agent" Jan 27 09:15:47 crc kubenswrapper[4985]: I0127 09:15:47.665940 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a0afade-f6ae-47f6-9977-9f3f0201fd4c" containerName="ceilometer-notification-agent" Jan 27 09:15:47 crc kubenswrapper[4985]: I0127 09:15:47.665958 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a0afade-f6ae-47f6-9977-9f3f0201fd4c" containerName="sg-core" Jan 27 09:15:47 crc kubenswrapper[4985]: I0127 09:15:47.667945 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 09:15:47 crc kubenswrapper[4985]: I0127 09:15:47.678800 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 27 09:15:47 crc kubenswrapper[4985]: I0127 09:15:47.679407 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 09:15:47 crc kubenswrapper[4985]: I0127 09:15:47.679661 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 09:15:47 crc kubenswrapper[4985]: I0127 09:15:47.679968 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 09:15:47 crc kubenswrapper[4985]: I0127 09:15:47.690499 4985 scope.go:117] "RemoveContainer" containerID="07b7c34e9eb829e92d4806ac184bdc62e7a8dd935693ffb25bffdf51ae1499c8" Jan 27 09:15:47 crc kubenswrapper[4985]: E0127 09:15:47.692327 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"07b7c34e9eb829e92d4806ac184bdc62e7a8dd935693ffb25bffdf51ae1499c8\": container with ID starting with 07b7c34e9eb829e92d4806ac184bdc62e7a8dd935693ffb25bffdf51ae1499c8 not found: ID does not exist" containerID="07b7c34e9eb829e92d4806ac184bdc62e7a8dd935693ffb25bffdf51ae1499c8" Jan 27 09:15:47 crc kubenswrapper[4985]: I0127 09:15:47.692386 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07b7c34e9eb829e92d4806ac184bdc62e7a8dd935693ffb25bffdf51ae1499c8"} err="failed to get container status \"07b7c34e9eb829e92d4806ac184bdc62e7a8dd935693ffb25bffdf51ae1499c8\": rpc error: code = NotFound desc = could not find container \"07b7c34e9eb829e92d4806ac184bdc62e7a8dd935693ffb25bffdf51ae1499c8\": container with ID starting with 07b7c34e9eb829e92d4806ac184bdc62e7a8dd935693ffb25bffdf51ae1499c8 not found: ID does not exist" Jan 27 09:15:47 crc kubenswrapper[4985]: I0127 09:15:47.692416 4985 scope.go:117] "RemoveContainer" containerID="0029b4e35cbe9caa5c8447771a7dcc18a34aa34a0ff560129c887e77bc061407" Jan 27 09:15:47 crc kubenswrapper[4985]: E0127 09:15:47.692801 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0029b4e35cbe9caa5c8447771a7dcc18a34aa34a0ff560129c887e77bc061407\": container with ID starting with 0029b4e35cbe9caa5c8447771a7dcc18a34aa34a0ff560129c887e77bc061407 not found: ID does not exist" containerID="0029b4e35cbe9caa5c8447771a7dcc18a34aa34a0ff560129c887e77bc061407" Jan 27 09:15:47 crc kubenswrapper[4985]: I0127 09:15:47.692852 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0029b4e35cbe9caa5c8447771a7dcc18a34aa34a0ff560129c887e77bc061407"} err="failed to get container status \"0029b4e35cbe9caa5c8447771a7dcc18a34aa34a0ff560129c887e77bc061407\": rpc error: code = NotFound desc = could not find container 
\"0029b4e35cbe9caa5c8447771a7dcc18a34aa34a0ff560129c887e77bc061407\": container with ID starting with 0029b4e35cbe9caa5c8447771a7dcc18a34aa34a0ff560129c887e77bc061407 not found: ID does not exist" Jan 27 09:15:47 crc kubenswrapper[4985]: I0127 09:15:47.692877 4985 scope.go:117] "RemoveContainer" containerID="ad818b5377de9e76bbf8b14e4afd4fef17dff12debbdd626be2c17ef8927a968" Jan 27 09:15:47 crc kubenswrapper[4985]: E0127 09:15:47.693100 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad818b5377de9e76bbf8b14e4afd4fef17dff12debbdd626be2c17ef8927a968\": container with ID starting with ad818b5377de9e76bbf8b14e4afd4fef17dff12debbdd626be2c17ef8927a968 not found: ID does not exist" containerID="ad818b5377de9e76bbf8b14e4afd4fef17dff12debbdd626be2c17ef8927a968" Jan 27 09:15:47 crc kubenswrapper[4985]: I0127 09:15:47.693121 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad818b5377de9e76bbf8b14e4afd4fef17dff12debbdd626be2c17ef8927a968"} err="failed to get container status \"ad818b5377de9e76bbf8b14e4afd4fef17dff12debbdd626be2c17ef8927a968\": rpc error: code = NotFound desc = could not find container \"ad818b5377de9e76bbf8b14e4afd4fef17dff12debbdd626be2c17ef8927a968\": container with ID starting with ad818b5377de9e76bbf8b14e4afd4fef17dff12debbdd626be2c17ef8927a968 not found: ID does not exist" Jan 27 09:15:47 crc kubenswrapper[4985]: I0127 09:15:47.693134 4985 scope.go:117] "RemoveContainer" containerID="6a14b4381dc228dea9f8f98b33354e91f937940018760fd6415d7dfc231561bb" Jan 27 09:15:47 crc kubenswrapper[4985]: E0127 09:15:47.694985 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a14b4381dc228dea9f8f98b33354e91f937940018760fd6415d7dfc231561bb\": container with ID starting with 6a14b4381dc228dea9f8f98b33354e91f937940018760fd6415d7dfc231561bb not found: ID does not exist" 
containerID="6a14b4381dc228dea9f8f98b33354e91f937940018760fd6415d7dfc231561bb" Jan 27 09:15:47 crc kubenswrapper[4985]: I0127 09:15:47.695009 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a14b4381dc228dea9f8f98b33354e91f937940018760fd6415d7dfc231561bb"} err="failed to get container status \"6a14b4381dc228dea9f8f98b33354e91f937940018760fd6415d7dfc231561bb\": rpc error: code = NotFound desc = could not find container \"6a14b4381dc228dea9f8f98b33354e91f937940018760fd6415d7dfc231561bb\": container with ID starting with 6a14b4381dc228dea9f8f98b33354e91f937940018760fd6415d7dfc231561bb not found: ID does not exist" Jan 27 09:15:47 crc kubenswrapper[4985]: I0127 09:15:47.708809 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76n5d\" (UniqueName: \"kubernetes.io/projected/ebebddf7-8341-4e17-a156-e251351db2fa-kube-api-access-76n5d\") pod \"ceilometer-0\" (UID: \"ebebddf7-8341-4e17-a156-e251351db2fa\") " pod="openstack/ceilometer-0" Jan 27 09:15:47 crc kubenswrapper[4985]: I0127 09:15:47.708960 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ebebddf7-8341-4e17-a156-e251351db2fa-scripts\") pod \"ceilometer-0\" (UID: \"ebebddf7-8341-4e17-a156-e251351db2fa\") " pod="openstack/ceilometer-0" Jan 27 09:15:47 crc kubenswrapper[4985]: I0127 09:15:47.708984 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ebebddf7-8341-4e17-a156-e251351db2fa-run-httpd\") pod \"ceilometer-0\" (UID: \"ebebddf7-8341-4e17-a156-e251351db2fa\") " pod="openstack/ceilometer-0" Jan 27 09:15:47 crc kubenswrapper[4985]: I0127 09:15:47.709002 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/ebebddf7-8341-4e17-a156-e251351db2fa-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ebebddf7-8341-4e17-a156-e251351db2fa\") " pod="openstack/ceilometer-0" Jan 27 09:15:47 crc kubenswrapper[4985]: I0127 09:15:47.709581 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ebebddf7-8341-4e17-a156-e251351db2fa-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ebebddf7-8341-4e17-a156-e251351db2fa\") " pod="openstack/ceilometer-0" Jan 27 09:15:47 crc kubenswrapper[4985]: I0127 09:15:47.709760 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebebddf7-8341-4e17-a156-e251351db2fa-config-data\") pod \"ceilometer-0\" (UID: \"ebebddf7-8341-4e17-a156-e251351db2fa\") " pod="openstack/ceilometer-0" Jan 27 09:15:47 crc kubenswrapper[4985]: I0127 09:15:47.709830 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebebddf7-8341-4e17-a156-e251351db2fa-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ebebddf7-8341-4e17-a156-e251351db2fa\") " pod="openstack/ceilometer-0" Jan 27 09:15:47 crc kubenswrapper[4985]: I0127 09:15:47.709934 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ebebddf7-8341-4e17-a156-e251351db2fa-log-httpd\") pod \"ceilometer-0\" (UID: \"ebebddf7-8341-4e17-a156-e251351db2fa\") " pod="openstack/ceilometer-0" Jan 27 09:15:47 crc kubenswrapper[4985]: I0127 09:15:47.812155 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ebebddf7-8341-4e17-a156-e251351db2fa-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: 
\"ebebddf7-8341-4e17-a156-e251351db2fa\") " pod="openstack/ceilometer-0" Jan 27 09:15:47 crc kubenswrapper[4985]: I0127 09:15:47.812253 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebebddf7-8341-4e17-a156-e251351db2fa-config-data\") pod \"ceilometer-0\" (UID: \"ebebddf7-8341-4e17-a156-e251351db2fa\") " pod="openstack/ceilometer-0" Jan 27 09:15:47 crc kubenswrapper[4985]: I0127 09:15:47.812283 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebebddf7-8341-4e17-a156-e251351db2fa-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ebebddf7-8341-4e17-a156-e251351db2fa\") " pod="openstack/ceilometer-0" Jan 27 09:15:47 crc kubenswrapper[4985]: I0127 09:15:47.812331 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ebebddf7-8341-4e17-a156-e251351db2fa-log-httpd\") pod \"ceilometer-0\" (UID: \"ebebddf7-8341-4e17-a156-e251351db2fa\") " pod="openstack/ceilometer-0" Jan 27 09:15:47 crc kubenswrapper[4985]: I0127 09:15:47.812675 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76n5d\" (UniqueName: \"kubernetes.io/projected/ebebddf7-8341-4e17-a156-e251351db2fa-kube-api-access-76n5d\") pod \"ceilometer-0\" (UID: \"ebebddf7-8341-4e17-a156-e251351db2fa\") " pod="openstack/ceilometer-0" Jan 27 09:15:47 crc kubenswrapper[4985]: I0127 09:15:47.812708 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ebebddf7-8341-4e17-a156-e251351db2fa-scripts\") pod \"ceilometer-0\" (UID: \"ebebddf7-8341-4e17-a156-e251351db2fa\") " pod="openstack/ceilometer-0" Jan 27 09:15:47 crc kubenswrapper[4985]: I0127 09:15:47.812729 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/ebebddf7-8341-4e17-a156-e251351db2fa-run-httpd\") pod \"ceilometer-0\" (UID: \"ebebddf7-8341-4e17-a156-e251351db2fa\") " pod="openstack/ceilometer-0" Jan 27 09:15:47 crc kubenswrapper[4985]: I0127 09:15:47.812757 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ebebddf7-8341-4e17-a156-e251351db2fa-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ebebddf7-8341-4e17-a156-e251351db2fa\") " pod="openstack/ceilometer-0" Jan 27 09:15:47 crc kubenswrapper[4985]: I0127 09:15:47.814289 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ebebddf7-8341-4e17-a156-e251351db2fa-log-httpd\") pod \"ceilometer-0\" (UID: \"ebebddf7-8341-4e17-a156-e251351db2fa\") " pod="openstack/ceilometer-0" Jan 27 09:15:47 crc kubenswrapper[4985]: I0127 09:15:47.820204 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ebebddf7-8341-4e17-a156-e251351db2fa-run-httpd\") pod \"ceilometer-0\" (UID: \"ebebddf7-8341-4e17-a156-e251351db2fa\") " pod="openstack/ceilometer-0" Jan 27 09:15:47 crc kubenswrapper[4985]: I0127 09:15:47.833705 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ebebddf7-8341-4e17-a156-e251351db2fa-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ebebddf7-8341-4e17-a156-e251351db2fa\") " pod="openstack/ceilometer-0" Jan 27 09:15:47 crc kubenswrapper[4985]: I0127 09:15:47.838087 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebebddf7-8341-4e17-a156-e251351db2fa-config-data\") pod \"ceilometer-0\" (UID: \"ebebddf7-8341-4e17-a156-e251351db2fa\") " pod="openstack/ceilometer-0" Jan 27 09:15:47 crc kubenswrapper[4985]: I0127 09:15:47.838995 4985 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ebebddf7-8341-4e17-a156-e251351db2fa-scripts\") pod \"ceilometer-0\" (UID: \"ebebddf7-8341-4e17-a156-e251351db2fa\") " pod="openstack/ceilometer-0" Jan 27 09:15:47 crc kubenswrapper[4985]: I0127 09:15:47.839379 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ebebddf7-8341-4e17-a156-e251351db2fa-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ebebddf7-8341-4e17-a156-e251351db2fa\") " pod="openstack/ceilometer-0" Jan 27 09:15:47 crc kubenswrapper[4985]: I0127 09:15:47.843142 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebebddf7-8341-4e17-a156-e251351db2fa-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ebebddf7-8341-4e17-a156-e251351db2fa\") " pod="openstack/ceilometer-0" Jan 27 09:15:47 crc kubenswrapper[4985]: I0127 09:15:47.847653 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76n5d\" (UniqueName: \"kubernetes.io/projected/ebebddf7-8341-4e17-a156-e251351db2fa-kube-api-access-76n5d\") pod \"ceilometer-0\" (UID: \"ebebddf7-8341-4e17-a156-e251351db2fa\") " pod="openstack/ceilometer-0" Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.088280 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.315349 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.429729 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e2b17df-10df-4a9d-b026-3bf7f1517776-config-data\") pod \"7e2b17df-10df-4a9d-b026-3bf7f1517776\" (UID: \"7e2b17df-10df-4a9d-b026-3bf7f1517776\") " Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.429796 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e2b17df-10df-4a9d-b026-3bf7f1517776-internal-tls-certs\") pod \"7e2b17df-10df-4a9d-b026-3bf7f1517776\" (UID: \"7e2b17df-10df-4a9d-b026-3bf7f1517776\") " Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.429829 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e2b17df-10df-4a9d-b026-3bf7f1517776-combined-ca-bundle\") pod \"7e2b17df-10df-4a9d-b026-3bf7f1517776\" (UID: \"7e2b17df-10df-4a9d-b026-3bf7f1517776\") " Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.429867 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e2b17df-10df-4a9d-b026-3bf7f1517776-public-tls-certs\") pod \"7e2b17df-10df-4a9d-b026-3bf7f1517776\" (UID: \"7e2b17df-10df-4a9d-b026-3bf7f1517776\") " Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.429922 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cklfk\" (UniqueName: \"kubernetes.io/projected/7e2b17df-10df-4a9d-b026-3bf7f1517776-kube-api-access-cklfk\") pod \"7e2b17df-10df-4a9d-b026-3bf7f1517776\" (UID: \"7e2b17df-10df-4a9d-b026-3bf7f1517776\") " Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.429983 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/7e2b17df-10df-4a9d-b026-3bf7f1517776-logs\") pod \"7e2b17df-10df-4a9d-b026-3bf7f1517776\" (UID: \"7e2b17df-10df-4a9d-b026-3bf7f1517776\") " Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.431281 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e2b17df-10df-4a9d-b026-3bf7f1517776-logs" (OuterVolumeSpecName: "logs") pod "7e2b17df-10df-4a9d-b026-3bf7f1517776" (UID: "7e2b17df-10df-4a9d-b026-3bf7f1517776"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.437167 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e2b17df-10df-4a9d-b026-3bf7f1517776-kube-api-access-cklfk" (OuterVolumeSpecName: "kube-api-access-cklfk") pod "7e2b17df-10df-4a9d-b026-3bf7f1517776" (UID: "7e2b17df-10df-4a9d-b026-3bf7f1517776"). InnerVolumeSpecName "kube-api-access-cklfk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.485815 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a0afade-f6ae-47f6-9977-9f3f0201fd4c" path="/var/lib/kubelet/pods/7a0afade-f6ae-47f6-9977-9f3f0201fd4c/volumes" Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.508475 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e2b17df-10df-4a9d-b026-3bf7f1517776-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7e2b17df-10df-4a9d-b026-3bf7f1517776" (UID: "7e2b17df-10df-4a9d-b026-3bf7f1517776"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.508753 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e2b17df-10df-4a9d-b026-3bf7f1517776-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "7e2b17df-10df-4a9d-b026-3bf7f1517776" (UID: "7e2b17df-10df-4a9d-b026-3bf7f1517776"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.512075 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e2b17df-10df-4a9d-b026-3bf7f1517776-config-data" (OuterVolumeSpecName: "config-data") pod "7e2b17df-10df-4a9d-b026-3bf7f1517776" (UID: "7e2b17df-10df-4a9d-b026-3bf7f1517776"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.512296 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.514716 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e2b17df-10df-4a9d-b026-3bf7f1517776-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "7e2b17df-10df-4a9d-b026-3bf7f1517776" (UID: "7e2b17df-10df-4a9d-b026-3bf7f1517776"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.535930 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e2b17df-10df-4a9d-b026-3bf7f1517776-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.535966 4985 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e2b17df-10df-4a9d-b026-3bf7f1517776-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.535980 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e2b17df-10df-4a9d-b026-3bf7f1517776-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.535989 4985 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e2b17df-10df-4a9d-b026-3bf7f1517776-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.535999 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cklfk\" (UniqueName: \"kubernetes.io/projected/7e2b17df-10df-4a9d-b026-3bf7f1517776-kube-api-access-cklfk\") on node \"crc\" DevicePath \"\"" Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.536008 4985 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e2b17df-10df-4a9d-b026-3bf7f1517776-logs\") on node \"crc\" DevicePath \"\"" Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.547873 4985 generic.go:334] "Generic (PLEG): container finished" podID="dcf1c027-3aed-4213-b1ce-2f9fcab702aa" containerID="589cb8d0328c30493a415d367f844167c6192ded6bd85f36ec938643fc6fe7e4" exitCode=0 Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.548080 4985 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.548171 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"dcf1c027-3aed-4213-b1ce-2f9fcab702aa","Type":"ContainerDied","Data":"589cb8d0328c30493a415d367f844167c6192ded6bd85f36ec938643fc6fe7e4"} Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.548227 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"dcf1c027-3aed-4213-b1ce-2f9fcab702aa","Type":"ContainerDied","Data":"61a4d9f07378186d764cd8ba37bfd00fcdd5427b907cb1b7721f82c3ba0cd3b9"} Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.548248 4985 scope.go:117] "RemoveContainer" containerID="589cb8d0328c30493a415d367f844167c6192ded6bd85f36ec938643fc6fe7e4" Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.557673 4985 generic.go:334] "Generic (PLEG): container finished" podID="7e2b17df-10df-4a9d-b026-3bf7f1517776" containerID="286ec3c817da6030269a7145b36a679536d740ebdf5c890add4145d061780a7d" exitCode=0 Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.557865 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7e2b17df-10df-4a9d-b026-3bf7f1517776","Type":"ContainerDied","Data":"286ec3c817da6030269a7145b36a679536d740ebdf5c890add4145d061780a7d"} Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.557904 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7e2b17df-10df-4a9d-b026-3bf7f1517776","Type":"ContainerDied","Data":"cc8449e94045bbfec6f3124281689711dcbba69b90a0f122dde0db1a8e59a7e1"} Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.557998 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.578239 4985 generic.go:334] "Generic (PLEG): container finished" podID="116ef18b-261c-457e-a687-782db009b9de" containerID="d98f7170e8ba3c4bfdcae8597c61d70d73bca409cc420c41664686ac83d6e6b4" exitCode=0 Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.578454 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"116ef18b-261c-457e-a687-782db009b9de","Type":"ContainerDied","Data":"d98f7170e8ba3c4bfdcae8597c61d70d73bca409cc420c41664686ac83d6e6b4"} Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.606330 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.625755 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.626034 4985 scope.go:117] "RemoveContainer" containerID="589cb8d0328c30493a415d367f844167c6192ded6bd85f36ec938643fc6fe7e4" Jan 27 09:15:48 crc kubenswrapper[4985]: E0127 09:15:48.626955 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"589cb8d0328c30493a415d367f844167c6192ded6bd85f36ec938643fc6fe7e4\": container with ID starting with 589cb8d0328c30493a415d367f844167c6192ded6bd85f36ec938643fc6fe7e4 not found: ID does not exist" containerID="589cb8d0328c30493a415d367f844167c6192ded6bd85f36ec938643fc6fe7e4" Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.626988 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"589cb8d0328c30493a415d367f844167c6192ded6bd85f36ec938643fc6fe7e4"} err="failed to get container status \"589cb8d0328c30493a415d367f844167c6192ded6bd85f36ec938643fc6fe7e4\": rpc error: code = NotFound desc = could not find container 
\"589cb8d0328c30493a415d367f844167c6192ded6bd85f36ec938643fc6fe7e4\": container with ID starting with 589cb8d0328c30493a415d367f844167c6192ded6bd85f36ec938643fc6fe7e4 not found: ID does not exist" Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.627028 4985 scope.go:117] "RemoveContainer" containerID="286ec3c817da6030269a7145b36a679536d740ebdf5c890add4145d061780a7d" Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.638062 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcf1c027-3aed-4213-b1ce-2f9fcab702aa-config-data\") pod \"dcf1c027-3aed-4213-b1ce-2f9fcab702aa\" (UID: \"dcf1c027-3aed-4213-b1ce-2f9fcab702aa\") " Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.638113 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t4tt6\" (UniqueName: \"kubernetes.io/projected/dcf1c027-3aed-4213-b1ce-2f9fcab702aa-kube-api-access-t4tt6\") pod \"dcf1c027-3aed-4213-b1ce-2f9fcab702aa\" (UID: \"dcf1c027-3aed-4213-b1ce-2f9fcab702aa\") " Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.638271 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcf1c027-3aed-4213-b1ce-2f9fcab702aa-combined-ca-bundle\") pod \"dcf1c027-3aed-4213-b1ce-2f9fcab702aa\" (UID: \"dcf1c027-3aed-4213-b1ce-2f9fcab702aa\") " Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.639637 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 27 09:15:48 crc kubenswrapper[4985]: E0127 09:15:48.640071 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcf1c027-3aed-4213-b1ce-2f9fcab702aa" containerName="nova-scheduler-scheduler" Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.640086 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcf1c027-3aed-4213-b1ce-2f9fcab702aa" 
containerName="nova-scheduler-scheduler" Jan 27 09:15:48 crc kubenswrapper[4985]: E0127 09:15:48.640114 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e2b17df-10df-4a9d-b026-3bf7f1517776" containerName="nova-api-api" Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.640120 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e2b17df-10df-4a9d-b026-3bf7f1517776" containerName="nova-api-api" Jan 27 09:15:48 crc kubenswrapper[4985]: E0127 09:15:48.640131 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e2b17df-10df-4a9d-b026-3bf7f1517776" containerName="nova-api-log" Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.640137 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e2b17df-10df-4a9d-b026-3bf7f1517776" containerName="nova-api-log" Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.640367 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e2b17df-10df-4a9d-b026-3bf7f1517776" containerName="nova-api-log" Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.640381 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e2b17df-10df-4a9d-b026-3bf7f1517776" containerName="nova-api-api" Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.640399 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcf1c027-3aed-4213-b1ce-2f9fcab702aa" containerName="nova-scheduler-scheduler" Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.641430 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.646246 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcf1c027-3aed-4213-b1ce-2f9fcab702aa-kube-api-access-t4tt6" (OuterVolumeSpecName: "kube-api-access-t4tt6") pod "dcf1c027-3aed-4213-b1ce-2f9fcab702aa" (UID: "dcf1c027-3aed-4213-b1ce-2f9fcab702aa"). 
InnerVolumeSpecName "kube-api-access-t4tt6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.663757 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.664014 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.664138 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.664439 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="7bb7cf54-ef5f-41f6-b383-48b387842365" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.217:8775/\": read tcp 10.217.0.2:49814->10.217.0.217:8775: read: connection reset by peer" Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.664763 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="7bb7cf54-ef5f-41f6-b383-48b387842365" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.217:8775/\": read tcp 10.217.0.2:49806->10.217.0.217:8775: read: connection reset by peer" Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.681854 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.693076 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.693543 4985 scope.go:117] "RemoveContainer" containerID="17ec1d244b3e94b9ebdf61b9b8b73fcdc29f8f5dbdf33b39aa94f9dd834e1f2a" Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.721334 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/dcf1c027-3aed-4213-b1ce-2f9fcab702aa-config-data" (OuterVolumeSpecName: "config-data") pod "dcf1c027-3aed-4213-b1ce-2f9fcab702aa" (UID: "dcf1c027-3aed-4213-b1ce-2f9fcab702aa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.727793 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcf1c027-3aed-4213-b1ce-2f9fcab702aa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dcf1c027-3aed-4213-b1ce-2f9fcab702aa" (UID: "dcf1c027-3aed-4213-b1ce-2f9fcab702aa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.741330 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/984c0b5b-012f-4d5d-ba78-fe567ae73d59-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"984c0b5b-012f-4d5d-ba78-fe567ae73d59\") " pod="openstack/nova-api-0" Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.741410 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/984c0b5b-012f-4d5d-ba78-fe567ae73d59-logs\") pod \"nova-api-0\" (UID: \"984c0b5b-012f-4d5d-ba78-fe567ae73d59\") " pod="openstack/nova-api-0" Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.741444 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhtm9\" (UniqueName: \"kubernetes.io/projected/984c0b5b-012f-4d5d-ba78-fe567ae73d59-kube-api-access-mhtm9\") pod \"nova-api-0\" (UID: \"984c0b5b-012f-4d5d-ba78-fe567ae73d59\") " pod="openstack/nova-api-0" Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.741490 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/984c0b5b-012f-4d5d-ba78-fe567ae73d59-public-tls-certs\") pod \"nova-api-0\" (UID: \"984c0b5b-012f-4d5d-ba78-fe567ae73d59\") " pod="openstack/nova-api-0" Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.741594 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/984c0b5b-012f-4d5d-ba78-fe567ae73d59-config-data\") pod \"nova-api-0\" (UID: \"984c0b5b-012f-4d5d-ba78-fe567ae73d59\") " pod="openstack/nova-api-0" Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.741620 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/984c0b5b-012f-4d5d-ba78-fe567ae73d59-internal-tls-certs\") pod \"nova-api-0\" (UID: \"984c0b5b-012f-4d5d-ba78-fe567ae73d59\") " pod="openstack/nova-api-0" Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.742037 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcf1c027-3aed-4213-b1ce-2f9fcab702aa-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.742081 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t4tt6\" (UniqueName: \"kubernetes.io/projected/dcf1c027-3aed-4213-b1ce-2f9fcab702aa-kube-api-access-t4tt6\") on node \"crc\" DevicePath \"\"" Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.742095 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcf1c027-3aed-4213-b1ce-2f9fcab702aa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.760483 4985 scope.go:117] "RemoveContainer" containerID="286ec3c817da6030269a7145b36a679536d740ebdf5c890add4145d061780a7d" Jan 27 09:15:48 crc 
kubenswrapper[4985]: E0127 09:15:48.761535 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"286ec3c817da6030269a7145b36a679536d740ebdf5c890add4145d061780a7d\": container with ID starting with 286ec3c817da6030269a7145b36a679536d740ebdf5c890add4145d061780a7d not found: ID does not exist" containerID="286ec3c817da6030269a7145b36a679536d740ebdf5c890add4145d061780a7d" Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.761574 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"286ec3c817da6030269a7145b36a679536d740ebdf5c890add4145d061780a7d"} err="failed to get container status \"286ec3c817da6030269a7145b36a679536d740ebdf5c890add4145d061780a7d\": rpc error: code = NotFound desc = could not find container \"286ec3c817da6030269a7145b36a679536d740ebdf5c890add4145d061780a7d\": container with ID starting with 286ec3c817da6030269a7145b36a679536d740ebdf5c890add4145d061780a7d not found: ID does not exist" Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.761597 4985 scope.go:117] "RemoveContainer" containerID="17ec1d244b3e94b9ebdf61b9b8b73fcdc29f8f5dbdf33b39aa94f9dd834e1f2a" Jan 27 09:15:48 crc kubenswrapper[4985]: E0127 09:15:48.762292 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17ec1d244b3e94b9ebdf61b9b8b73fcdc29f8f5dbdf33b39aa94f9dd834e1f2a\": container with ID starting with 17ec1d244b3e94b9ebdf61b9b8b73fcdc29f8f5dbdf33b39aa94f9dd834e1f2a not found: ID does not exist" containerID="17ec1d244b3e94b9ebdf61b9b8b73fcdc29f8f5dbdf33b39aa94f9dd834e1f2a" Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.762327 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17ec1d244b3e94b9ebdf61b9b8b73fcdc29f8f5dbdf33b39aa94f9dd834e1f2a"} err="failed to get container status 
\"17ec1d244b3e94b9ebdf61b9b8b73fcdc29f8f5dbdf33b39aa94f9dd834e1f2a\": rpc error: code = NotFound desc = could not find container \"17ec1d244b3e94b9ebdf61b9b8b73fcdc29f8f5dbdf33b39aa94f9dd834e1f2a\": container with ID starting with 17ec1d244b3e94b9ebdf61b9b8b73fcdc29f8f5dbdf33b39aa94f9dd834e1f2a not found: ID does not exist" Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.788401 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.842747 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/116ef18b-261c-457e-a687-782db009b9de-config-data\") pod \"116ef18b-261c-457e-a687-782db009b9de\" (UID: \"116ef18b-261c-457e-a687-782db009b9de\") " Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.842813 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xjzfs\" (UniqueName: \"kubernetes.io/projected/116ef18b-261c-457e-a687-782db009b9de-kube-api-access-xjzfs\") pod \"116ef18b-261c-457e-a687-782db009b9de\" (UID: \"116ef18b-261c-457e-a687-782db009b9de\") " Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.842982 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/116ef18b-261c-457e-a687-782db009b9de-combined-ca-bundle\") pod \"116ef18b-261c-457e-a687-782db009b9de\" (UID: \"116ef18b-261c-457e-a687-782db009b9de\") " Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.843214 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/984c0b5b-012f-4d5d-ba78-fe567ae73d59-internal-tls-certs\") pod \"nova-api-0\" (UID: \"984c0b5b-012f-4d5d-ba78-fe567ae73d59\") " pod="openstack/nova-api-0" Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 
09:15:48.843233 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/984c0b5b-012f-4d5d-ba78-fe567ae73d59-config-data\") pod \"nova-api-0\" (UID: \"984c0b5b-012f-4d5d-ba78-fe567ae73d59\") " pod="openstack/nova-api-0" Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.843351 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/984c0b5b-012f-4d5d-ba78-fe567ae73d59-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"984c0b5b-012f-4d5d-ba78-fe567ae73d59\") " pod="openstack/nova-api-0" Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.843381 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/984c0b5b-012f-4d5d-ba78-fe567ae73d59-logs\") pod \"nova-api-0\" (UID: \"984c0b5b-012f-4d5d-ba78-fe567ae73d59\") " pod="openstack/nova-api-0" Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.843401 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhtm9\" (UniqueName: \"kubernetes.io/projected/984c0b5b-012f-4d5d-ba78-fe567ae73d59-kube-api-access-mhtm9\") pod \"nova-api-0\" (UID: \"984c0b5b-012f-4d5d-ba78-fe567ae73d59\") " pod="openstack/nova-api-0" Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.843441 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/984c0b5b-012f-4d5d-ba78-fe567ae73d59-public-tls-certs\") pod \"nova-api-0\" (UID: \"984c0b5b-012f-4d5d-ba78-fe567ae73d59\") " pod="openstack/nova-api-0" Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.846201 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/984c0b5b-012f-4d5d-ba78-fe567ae73d59-logs\") pod \"nova-api-0\" (UID: 
\"984c0b5b-012f-4d5d-ba78-fe567ae73d59\") " pod="openstack/nova-api-0" Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.847817 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/984c0b5b-012f-4d5d-ba78-fe567ae73d59-internal-tls-certs\") pod \"nova-api-0\" (UID: \"984c0b5b-012f-4d5d-ba78-fe567ae73d59\") " pod="openstack/nova-api-0" Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.848076 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/984c0b5b-012f-4d5d-ba78-fe567ae73d59-config-data\") pod \"nova-api-0\" (UID: \"984c0b5b-012f-4d5d-ba78-fe567ae73d59\") " pod="openstack/nova-api-0" Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.848817 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/116ef18b-261c-457e-a687-782db009b9de-kube-api-access-xjzfs" (OuterVolumeSpecName: "kube-api-access-xjzfs") pod "116ef18b-261c-457e-a687-782db009b9de" (UID: "116ef18b-261c-457e-a687-782db009b9de"). InnerVolumeSpecName "kube-api-access-xjzfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.849179 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/984c0b5b-012f-4d5d-ba78-fe567ae73d59-public-tls-certs\") pod \"nova-api-0\" (UID: \"984c0b5b-012f-4d5d-ba78-fe567ae73d59\") " pod="openstack/nova-api-0" Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.852921 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/984c0b5b-012f-4d5d-ba78-fe567ae73d59-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"984c0b5b-012f-4d5d-ba78-fe567ae73d59\") " pod="openstack/nova-api-0" Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.865737 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhtm9\" (UniqueName: \"kubernetes.io/projected/984c0b5b-012f-4d5d-ba78-fe567ae73d59-kube-api-access-mhtm9\") pod \"nova-api-0\" (UID: \"984c0b5b-012f-4d5d-ba78-fe567ae73d59\") " pod="openstack/nova-api-0" Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.879563 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/116ef18b-261c-457e-a687-782db009b9de-config-data" (OuterVolumeSpecName: "config-data") pod "116ef18b-261c-457e-a687-782db009b9de" (UID: "116ef18b-261c-457e-a687-782db009b9de"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.881470 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/116ef18b-261c-457e-a687-782db009b9de-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "116ef18b-261c-457e-a687-782db009b9de" (UID: "116ef18b-261c-457e-a687-782db009b9de"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.923329 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.933438 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.945627 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/116ef18b-261c-457e-a687-782db009b9de-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.945674 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xjzfs\" (UniqueName: \"kubernetes.io/projected/116ef18b-261c-457e-a687-782db009b9de-kube-api-access-xjzfs\") on node \"crc\" DevicePath \"\"" Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.945687 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/116ef18b-261c-457e-a687-782db009b9de-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.945641 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 09:15:48 crc kubenswrapper[4985]: E0127 09:15:48.947029 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="116ef18b-261c-457e-a687-782db009b9de" containerName="nova-cell1-conductor-conductor" Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.947128 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="116ef18b-261c-457e-a687-782db009b9de" containerName="nova-cell1-conductor-conductor" Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.947562 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="116ef18b-261c-457e-a687-782db009b9de" containerName="nova-cell1-conductor-conductor" Jan 27 09:15:48 crc 
kubenswrapper[4985]: I0127 09:15:48.948587 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.952543 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.969186 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 09:15:48 crc kubenswrapper[4985]: I0127 09:15:48.995047 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 09:15:49 crc kubenswrapper[4985]: I0127 09:15:49.041855 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 09:15:49 crc kubenswrapper[4985]: I0127 09:15:49.047115 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70822cc5-7296-435f-83fe-e69451683506-config-data\") pod \"nova-scheduler-0\" (UID: \"70822cc5-7296-435f-83fe-e69451683506\") " pod="openstack/nova-scheduler-0" Jan 27 09:15:49 crc kubenswrapper[4985]: I0127 09:15:49.047297 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rl5x7\" (UniqueName: \"kubernetes.io/projected/70822cc5-7296-435f-83fe-e69451683506-kube-api-access-rl5x7\") pod \"nova-scheduler-0\" (UID: \"70822cc5-7296-435f-83fe-e69451683506\") " pod="openstack/nova-scheduler-0" Jan 27 09:15:49 crc kubenswrapper[4985]: I0127 09:15:49.047336 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70822cc5-7296-435f-83fe-e69451683506-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"70822cc5-7296-435f-83fe-e69451683506\") " pod="openstack/nova-scheduler-0" Jan 27 09:15:49 crc 
kubenswrapper[4985]: E0127 09:15:49.092824 4985 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddcf1c027_3aed_4213_b1ce_2f9fcab702aa.slice\": RecentStats: unable to find data in memory cache]" Jan 27 09:15:49 crc kubenswrapper[4985]: I0127 09:15:49.148506 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bb7cf54-ef5f-41f6-b383-48b387842365-combined-ca-bundle\") pod \"7bb7cf54-ef5f-41f6-b383-48b387842365\" (UID: \"7bb7cf54-ef5f-41f6-b383-48b387842365\") " Jan 27 09:15:49 crc kubenswrapper[4985]: I0127 09:15:49.148677 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hljcj\" (UniqueName: \"kubernetes.io/projected/7bb7cf54-ef5f-41f6-b383-48b387842365-kube-api-access-hljcj\") pod \"7bb7cf54-ef5f-41f6-b383-48b387842365\" (UID: \"7bb7cf54-ef5f-41f6-b383-48b387842365\") " Jan 27 09:15:49 crc kubenswrapper[4985]: I0127 09:15:49.148710 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7bb7cf54-ef5f-41f6-b383-48b387842365-nova-metadata-tls-certs\") pod \"7bb7cf54-ef5f-41f6-b383-48b387842365\" (UID: \"7bb7cf54-ef5f-41f6-b383-48b387842365\") " Jan 27 09:15:49 crc kubenswrapper[4985]: I0127 09:15:49.148796 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7bb7cf54-ef5f-41f6-b383-48b387842365-logs\") pod \"7bb7cf54-ef5f-41f6-b383-48b387842365\" (UID: \"7bb7cf54-ef5f-41f6-b383-48b387842365\") " Jan 27 09:15:49 crc kubenswrapper[4985]: I0127 09:15:49.148819 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bb7cf54-ef5f-41f6-b383-48b387842365-config-data\") 
pod \"7bb7cf54-ef5f-41f6-b383-48b387842365\" (UID: \"7bb7cf54-ef5f-41f6-b383-48b387842365\") " Jan 27 09:15:49 crc kubenswrapper[4985]: I0127 09:15:49.149227 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70822cc5-7296-435f-83fe-e69451683506-config-data\") pod \"nova-scheduler-0\" (UID: \"70822cc5-7296-435f-83fe-e69451683506\") " pod="openstack/nova-scheduler-0" Jan 27 09:15:49 crc kubenswrapper[4985]: I0127 09:15:49.149359 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rl5x7\" (UniqueName: \"kubernetes.io/projected/70822cc5-7296-435f-83fe-e69451683506-kube-api-access-rl5x7\") pod \"nova-scheduler-0\" (UID: \"70822cc5-7296-435f-83fe-e69451683506\") " pod="openstack/nova-scheduler-0" Jan 27 09:15:49 crc kubenswrapper[4985]: I0127 09:15:49.149390 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70822cc5-7296-435f-83fe-e69451683506-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"70822cc5-7296-435f-83fe-e69451683506\") " pod="openstack/nova-scheduler-0" Jan 27 09:15:49 crc kubenswrapper[4985]: I0127 09:15:49.150825 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7bb7cf54-ef5f-41f6-b383-48b387842365-logs" (OuterVolumeSpecName: "logs") pod "7bb7cf54-ef5f-41f6-b383-48b387842365" (UID: "7bb7cf54-ef5f-41f6-b383-48b387842365"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:15:49 crc kubenswrapper[4985]: I0127 09:15:49.157270 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70822cc5-7296-435f-83fe-e69451683506-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"70822cc5-7296-435f-83fe-e69451683506\") " pod="openstack/nova-scheduler-0" Jan 27 09:15:49 crc kubenswrapper[4985]: I0127 09:15:49.158497 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb7cf54-ef5f-41f6-b383-48b387842365-kube-api-access-hljcj" (OuterVolumeSpecName: "kube-api-access-hljcj") pod "7bb7cf54-ef5f-41f6-b383-48b387842365" (UID: "7bb7cf54-ef5f-41f6-b383-48b387842365"). InnerVolumeSpecName "kube-api-access-hljcj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:15:49 crc kubenswrapper[4985]: I0127 09:15:49.161242 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70822cc5-7296-435f-83fe-e69451683506-config-data\") pod \"nova-scheduler-0\" (UID: \"70822cc5-7296-435f-83fe-e69451683506\") " pod="openstack/nova-scheduler-0" Jan 27 09:15:49 crc kubenswrapper[4985]: I0127 09:15:49.169676 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rl5x7\" (UniqueName: \"kubernetes.io/projected/70822cc5-7296-435f-83fe-e69451683506-kube-api-access-rl5x7\") pod \"nova-scheduler-0\" (UID: \"70822cc5-7296-435f-83fe-e69451683506\") " pod="openstack/nova-scheduler-0" Jan 27 09:15:49 crc kubenswrapper[4985]: I0127 09:15:49.194665 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bb7cf54-ef5f-41f6-b383-48b387842365-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7bb7cf54-ef5f-41f6-b383-48b387842365" (UID: "7bb7cf54-ef5f-41f6-b383-48b387842365"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:15:49 crc kubenswrapper[4985]: I0127 09:15:49.194710 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bb7cf54-ef5f-41f6-b383-48b387842365-config-data" (OuterVolumeSpecName: "config-data") pod "7bb7cf54-ef5f-41f6-b383-48b387842365" (UID: "7bb7cf54-ef5f-41f6-b383-48b387842365"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:15:49 crc kubenswrapper[4985]: I0127 09:15:49.224536 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bb7cf54-ef5f-41f6-b383-48b387842365-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "7bb7cf54-ef5f-41f6-b383-48b387842365" (UID: "7bb7cf54-ef5f-41f6-b383-48b387842365"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:15:49 crc kubenswrapper[4985]: I0127 09:15:49.251706 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bb7cf54-ef5f-41f6-b383-48b387842365-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:15:49 crc kubenswrapper[4985]: I0127 09:15:49.251746 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hljcj\" (UniqueName: \"kubernetes.io/projected/7bb7cf54-ef5f-41f6-b383-48b387842365-kube-api-access-hljcj\") on node \"crc\" DevicePath \"\"" Jan 27 09:15:49 crc kubenswrapper[4985]: I0127 09:15:49.251759 4985 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7bb7cf54-ef5f-41f6-b383-48b387842365-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 09:15:49 crc kubenswrapper[4985]: I0127 09:15:49.251767 4985 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7bb7cf54-ef5f-41f6-b383-48b387842365-logs\") on 
node \"crc\" DevicePath \"\"" Jan 27 09:15:49 crc kubenswrapper[4985]: I0127 09:15:49.251779 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bb7cf54-ef5f-41f6-b383-48b387842365-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 09:15:49 crc kubenswrapper[4985]: I0127 09:15:49.287706 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 09:15:49 crc kubenswrapper[4985]: I0127 09:15:49.527129 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 09:15:49 crc kubenswrapper[4985]: W0127 09:15:49.531939 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod984c0b5b_012f_4d5d_ba78_fe567ae73d59.slice/crio-ad2840930b4ead3ca965d7410576a788e5989b410405d4fe0606560dedbea69b WatchSource:0}: Error finding container ad2840930b4ead3ca965d7410576a788e5989b410405d4fe0606560dedbea69b: Status 404 returned error can't find the container with id ad2840930b4ead3ca965d7410576a788e5989b410405d4fe0606560dedbea69b Jan 27 09:15:49 crc kubenswrapper[4985]: I0127 09:15:49.605248 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ebebddf7-8341-4e17-a156-e251351db2fa","Type":"ContainerStarted","Data":"afc7a7ce6ebdb2f7c87c08e46965d37a383fd2a70a601b42eed51f890f7b28bb"} Jan 27 09:15:49 crc kubenswrapper[4985]: I0127 09:15:49.605634 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ebebddf7-8341-4e17-a156-e251351db2fa","Type":"ContainerStarted","Data":"44c379df23e3f4ddcb4c04449cd712d30d2a2b5c5e8103e24821dee04923c3ad"} Jan 27 09:15:49 crc kubenswrapper[4985]: I0127 09:15:49.606909 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"984c0b5b-012f-4d5d-ba78-fe567ae73d59","Type":"ContainerStarted","Data":"ad2840930b4ead3ca965d7410576a788e5989b410405d4fe0606560dedbea69b"} Jan 27 09:15:49 crc kubenswrapper[4985]: I0127 09:15:49.609700 4985 generic.go:334] "Generic (PLEG): container finished" podID="7bb7cf54-ef5f-41f6-b383-48b387842365" containerID="6a64930be053f4740ae023cf563d68a8fc417713508136c09939541b90d48145" exitCode=0 Jan 27 09:15:49 crc kubenswrapper[4985]: I0127 09:15:49.609780 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7bb7cf54-ef5f-41f6-b383-48b387842365","Type":"ContainerDied","Data":"6a64930be053f4740ae023cf563d68a8fc417713508136c09939541b90d48145"} Jan 27 09:15:49 crc kubenswrapper[4985]: I0127 09:15:49.609805 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7bb7cf54-ef5f-41f6-b383-48b387842365","Type":"ContainerDied","Data":"50fc992c07f5e46a017cf30bcf0a32ae01fb85ae9acc88ea02a16e5b8f88424d"} Jan 27 09:15:49 crc kubenswrapper[4985]: I0127 09:15:49.609825 4985 scope.go:117] "RemoveContainer" containerID="6a64930be053f4740ae023cf563d68a8fc417713508136c09939541b90d48145" Jan 27 09:15:49 crc kubenswrapper[4985]: I0127 09:15:49.610054 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 09:15:49 crc kubenswrapper[4985]: I0127 09:15:49.613863 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"116ef18b-261c-457e-a687-782db009b9de","Type":"ContainerDied","Data":"0c16384d03b12bbb9d1b60af0575da964e0ef6e0545dd550f97c32e271b98403"} Jan 27 09:15:49 crc kubenswrapper[4985]: I0127 09:15:49.614016 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 27 09:15:49 crc kubenswrapper[4985]: I0127 09:15:49.653274 4985 scope.go:117] "RemoveContainer" containerID="5095f9f2579bc30191d9f57462b678288c95db301c72ed04c7d7513266e9158f" Jan 27 09:15:49 crc kubenswrapper[4985]: I0127 09:15:49.664525 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 09:15:49 crc kubenswrapper[4985]: I0127 09:15:49.688664 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 09:15:49 crc kubenswrapper[4985]: I0127 09:15:49.700599 4985 scope.go:117] "RemoveContainer" containerID="6a64930be053f4740ae023cf563d68a8fc417713508136c09939541b90d48145" Jan 27 09:15:49 crc kubenswrapper[4985]: E0127 09:15:49.701128 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a64930be053f4740ae023cf563d68a8fc417713508136c09939541b90d48145\": container with ID starting with 6a64930be053f4740ae023cf563d68a8fc417713508136c09939541b90d48145 not found: ID does not exist" containerID="6a64930be053f4740ae023cf563d68a8fc417713508136c09939541b90d48145" Jan 27 09:15:49 crc kubenswrapper[4985]: I0127 09:15:49.701166 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a64930be053f4740ae023cf563d68a8fc417713508136c09939541b90d48145"} err="failed to get container status \"6a64930be053f4740ae023cf563d68a8fc417713508136c09939541b90d48145\": rpc error: code = NotFound desc = could not find container \"6a64930be053f4740ae023cf563d68a8fc417713508136c09939541b90d48145\": container with ID starting with 6a64930be053f4740ae023cf563d68a8fc417713508136c09939541b90d48145 not found: ID does not exist" Jan 27 09:15:49 crc kubenswrapper[4985]: I0127 09:15:49.701191 4985 scope.go:117] "RemoveContainer" containerID="5095f9f2579bc30191d9f57462b678288c95db301c72ed04c7d7513266e9158f" Jan 27 09:15:49 crc 
kubenswrapper[4985]: E0127 09:15:49.703144 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5095f9f2579bc30191d9f57462b678288c95db301c72ed04c7d7513266e9158f\": container with ID starting with 5095f9f2579bc30191d9f57462b678288c95db301c72ed04c7d7513266e9158f not found: ID does not exist" containerID="5095f9f2579bc30191d9f57462b678288c95db301c72ed04c7d7513266e9158f" Jan 27 09:15:49 crc kubenswrapper[4985]: I0127 09:15:49.703177 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5095f9f2579bc30191d9f57462b678288c95db301c72ed04c7d7513266e9158f"} err="failed to get container status \"5095f9f2579bc30191d9f57462b678288c95db301c72ed04c7d7513266e9158f\": rpc error: code = NotFound desc = could not find container \"5095f9f2579bc30191d9f57462b678288c95db301c72ed04c7d7513266e9158f\": container with ID starting with 5095f9f2579bc30191d9f57462b678288c95db301c72ed04c7d7513266e9158f not found: ID does not exist" Jan 27 09:15:49 crc kubenswrapper[4985]: I0127 09:15:49.703202 4985 scope.go:117] "RemoveContainer" containerID="d98f7170e8ba3c4bfdcae8597c61d70d73bca409cc420c41664686ac83d6e6b4" Jan 27 09:15:49 crc kubenswrapper[4985]: I0127 09:15:49.711348 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 27 09:15:49 crc kubenswrapper[4985]: I0127 09:15:49.725573 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 27 09:15:49 crc kubenswrapper[4985]: I0127 09:15:49.738795 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 27 09:15:49 crc kubenswrapper[4985]: E0127 09:15:49.739572 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bb7cf54-ef5f-41f6-b383-48b387842365" containerName="nova-metadata-metadata" Jan 27 09:15:49 crc kubenswrapper[4985]: I0127 09:15:49.739664 4985 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="7bb7cf54-ef5f-41f6-b383-48b387842365" containerName="nova-metadata-metadata" Jan 27 09:15:49 crc kubenswrapper[4985]: E0127 09:15:49.739760 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bb7cf54-ef5f-41f6-b383-48b387842365" containerName="nova-metadata-log" Jan 27 09:15:49 crc kubenswrapper[4985]: I0127 09:15:49.739861 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bb7cf54-ef5f-41f6-b383-48b387842365" containerName="nova-metadata-log" Jan 27 09:15:49 crc kubenswrapper[4985]: I0127 09:15:49.740199 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="7bb7cf54-ef5f-41f6-b383-48b387842365" containerName="nova-metadata-log" Jan 27 09:15:49 crc kubenswrapper[4985]: I0127 09:15:49.740294 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="7bb7cf54-ef5f-41f6-b383-48b387842365" containerName="nova-metadata-metadata" Jan 27 09:15:49 crc kubenswrapper[4985]: I0127 09:15:49.742592 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 09:15:49 crc kubenswrapper[4985]: I0127 09:15:49.747179 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 27 09:15:49 crc kubenswrapper[4985]: I0127 09:15:49.747261 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 27 09:15:49 crc kubenswrapper[4985]: I0127 09:15:49.753066 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 27 09:15:49 crc kubenswrapper[4985]: I0127 09:15:49.754908 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 27 09:15:49 crc kubenswrapper[4985]: I0127 09:15:49.760079 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 27 09:15:49 crc kubenswrapper[4985]: I0127 09:15:49.784301 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 09:15:49 crc kubenswrapper[4985]: I0127 09:15:49.798440 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 27 09:15:49 crc kubenswrapper[4985]: I0127 09:15:49.866374 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00c4b076-6757-4bdf-9ca1-ed8eff1f59c4-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"00c4b076-6757-4bdf-9ca1-ed8eff1f59c4\") " pod="openstack/nova-cell1-conductor-0" Jan 27 09:15:49 crc kubenswrapper[4985]: I0127 09:15:49.866759 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9433322-e6a7-4643-b9f9-87853d285a08-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b9433322-e6a7-4643-b9f9-87853d285a08\") " pod="openstack/nova-metadata-0" Jan 27 09:15:49 crc kubenswrapper[4985]: I0127 09:15:49.866822 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bb4sl\" (UniqueName: \"kubernetes.io/projected/00c4b076-6757-4bdf-9ca1-ed8eff1f59c4-kube-api-access-bb4sl\") pod \"nova-cell1-conductor-0\" (UID: \"00c4b076-6757-4bdf-9ca1-ed8eff1f59c4\") " pod="openstack/nova-cell1-conductor-0" Jan 27 09:15:49 crc kubenswrapper[4985]: I0127 09:15:49.866905 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b9433322-e6a7-4643-b9f9-87853d285a08-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b9433322-e6a7-4643-b9f9-87853d285a08\") " pod="openstack/nova-metadata-0" Jan 27 09:15:49 crc kubenswrapper[4985]: I0127 09:15:49.866958 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9433322-e6a7-4643-b9f9-87853d285a08-logs\") pod \"nova-metadata-0\" (UID: \"b9433322-e6a7-4643-b9f9-87853d285a08\") " pod="openstack/nova-metadata-0" Jan 27 09:15:49 crc kubenswrapper[4985]: I0127 09:15:49.866984 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00c4b076-6757-4bdf-9ca1-ed8eff1f59c4-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"00c4b076-6757-4bdf-9ca1-ed8eff1f59c4\") " pod="openstack/nova-cell1-conductor-0" Jan 27 09:15:49 crc kubenswrapper[4985]: I0127 09:15:49.867011 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcx88\" (UniqueName: \"kubernetes.io/projected/b9433322-e6a7-4643-b9f9-87853d285a08-kube-api-access-hcx88\") pod \"nova-metadata-0\" (UID: \"b9433322-e6a7-4643-b9f9-87853d285a08\") " pod="openstack/nova-metadata-0" Jan 27 09:15:49 crc kubenswrapper[4985]: I0127 09:15:49.867130 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9433322-e6a7-4643-b9f9-87853d285a08-config-data\") pod \"nova-metadata-0\" (UID: \"b9433322-e6a7-4643-b9f9-87853d285a08\") " pod="openstack/nova-metadata-0" Jan 27 09:15:49 crc kubenswrapper[4985]: I0127 09:15:49.868355 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 09:15:49 crc kubenswrapper[4985]: I0127 09:15:49.969145 4985 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9433322-e6a7-4643-b9f9-87853d285a08-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b9433322-e6a7-4643-b9f9-87853d285a08\") " pod="openstack/nova-metadata-0" Jan 27 09:15:49 crc kubenswrapper[4985]: I0127 09:15:49.969211 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9433322-e6a7-4643-b9f9-87853d285a08-logs\") pod \"nova-metadata-0\" (UID: \"b9433322-e6a7-4643-b9f9-87853d285a08\") " pod="openstack/nova-metadata-0" Jan 27 09:15:49 crc kubenswrapper[4985]: I0127 09:15:49.969240 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00c4b076-6757-4bdf-9ca1-ed8eff1f59c4-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"00c4b076-6757-4bdf-9ca1-ed8eff1f59c4\") " pod="openstack/nova-cell1-conductor-0" Jan 27 09:15:49 crc kubenswrapper[4985]: I0127 09:15:49.969266 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcx88\" (UniqueName: \"kubernetes.io/projected/b9433322-e6a7-4643-b9f9-87853d285a08-kube-api-access-hcx88\") pod \"nova-metadata-0\" (UID: \"b9433322-e6a7-4643-b9f9-87853d285a08\") " pod="openstack/nova-metadata-0" Jan 27 09:15:49 crc kubenswrapper[4985]: I0127 09:15:49.969317 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9433322-e6a7-4643-b9f9-87853d285a08-config-data\") pod \"nova-metadata-0\" (UID: \"b9433322-e6a7-4643-b9f9-87853d285a08\") " pod="openstack/nova-metadata-0" Jan 27 09:15:49 crc kubenswrapper[4985]: I0127 09:15:49.969353 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00c4b076-6757-4bdf-9ca1-ed8eff1f59c4-config-data\") pod \"nova-cell1-conductor-0\" (UID: 
\"00c4b076-6757-4bdf-9ca1-ed8eff1f59c4\") " pod="openstack/nova-cell1-conductor-0" Jan 27 09:15:49 crc kubenswrapper[4985]: I0127 09:15:49.969375 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9433322-e6a7-4643-b9f9-87853d285a08-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b9433322-e6a7-4643-b9f9-87853d285a08\") " pod="openstack/nova-metadata-0" Jan 27 09:15:49 crc kubenswrapper[4985]: I0127 09:15:49.969422 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bb4sl\" (UniqueName: \"kubernetes.io/projected/00c4b076-6757-4bdf-9ca1-ed8eff1f59c4-kube-api-access-bb4sl\") pod \"nova-cell1-conductor-0\" (UID: \"00c4b076-6757-4bdf-9ca1-ed8eff1f59c4\") " pod="openstack/nova-cell1-conductor-0" Jan 27 09:15:49 crc kubenswrapper[4985]: I0127 09:15:49.970865 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9433322-e6a7-4643-b9f9-87853d285a08-logs\") pod \"nova-metadata-0\" (UID: \"b9433322-e6a7-4643-b9f9-87853d285a08\") " pod="openstack/nova-metadata-0" Jan 27 09:15:49 crc kubenswrapper[4985]: I0127 09:15:49.982747 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9433322-e6a7-4643-b9f9-87853d285a08-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b9433322-e6a7-4643-b9f9-87853d285a08\") " pod="openstack/nova-metadata-0" Jan 27 09:15:49 crc kubenswrapper[4985]: I0127 09:15:49.983500 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9433322-e6a7-4643-b9f9-87853d285a08-config-data\") pod \"nova-metadata-0\" (UID: \"b9433322-e6a7-4643-b9f9-87853d285a08\") " pod="openstack/nova-metadata-0" Jan 27 09:15:49 crc kubenswrapper[4985]: I0127 09:15:49.983965 4985 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00c4b076-6757-4bdf-9ca1-ed8eff1f59c4-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"00c4b076-6757-4bdf-9ca1-ed8eff1f59c4\") " pod="openstack/nova-cell1-conductor-0" Jan 27 09:15:49 crc kubenswrapper[4985]: I0127 09:15:49.986071 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00c4b076-6757-4bdf-9ca1-ed8eff1f59c4-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"00c4b076-6757-4bdf-9ca1-ed8eff1f59c4\") " pod="openstack/nova-cell1-conductor-0" Jan 27 09:15:49 crc kubenswrapper[4985]: I0127 09:15:49.990042 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9433322-e6a7-4643-b9f9-87853d285a08-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b9433322-e6a7-4643-b9f9-87853d285a08\") " pod="openstack/nova-metadata-0" Jan 27 09:15:50 crc kubenswrapper[4985]: I0127 09:15:50.005808 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcx88\" (UniqueName: \"kubernetes.io/projected/b9433322-e6a7-4643-b9f9-87853d285a08-kube-api-access-hcx88\") pod \"nova-metadata-0\" (UID: \"b9433322-e6a7-4643-b9f9-87853d285a08\") " pod="openstack/nova-metadata-0" Jan 27 09:15:50 crc kubenswrapper[4985]: I0127 09:15:50.006927 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bb4sl\" (UniqueName: \"kubernetes.io/projected/00c4b076-6757-4bdf-9ca1-ed8eff1f59c4-kube-api-access-bb4sl\") pod \"nova-cell1-conductor-0\" (UID: \"00c4b076-6757-4bdf-9ca1-ed8eff1f59c4\") " pod="openstack/nova-cell1-conductor-0" Jan 27 09:15:50 crc kubenswrapper[4985]: I0127 09:15:50.092088 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 09:15:50 crc kubenswrapper[4985]: I0127 09:15:50.098823 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 27 09:15:50 crc kubenswrapper[4985]: I0127 09:15:50.121573 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 09:15:50 crc kubenswrapper[4985]: I0127 09:15:50.143150 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 09:15:50 crc kubenswrapper[4985]: I0127 09:15:50.143354 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="ac2e67dd-037d-4a4e-bd63-18c0f2a46096" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://dae13ef99a27ab4fd3b2d024cd425b68f982b6a96d54a7e63848bc340e1b3e47" gracePeriod=30 Jan 27 09:15:50 crc kubenswrapper[4985]: I0127 09:15:50.154117 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 09:15:50 crc kubenswrapper[4985]: I0127 09:15:50.177166 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 09:15:50 crc kubenswrapper[4985]: I0127 09:15:50.474057 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="116ef18b-261c-457e-a687-782db009b9de" path="/var/lib/kubelet/pods/116ef18b-261c-457e-a687-782db009b9de/volumes" Jan 27 09:15:50 crc kubenswrapper[4985]: I0127 09:15:50.475462 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb7cf54-ef5f-41f6-b383-48b387842365" path="/var/lib/kubelet/pods/7bb7cf54-ef5f-41f6-b383-48b387842365/volumes" Jan 27 09:15:50 crc kubenswrapper[4985]: I0127 09:15:50.476348 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e2b17df-10df-4a9d-b026-3bf7f1517776" path="/var/lib/kubelet/pods/7e2b17df-10df-4a9d-b026-3bf7f1517776/volumes" Jan 27 09:15:50 crc 
kubenswrapper[4985]: I0127 09:15:50.477347 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcf1c027-3aed-4213-b1ce-2f9fcab702aa" path="/var/lib/kubelet/pods/dcf1c027-3aed-4213-b1ce-2f9fcab702aa/volumes" Jan 27 09:15:50 crc kubenswrapper[4985]: I0127 09:15:50.619605 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 27 09:15:50 crc kubenswrapper[4985]: I0127 09:15:50.639915 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ebebddf7-8341-4e17-a156-e251351db2fa","Type":"ContainerStarted","Data":"f2f9903d4652031289c52fc6b9017cb07697ec5e58e7534f3b5f4a50699524ed"} Jan 27 09:15:50 crc kubenswrapper[4985]: I0127 09:15:50.653046 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"984c0b5b-012f-4d5d-ba78-fe567ae73d59","Type":"ContainerStarted","Data":"93f73fd8c55dad032f70c0523733a3ff3c9b797ea7c0b371f3417417e4b8a6ee"} Jan 27 09:15:50 crc kubenswrapper[4985]: I0127 09:15:50.653092 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"984c0b5b-012f-4d5d-ba78-fe567ae73d59","Type":"ContainerStarted","Data":"32f60c16a3b36c3d74a6d2ed188b91486608d0d6e261b8b95c7d78880f2bc35d"} Jan 27 09:15:50 crc kubenswrapper[4985]: I0127 09:15:50.653234 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="984c0b5b-012f-4d5d-ba78-fe567ae73d59" containerName="nova-api-log" containerID="cri-o://32f60c16a3b36c3d74a6d2ed188b91486608d0d6e261b8b95c7d78880f2bc35d" gracePeriod=30 Jan 27 09:15:50 crc kubenswrapper[4985]: I0127 09:15:50.653810 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="984c0b5b-012f-4d5d-ba78-fe567ae73d59" containerName="nova-api-api" containerID="cri-o://93f73fd8c55dad032f70c0523733a3ff3c9b797ea7c0b371f3417417e4b8a6ee" gracePeriod=30 Jan 27 09:15:50 crc 
kubenswrapper[4985]: I0127 09:15:50.663136 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"70822cc5-7296-435f-83fe-e69451683506","Type":"ContainerStarted","Data":"fb31ff67e536607ecfb581b140d0f211bc2922abf65bed281ca514b360e3a6c6"} Jan 27 09:15:50 crc kubenswrapper[4985]: I0127 09:15:50.663176 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"70822cc5-7296-435f-83fe-e69451683506","Type":"ContainerStarted","Data":"4524a43090f7d16054fa098e8abeb0b5f63a40a4c913a4a44be068ecfbf4309d"} Jan 27 09:15:50 crc kubenswrapper[4985]: I0127 09:15:50.663243 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="70822cc5-7296-435f-83fe-e69451683506" containerName="nova-scheduler-scheduler" containerID="cri-o://fb31ff67e536607ecfb581b140d0f211bc2922abf65bed281ca514b360e3a6c6" gracePeriod=30 Jan 27 09:15:50 crc kubenswrapper[4985]: I0127 09:15:50.701843 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.701821689 podStartE2EDuration="2.701821689s" podCreationTimestamp="2026-01-27 09:15:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:15:50.690674313 +0000 UTC m=+1334.981769174" watchObservedRunningTime="2026-01-27 09:15:50.701821689 +0000 UTC m=+1334.992916530" Jan 27 09:15:50 crc kubenswrapper[4985]: I0127 09:15:50.710764 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.7107416029999998 podStartE2EDuration="2.710741603s" podCreationTimestamp="2026-01-27 09:15:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:15:50.709178181 +0000 UTC m=+1335.000273032" 
watchObservedRunningTime="2026-01-27 09:15:50.710741603 +0000 UTC m=+1335.001836444" Jan 27 09:15:50 crc kubenswrapper[4985]: I0127 09:15:50.737686 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 27 09:15:50 crc kubenswrapper[4985]: I0127 09:15:50.781286 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 09:15:50 crc kubenswrapper[4985]: W0127 09:15:50.844795 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb9433322_e6a7_4643_b9f9_87853d285a08.slice/crio-79156917959ffa0b0216bd41b8b04454db016d063507b956be8d4592b97cc42b WatchSource:0}: Error finding container 79156917959ffa0b0216bd41b8b04454db016d063507b956be8d4592b97cc42b: Status 404 returned error can't find the container with id 79156917959ffa0b0216bd41b8b04454db016d063507b956be8d4592b97cc42b Jan 27 09:15:50 crc kubenswrapper[4985]: I0127 09:15:50.995423 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 27 09:15:51 crc kubenswrapper[4985]: I0127 09:15:51.103968 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7zc4q\" (UniqueName: \"kubernetes.io/projected/e004edc1-a270-47e3-a299-3f798588eb34-kube-api-access-7zc4q\") pod \"e004edc1-a270-47e3-a299-3f798588eb34\" (UID: \"e004edc1-a270-47e3-a299-3f798588eb34\") " Jan 27 09:15:51 crc kubenswrapper[4985]: I0127 09:15:51.105591 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e004edc1-a270-47e3-a299-3f798588eb34-combined-ca-bundle\") pod \"e004edc1-a270-47e3-a299-3f798588eb34\" (UID: \"e004edc1-a270-47e3-a299-3f798588eb34\") " Jan 27 09:15:51 crc kubenswrapper[4985]: I0127 09:15:51.106058 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e004edc1-a270-47e3-a299-3f798588eb34-config-data\") pod \"e004edc1-a270-47e3-a299-3f798588eb34\" (UID: \"e004edc1-a270-47e3-a299-3f798588eb34\") " Jan 27 09:15:51 crc kubenswrapper[4985]: I0127 09:15:51.109417 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e004edc1-a270-47e3-a299-3f798588eb34-kube-api-access-7zc4q" (OuterVolumeSpecName: "kube-api-access-7zc4q") pod "e004edc1-a270-47e3-a299-3f798588eb34" (UID: "e004edc1-a270-47e3-a299-3f798588eb34"). InnerVolumeSpecName "kube-api-access-7zc4q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:15:51 crc kubenswrapper[4985]: I0127 09:15:51.152640 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e004edc1-a270-47e3-a299-3f798588eb34-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e004edc1-a270-47e3-a299-3f798588eb34" (UID: "e004edc1-a270-47e3-a299-3f798588eb34"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:15:51 crc kubenswrapper[4985]: I0127 09:15:51.155432 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:15:51 crc kubenswrapper[4985]: I0127 09:15:51.155629 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e004edc1-a270-47e3-a299-3f798588eb34-config-data" (OuterVolumeSpecName: "config-data") pod "e004edc1-a270-47e3-a299-3f798588eb34" (UID: "e004edc1-a270-47e3-a299-3f798588eb34"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:15:51 crc kubenswrapper[4985]: I0127 09:15:51.211314 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7zc4q\" (UniqueName: \"kubernetes.io/projected/e004edc1-a270-47e3-a299-3f798588eb34-kube-api-access-7zc4q\") on node \"crc\" DevicePath \"\"" Jan 27 09:15:51 crc kubenswrapper[4985]: I0127 09:15:51.211357 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e004edc1-a270-47e3-a299-3f798588eb34-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:15:51 crc kubenswrapper[4985]: I0127 09:15:51.211370 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e004edc1-a270-47e3-a299-3f798588eb34-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 09:15:51 crc kubenswrapper[4985]: I0127 09:15:51.349483 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:15:51 crc kubenswrapper[4985]: I0127 09:15:51.422590 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac2e67dd-037d-4a4e-bd63-18c0f2a46096-vencrypt-tls-certs\") pod \"ac2e67dd-037d-4a4e-bd63-18c0f2a46096\" (UID: \"ac2e67dd-037d-4a4e-bd63-18c0f2a46096\") " Jan 27 09:15:51 crc kubenswrapper[4985]: I0127 09:15:51.422679 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac2e67dd-037d-4a4e-bd63-18c0f2a46096-combined-ca-bundle\") pod \"ac2e67dd-037d-4a4e-bd63-18c0f2a46096\" (UID: \"ac2e67dd-037d-4a4e-bd63-18c0f2a46096\") " Jan 27 09:15:51 crc kubenswrapper[4985]: I0127 09:15:51.422712 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac2e67dd-037d-4a4e-bd63-18c0f2a46096-config-data\") pod \"ac2e67dd-037d-4a4e-bd63-18c0f2a46096\" (UID: \"ac2e67dd-037d-4a4e-bd63-18c0f2a46096\") " Jan 27 09:15:51 crc kubenswrapper[4985]: I0127 09:15:51.422760 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac2e67dd-037d-4a4e-bd63-18c0f2a46096-nova-novncproxy-tls-certs\") pod \"ac2e67dd-037d-4a4e-bd63-18c0f2a46096\" (UID: \"ac2e67dd-037d-4a4e-bd63-18c0f2a46096\") " Jan 27 09:15:51 crc kubenswrapper[4985]: I0127 09:15:51.422792 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7772x\" (UniqueName: \"kubernetes.io/projected/ac2e67dd-037d-4a4e-bd63-18c0f2a46096-kube-api-access-7772x\") pod \"ac2e67dd-037d-4a4e-bd63-18c0f2a46096\" (UID: \"ac2e67dd-037d-4a4e-bd63-18c0f2a46096\") " Jan 27 09:15:51 crc kubenswrapper[4985]: I0127 09:15:51.440081 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/projected/ac2e67dd-037d-4a4e-bd63-18c0f2a46096-kube-api-access-7772x" (OuterVolumeSpecName: "kube-api-access-7772x") pod "ac2e67dd-037d-4a4e-bd63-18c0f2a46096" (UID: "ac2e67dd-037d-4a4e-bd63-18c0f2a46096"). InnerVolumeSpecName "kube-api-access-7772x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:15:51 crc kubenswrapper[4985]: I0127 09:15:51.501349 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac2e67dd-037d-4a4e-bd63-18c0f2a46096-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ac2e67dd-037d-4a4e-bd63-18c0f2a46096" (UID: "ac2e67dd-037d-4a4e-bd63-18c0f2a46096"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:15:51 crc kubenswrapper[4985]: I0127 09:15:51.505132 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac2e67dd-037d-4a4e-bd63-18c0f2a46096-config-data" (OuterVolumeSpecName: "config-data") pod "ac2e67dd-037d-4a4e-bd63-18c0f2a46096" (UID: "ac2e67dd-037d-4a4e-bd63-18c0f2a46096"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:15:51 crc kubenswrapper[4985]: I0127 09:15:51.527588 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac2e67dd-037d-4a4e-bd63-18c0f2a46096-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:15:51 crc kubenswrapper[4985]: I0127 09:15:51.527637 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac2e67dd-037d-4a4e-bd63-18c0f2a46096-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 09:15:51 crc kubenswrapper[4985]: I0127 09:15:51.527647 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7772x\" (UniqueName: \"kubernetes.io/projected/ac2e67dd-037d-4a4e-bd63-18c0f2a46096-kube-api-access-7772x\") on node \"crc\" DevicePath \"\"" Jan 27 09:15:51 crc kubenswrapper[4985]: I0127 09:15:51.545146 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac2e67dd-037d-4a4e-bd63-18c0f2a46096-vencrypt-tls-certs" (OuterVolumeSpecName: "vencrypt-tls-certs") pod "ac2e67dd-037d-4a4e-bd63-18c0f2a46096" (UID: "ac2e67dd-037d-4a4e-bd63-18c0f2a46096"). InnerVolumeSpecName "vencrypt-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:15:51 crc kubenswrapper[4985]: I0127 09:15:51.551706 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac2e67dd-037d-4a4e-bd63-18c0f2a46096-nova-novncproxy-tls-certs" (OuterVolumeSpecName: "nova-novncproxy-tls-certs") pod "ac2e67dd-037d-4a4e-bd63-18c0f2a46096" (UID: "ac2e67dd-037d-4a4e-bd63-18c0f2a46096"). InnerVolumeSpecName "nova-novncproxy-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:15:51 crc kubenswrapper[4985]: I0127 09:15:51.629253 4985 reconciler_common.go:293] "Volume detached for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac2e67dd-037d-4a4e-bd63-18c0f2a46096-vencrypt-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 09:15:51 crc kubenswrapper[4985]: I0127 09:15:51.629308 4985 reconciler_common.go:293] "Volume detached for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac2e67dd-037d-4a4e-bd63-18c0f2a46096-nova-novncproxy-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 09:15:51 crc kubenswrapper[4985]: I0127 09:15:51.681685 4985 generic.go:334] "Generic (PLEG): container finished" podID="984c0b5b-012f-4d5d-ba78-fe567ae73d59" containerID="32f60c16a3b36c3d74a6d2ed188b91486608d0d6e261b8b95c7d78880f2bc35d" exitCode=143 Jan 27 09:15:51 crc kubenswrapper[4985]: I0127 09:15:51.681747 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"984c0b5b-012f-4d5d-ba78-fe567ae73d59","Type":"ContainerDied","Data":"32f60c16a3b36c3d74a6d2ed188b91486608d0d6e261b8b95c7d78880f2bc35d"} Jan 27 09:15:51 crc kubenswrapper[4985]: I0127 09:15:51.683839 4985 generic.go:334] "Generic (PLEG): container finished" podID="e004edc1-a270-47e3-a299-3f798588eb34" containerID="f76d4cab022328560c37b13e931f450bb87fd57d67e2ba446fb15ea0f2b44e6f" exitCode=0 Jan 27 09:15:51 crc kubenswrapper[4985]: I0127 09:15:51.683884 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"e004edc1-a270-47e3-a299-3f798588eb34","Type":"ContainerDied","Data":"f76d4cab022328560c37b13e931f450bb87fd57d67e2ba446fb15ea0f2b44e6f"} Jan 27 09:15:51 crc kubenswrapper[4985]: I0127 09:15:51.683906 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" 
event={"ID":"e004edc1-a270-47e3-a299-3f798588eb34","Type":"ContainerDied","Data":"6a7ac416c60d96c5f43f6693f7b21dad275061881b7118b03cbbbb6de5653ca8"} Jan 27 09:15:51 crc kubenswrapper[4985]: I0127 09:15:51.683922 4985 scope.go:117] "RemoveContainer" containerID="f76d4cab022328560c37b13e931f450bb87fd57d67e2ba446fb15ea0f2b44e6f" Jan 27 09:15:51 crc kubenswrapper[4985]: I0127 09:15:51.684038 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 27 09:15:51 crc kubenswrapper[4985]: I0127 09:15:51.687456 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"00c4b076-6757-4bdf-9ca1-ed8eff1f59c4","Type":"ContainerStarted","Data":"b95ebcd14b890819db79ae2305cde657d2379edb8ef6a850d8205fd059dcaa8e"} Jan 27 09:15:51 crc kubenswrapper[4985]: I0127 09:15:51.687505 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"00c4b076-6757-4bdf-9ca1-ed8eff1f59c4","Type":"ContainerStarted","Data":"37df5109ae43d60f9cef07000cf86f90fd22f43882cb826a13cd6592435ceb2f"} Jan 27 09:15:51 crc kubenswrapper[4985]: I0127 09:15:51.687560 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="00c4b076-6757-4bdf-9ca1-ed8eff1f59c4" containerName="nova-cell1-conductor-conductor" containerID="cri-o://b95ebcd14b890819db79ae2305cde657d2379edb8ef6a850d8205fd059dcaa8e" gracePeriod=30 Jan 27 09:15:51 crc kubenswrapper[4985]: I0127 09:15:51.687580 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 27 09:15:51 crc kubenswrapper[4985]: I0127 09:15:51.690202 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b9433322-e6a7-4643-b9f9-87853d285a08","Type":"ContainerStarted","Data":"793c3194622ba9f315c3d47081946c5940cb6ca274437ed0f6cd9f5ffd06eeca"} Jan 27 09:15:51 crc 
kubenswrapper[4985]: I0127 09:15:51.690237 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b9433322-e6a7-4643-b9f9-87853d285a08","Type":"ContainerStarted","Data":"0765b4bc75572ea238220e8c8366b062f8a105a38565055e853b3ac0fd023e6a"} Jan 27 09:15:51 crc kubenswrapper[4985]: I0127 09:15:51.690248 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b9433322-e6a7-4643-b9f9-87853d285a08","Type":"ContainerStarted","Data":"79156917959ffa0b0216bd41b8b04454db016d063507b956be8d4592b97cc42b"} Jan 27 09:15:51 crc kubenswrapper[4985]: I0127 09:15:51.690342 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="b9433322-e6a7-4643-b9f9-87853d285a08" containerName="nova-metadata-log" containerID="cri-o://0765b4bc75572ea238220e8c8366b062f8a105a38565055e853b3ac0fd023e6a" gracePeriod=30 Jan 27 09:15:51 crc kubenswrapper[4985]: I0127 09:15:51.690424 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="b9433322-e6a7-4643-b9f9-87853d285a08" containerName="nova-metadata-metadata" containerID="cri-o://793c3194622ba9f315c3d47081946c5940cb6ca274437ed0f6cd9f5ffd06eeca" gracePeriod=30 Jan 27 09:15:51 crc kubenswrapper[4985]: I0127 09:15:51.705336 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ebebddf7-8341-4e17-a156-e251351db2fa","Type":"ContainerStarted","Data":"10dfd7a7e052d1e332cf27d9a085bbec5e4558c8cef8730feeb419124e7a03fc"} Jan 27 09:15:51 crc kubenswrapper[4985]: I0127 09:15:51.708276 4985 generic.go:334] "Generic (PLEG): container finished" podID="ac2e67dd-037d-4a4e-bd63-18c0f2a46096" containerID="dae13ef99a27ab4fd3b2d024cd425b68f982b6a96d54a7e63848bc340e1b3e47" exitCode=0 Jan 27 09:15:51 crc kubenswrapper[4985]: I0127 09:15:51.708309 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ac2e67dd-037d-4a4e-bd63-18c0f2a46096","Type":"ContainerDied","Data":"dae13ef99a27ab4fd3b2d024cd425b68f982b6a96d54a7e63848bc340e1b3e47"} Jan 27 09:15:51 crc kubenswrapper[4985]: I0127 09:15:51.708329 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ac2e67dd-037d-4a4e-bd63-18c0f2a46096","Type":"ContainerDied","Data":"595fcdd02c5084ce5ddc49264e4957fdfa63a5888184740afd98e2ed134a3cc3"} Jan 27 09:15:51 crc kubenswrapper[4985]: I0127 09:15:51.708404 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:15:51 crc kubenswrapper[4985]: I0127 09:15:51.718545 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.718521842 podStartE2EDuration="2.718521842s" podCreationTimestamp="2026-01-27 09:15:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:15:51.714221635 +0000 UTC m=+1336.005316476" watchObservedRunningTime="2026-01-27 09:15:51.718521842 +0000 UTC m=+1336.009616683" Jan 27 09:15:51 crc kubenswrapper[4985]: I0127 09:15:51.730783 4985 scope.go:117] "RemoveContainer" containerID="f76d4cab022328560c37b13e931f450bb87fd57d67e2ba446fb15ea0f2b44e6f" Jan 27 09:15:51 crc kubenswrapper[4985]: E0127 09:15:51.731500 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f76d4cab022328560c37b13e931f450bb87fd57d67e2ba446fb15ea0f2b44e6f\": container with ID starting with f76d4cab022328560c37b13e931f450bb87fd57d67e2ba446fb15ea0f2b44e6f not found: ID does not exist" containerID="f76d4cab022328560c37b13e931f450bb87fd57d67e2ba446fb15ea0f2b44e6f" Jan 27 09:15:51 crc kubenswrapper[4985]: I0127 09:15:51.731559 4985 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f76d4cab022328560c37b13e931f450bb87fd57d67e2ba446fb15ea0f2b44e6f"} err="failed to get container status \"f76d4cab022328560c37b13e931f450bb87fd57d67e2ba446fb15ea0f2b44e6f\": rpc error: code = NotFound desc = could not find container \"f76d4cab022328560c37b13e931f450bb87fd57d67e2ba446fb15ea0f2b44e6f\": container with ID starting with f76d4cab022328560c37b13e931f450bb87fd57d67e2ba446fb15ea0f2b44e6f not found: ID does not exist" Jan 27 09:15:51 crc kubenswrapper[4985]: I0127 09:15:51.731597 4985 scope.go:117] "RemoveContainer" containerID="dae13ef99a27ab4fd3b2d024cd425b68f982b6a96d54a7e63848bc340e1b3e47" Jan 27 09:15:51 crc kubenswrapper[4985]: I0127 09:15:51.792394 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.792365697 podStartE2EDuration="2.792365697s" podCreationTimestamp="2026-01-27 09:15:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:15:51.757030199 +0000 UTC m=+1336.048125050" watchObservedRunningTime="2026-01-27 09:15:51.792365697 +0000 UTC m=+1336.083460538" Jan 27 09:15:51 crc kubenswrapper[4985]: I0127 09:15:51.827899 4985 scope.go:117] "RemoveContainer" containerID="dae13ef99a27ab4fd3b2d024cd425b68f982b6a96d54a7e63848bc340e1b3e47" Jan 27 09:15:51 crc kubenswrapper[4985]: E0127 09:15:51.833430 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dae13ef99a27ab4fd3b2d024cd425b68f982b6a96d54a7e63848bc340e1b3e47\": container with ID starting with dae13ef99a27ab4fd3b2d024cd425b68f982b6a96d54a7e63848bc340e1b3e47 not found: ID does not exist" containerID="dae13ef99a27ab4fd3b2d024cd425b68f982b6a96d54a7e63848bc340e1b3e47" Jan 27 09:15:51 crc kubenswrapper[4985]: I0127 09:15:51.833598 4985 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"dae13ef99a27ab4fd3b2d024cd425b68f982b6a96d54a7e63848bc340e1b3e47"} err="failed to get container status \"dae13ef99a27ab4fd3b2d024cd425b68f982b6a96d54a7e63848bc340e1b3e47\": rpc error: code = NotFound desc = could not find container \"dae13ef99a27ab4fd3b2d024cd425b68f982b6a96d54a7e63848bc340e1b3e47\": container with ID starting with dae13ef99a27ab4fd3b2d024cd425b68f982b6a96d54a7e63848bc340e1b3e47 not found: ID does not exist" Jan 27 09:15:51 crc kubenswrapper[4985]: I0127 09:15:51.839609 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 27 09:15:51 crc kubenswrapper[4985]: I0127 09:15:51.870631 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 27 09:15:51 crc kubenswrapper[4985]: I0127 09:15:51.887169 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 27 09:15:51 crc kubenswrapper[4985]: E0127 09:15:51.887942 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac2e67dd-037d-4a4e-bd63-18c0f2a46096" containerName="nova-cell1-novncproxy-novncproxy" Jan 27 09:15:51 crc kubenswrapper[4985]: I0127 09:15:51.887972 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac2e67dd-037d-4a4e-bd63-18c0f2a46096" containerName="nova-cell1-novncproxy-novncproxy" Jan 27 09:15:51 crc kubenswrapper[4985]: E0127 09:15:51.888000 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e004edc1-a270-47e3-a299-3f798588eb34" containerName="nova-cell0-conductor-conductor" Jan 27 09:15:51 crc kubenswrapper[4985]: I0127 09:15:51.888009 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="e004edc1-a270-47e3-a299-3f798588eb34" containerName="nova-cell0-conductor-conductor" Jan 27 09:15:51 crc kubenswrapper[4985]: I0127 09:15:51.888292 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="e004edc1-a270-47e3-a299-3f798588eb34" 
containerName="nova-cell0-conductor-conductor" Jan 27 09:15:51 crc kubenswrapper[4985]: I0127 09:15:51.888316 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac2e67dd-037d-4a4e-bd63-18c0f2a46096" containerName="nova-cell1-novncproxy-novncproxy" Jan 27 09:15:51 crc kubenswrapper[4985]: I0127 09:15:51.889398 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 27 09:15:51 crc kubenswrapper[4985]: I0127 09:15:51.893932 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 27 09:15:51 crc kubenswrapper[4985]: I0127 09:15:51.894650 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 09:15:51 crc kubenswrapper[4985]: I0127 09:15:51.910856 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 09:15:51 crc kubenswrapper[4985]: I0127 09:15:51.934282 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 27 09:15:51 crc kubenswrapper[4985]: I0127 09:15:51.937291 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jbk7\" (UniqueName: \"kubernetes.io/projected/9b7800ec-036a-4b19-98de-4404f3c7fbcc-kube-api-access-2jbk7\") pod \"nova-cell0-conductor-0\" (UID: \"9b7800ec-036a-4b19-98de-4404f3c7fbcc\") " pod="openstack/nova-cell0-conductor-0" Jan 27 09:15:51 crc kubenswrapper[4985]: I0127 09:15:51.937365 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b7800ec-036a-4b19-98de-4404f3c7fbcc-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"9b7800ec-036a-4b19-98de-4404f3c7fbcc\") " pod="openstack/nova-cell0-conductor-0" Jan 27 09:15:51 crc kubenswrapper[4985]: I0127 09:15:51.937483 4985 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b7800ec-036a-4b19-98de-4404f3c7fbcc-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"9b7800ec-036a-4b19-98de-4404f3c7fbcc\") " pod="openstack/nova-cell0-conductor-0" Jan 27 09:15:51 crc kubenswrapper[4985]: I0127 09:15:51.945787 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 09:15:51 crc kubenswrapper[4985]: I0127 09:15:51.947046 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:15:51 crc kubenswrapper[4985]: I0127 09:15:51.949822 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 27 09:15:51 crc kubenswrapper[4985]: I0127 09:15:51.950885 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 27 09:15:51 crc kubenswrapper[4985]: I0127 09:15:51.956353 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 27 09:15:51 crc kubenswrapper[4985]: I0127 09:15:51.957665 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 09:15:52 crc kubenswrapper[4985]: I0127 09:15:52.040399 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5txbl\" (UniqueName: \"kubernetes.io/projected/d7109e8e-aed9-4dbf-9746-46e772bb7979-kube-api-access-5txbl\") pod \"nova-cell1-novncproxy-0\" (UID: \"d7109e8e-aed9-4dbf-9746-46e772bb7979\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:15:52 crc kubenswrapper[4985]: I0127 09:15:52.040908 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d7109e8e-aed9-4dbf-9746-46e772bb7979-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"d7109e8e-aed9-4dbf-9746-46e772bb7979\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:15:52 crc kubenswrapper[4985]: I0127 09:15:52.040946 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b7800ec-036a-4b19-98de-4404f3c7fbcc-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"9b7800ec-036a-4b19-98de-4404f3c7fbcc\") " pod="openstack/nova-cell0-conductor-0" Jan 27 09:15:52 crc kubenswrapper[4985]: I0127 09:15:52.041028 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7109e8e-aed9-4dbf-9746-46e772bb7979-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"d7109e8e-aed9-4dbf-9746-46e772bb7979\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:15:52 crc kubenswrapper[4985]: I0127 09:15:52.041049 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7109e8e-aed9-4dbf-9746-46e772bb7979-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"d7109e8e-aed9-4dbf-9746-46e772bb7979\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:15:52 crc kubenswrapper[4985]: I0127 09:15:52.041079 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jbk7\" (UniqueName: \"kubernetes.io/projected/9b7800ec-036a-4b19-98de-4404f3c7fbcc-kube-api-access-2jbk7\") pod \"nova-cell0-conductor-0\" (UID: \"9b7800ec-036a-4b19-98de-4404f3c7fbcc\") " pod="openstack/nova-cell0-conductor-0" Jan 27 09:15:52 crc kubenswrapper[4985]: I0127 09:15:52.041109 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d7109e8e-aed9-4dbf-9746-46e772bb7979-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"d7109e8e-aed9-4dbf-9746-46e772bb7979\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:15:52 crc kubenswrapper[4985]: I0127 09:15:52.041135 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b7800ec-036a-4b19-98de-4404f3c7fbcc-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"9b7800ec-036a-4b19-98de-4404f3c7fbcc\") " pod="openstack/nova-cell0-conductor-0" Jan 27 09:15:52 crc kubenswrapper[4985]: I0127 09:15:52.046674 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b7800ec-036a-4b19-98de-4404f3c7fbcc-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"9b7800ec-036a-4b19-98de-4404f3c7fbcc\") " pod="openstack/nova-cell0-conductor-0" Jan 27 09:15:52 crc kubenswrapper[4985]: I0127 09:15:52.052522 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b7800ec-036a-4b19-98de-4404f3c7fbcc-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"9b7800ec-036a-4b19-98de-4404f3c7fbcc\") " pod="openstack/nova-cell0-conductor-0" Jan 27 09:15:52 crc kubenswrapper[4985]: I0127 09:15:52.059964 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jbk7\" (UniqueName: \"kubernetes.io/projected/9b7800ec-036a-4b19-98de-4404f3c7fbcc-kube-api-access-2jbk7\") pod \"nova-cell0-conductor-0\" (UID: \"9b7800ec-036a-4b19-98de-4404f3c7fbcc\") " pod="openstack/nova-cell0-conductor-0" Jan 27 09:15:52 crc kubenswrapper[4985]: I0127 09:15:52.142545 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7109e8e-aed9-4dbf-9746-46e772bb7979-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"d7109e8e-aed9-4dbf-9746-46e772bb7979\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:15:52 crc kubenswrapper[4985]: I0127 09:15:52.142609 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7109e8e-aed9-4dbf-9746-46e772bb7979-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"d7109e8e-aed9-4dbf-9746-46e772bb7979\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:15:52 crc kubenswrapper[4985]: I0127 09:15:52.142650 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7109e8e-aed9-4dbf-9746-46e772bb7979-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"d7109e8e-aed9-4dbf-9746-46e772bb7979\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:15:52 crc kubenswrapper[4985]: I0127 09:15:52.142769 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5txbl\" (UniqueName: \"kubernetes.io/projected/d7109e8e-aed9-4dbf-9746-46e772bb7979-kube-api-access-5txbl\") pod \"nova-cell1-novncproxy-0\" (UID: \"d7109e8e-aed9-4dbf-9746-46e772bb7979\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:15:52 crc kubenswrapper[4985]: I0127 09:15:52.142805 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7109e8e-aed9-4dbf-9746-46e772bb7979-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"d7109e8e-aed9-4dbf-9746-46e772bb7979\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:15:52 crc kubenswrapper[4985]: I0127 09:15:52.148250 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7109e8e-aed9-4dbf-9746-46e772bb7979-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"d7109e8e-aed9-4dbf-9746-46e772bb7979\") " 
pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:15:52 crc kubenswrapper[4985]: I0127 09:15:52.149061 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7109e8e-aed9-4dbf-9746-46e772bb7979-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"d7109e8e-aed9-4dbf-9746-46e772bb7979\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:15:52 crc kubenswrapper[4985]: I0127 09:15:52.151707 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7109e8e-aed9-4dbf-9746-46e772bb7979-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"d7109e8e-aed9-4dbf-9746-46e772bb7979\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:15:52 crc kubenswrapper[4985]: I0127 09:15:52.155194 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7109e8e-aed9-4dbf-9746-46e772bb7979-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"d7109e8e-aed9-4dbf-9746-46e772bb7979\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:15:52 crc kubenswrapper[4985]: I0127 09:15:52.171111 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5txbl\" (UniqueName: \"kubernetes.io/projected/d7109e8e-aed9-4dbf-9746-46e772bb7979-kube-api-access-5txbl\") pod \"nova-cell1-novncproxy-0\" (UID: \"d7109e8e-aed9-4dbf-9746-46e772bb7979\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:15:52 crc kubenswrapper[4985]: I0127 09:15:52.217686 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 27 09:15:52 crc kubenswrapper[4985]: I0127 09:15:52.270129 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:15:52 crc kubenswrapper[4985]: I0127 09:15:52.470692 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac2e67dd-037d-4a4e-bd63-18c0f2a46096" path="/var/lib/kubelet/pods/ac2e67dd-037d-4a4e-bd63-18c0f2a46096/volumes" Jan 27 09:15:52 crc kubenswrapper[4985]: I0127 09:15:52.472352 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e004edc1-a270-47e3-a299-3f798588eb34" path="/var/lib/kubelet/pods/e004edc1-a270-47e3-a299-3f798588eb34/volumes" Jan 27 09:15:52 crc kubenswrapper[4985]: I0127 09:15:52.731732 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 27 09:15:52 crc kubenswrapper[4985]: I0127 09:15:52.740579 4985 generic.go:334] "Generic (PLEG): container finished" podID="b9433322-e6a7-4643-b9f9-87853d285a08" containerID="0765b4bc75572ea238220e8c8366b062f8a105a38565055e853b3ac0fd023e6a" exitCode=143 Jan 27 09:15:52 crc kubenswrapper[4985]: I0127 09:15:52.740643 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b9433322-e6a7-4643-b9f9-87853d285a08","Type":"ContainerDied","Data":"0765b4bc75572ea238220e8c8366b062f8a105a38565055e853b3ac0fd023e6a"} Jan 27 09:15:52 crc kubenswrapper[4985]: W0127 09:15:52.743620 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9b7800ec_036a_4b19_98de_4404f3c7fbcc.slice/crio-618f727be882c1cf72715f63a1e78c972c2301748126d91e01c958cabf0c54a4 WatchSource:0}: Error finding container 618f727be882c1cf72715f63a1e78c972c2301748126d91e01c958cabf0c54a4: Status 404 returned error can't find the container with id 618f727be882c1cf72715f63a1e78c972c2301748126d91e01c958cabf0c54a4 Jan 27 09:15:52 crc kubenswrapper[4985]: I0127 09:15:52.843861 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 
27 09:15:52 crc kubenswrapper[4985]: I0127 09:15:52.876411 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 09:15:53 crc kubenswrapper[4985]: I0127 09:15:53.757496 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ebebddf7-8341-4e17-a156-e251351db2fa","Type":"ContainerStarted","Data":"aed87cdd5f62fe7b992d35fb77327c9407eb02e7ab2bdbb915f86020cc6a4907"} Jan 27 09:15:53 crc kubenswrapper[4985]: I0127 09:15:53.759274 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 09:15:53 crc kubenswrapper[4985]: I0127 09:15:53.761144 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"9b7800ec-036a-4b19-98de-4404f3c7fbcc","Type":"ContainerStarted","Data":"4e20c057ecfc7eeede90621b0a961cd1fc2940e5d7fc34c618bc462a82cf9a6e"} Jan 27 09:15:53 crc kubenswrapper[4985]: I0127 09:15:53.761184 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"9b7800ec-036a-4b19-98de-4404f3c7fbcc","Type":"ContainerStarted","Data":"618f727be882c1cf72715f63a1e78c972c2301748126d91e01c958cabf0c54a4"} Jan 27 09:15:53 crc kubenswrapper[4985]: I0127 09:15:53.761982 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 27 09:15:53 crc kubenswrapper[4985]: I0127 09:15:53.766213 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d7109e8e-aed9-4dbf-9746-46e772bb7979","Type":"ContainerStarted","Data":"e10e49215352f0f08655335f3820146e8ec2663f3d254b826a43a65fca23061b"} Jan 27 09:15:53 crc kubenswrapper[4985]: I0127 09:15:53.766264 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"d7109e8e-aed9-4dbf-9746-46e772bb7979","Type":"ContainerStarted","Data":"6c5e5a3537feb13013031d304fb12a07c53a55a548fa24ed859014b10e251ea3"} Jan 27 09:15:53 crc kubenswrapper[4985]: I0127 09:15:53.809467 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.959660294 podStartE2EDuration="6.809449247s" podCreationTimestamp="2026-01-27 09:15:47 +0000 UTC" firstStartedPulling="2026-01-27 09:15:48.763224511 +0000 UTC m=+1333.054319342" lastFinishedPulling="2026-01-27 09:15:52.613013454 +0000 UTC m=+1336.904108295" observedRunningTime="2026-01-27 09:15:53.785072819 +0000 UTC m=+1338.076167680" watchObservedRunningTime="2026-01-27 09:15:53.809449247 +0000 UTC m=+1338.100544088" Jan 27 09:15:53 crc kubenswrapper[4985]: I0127 09:15:53.812632 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.812620105 podStartE2EDuration="2.812620105s" podCreationTimestamp="2026-01-27 09:15:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:15:53.805329124 +0000 UTC m=+1338.096423975" watchObservedRunningTime="2026-01-27 09:15:53.812620105 +0000 UTC m=+1338.103714936" Jan 27 09:15:53 crc kubenswrapper[4985]: I0127 09:15:53.823266 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.823244345 podStartE2EDuration="2.823244345s" podCreationTimestamp="2026-01-27 09:15:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:15:53.818834915 +0000 UTC m=+1338.109929756" watchObservedRunningTime="2026-01-27 09:15:53.823244345 +0000 UTC m=+1338.114339186" Jan 27 09:15:54 crc kubenswrapper[4985]: I0127 09:15:54.288829 4985 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 27 09:15:55 crc kubenswrapper[4985]: I0127 09:15:55.093670 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 09:15:55 crc kubenswrapper[4985]: I0127 09:15:55.093953 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 09:15:57 crc kubenswrapper[4985]: I0127 09:15:57.256646 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 27 09:15:57 crc kubenswrapper[4985]: I0127 09:15:57.270298 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:16:00 crc kubenswrapper[4985]: E0127 09:16:00.102950 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b95ebcd14b890819db79ae2305cde657d2379edb8ef6a850d8205fd059dcaa8e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 27 09:16:00 crc kubenswrapper[4985]: E0127 09:16:00.106465 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b95ebcd14b890819db79ae2305cde657d2379edb8ef6a850d8205fd059dcaa8e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 27 09:16:00 crc kubenswrapper[4985]: E0127 09:16:00.108335 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b95ebcd14b890819db79ae2305cde657d2379edb8ef6a850d8205fd059dcaa8e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 27 09:16:00 crc kubenswrapper[4985]: E0127 09:16:00.108376 4985 prober.go:104] 
"Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="00c4b076-6757-4bdf-9ca1-ed8eff1f59c4" containerName="nova-cell1-conductor-conductor" Jan 27 09:16:02 crc kubenswrapper[4985]: I0127 09:16:02.270975 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:16:02 crc kubenswrapper[4985]: I0127 09:16:02.293866 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:16:02 crc kubenswrapper[4985]: I0127 09:16:02.904405 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:16:05 crc kubenswrapper[4985]: E0127 09:16:05.101648 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b95ebcd14b890819db79ae2305cde657d2379edb8ef6a850d8205fd059dcaa8e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 27 09:16:05 crc kubenswrapper[4985]: E0127 09:16:05.105289 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b95ebcd14b890819db79ae2305cde657d2379edb8ef6a850d8205fd059dcaa8e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 27 09:16:05 crc kubenswrapper[4985]: E0127 09:16:05.107494 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b95ebcd14b890819db79ae2305cde657d2379edb8ef6a850d8205fd059dcaa8e" 
cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 27 09:16:05 crc kubenswrapper[4985]: E0127 09:16:05.107575 4985 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="00c4b076-6757-4bdf-9ca1-ed8eff1f59c4" containerName="nova-cell1-conductor-conductor" Jan 27 09:16:10 crc kubenswrapper[4985]: E0127 09:16:10.102186 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b95ebcd14b890819db79ae2305cde657d2379edb8ef6a850d8205fd059dcaa8e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 27 09:16:10 crc kubenswrapper[4985]: E0127 09:16:10.104994 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b95ebcd14b890819db79ae2305cde657d2379edb8ef6a850d8205fd059dcaa8e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 27 09:16:10 crc kubenswrapper[4985]: E0127 09:16:10.106448 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b95ebcd14b890819db79ae2305cde657d2379edb8ef6a850d8205fd059dcaa8e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 27 09:16:10 crc kubenswrapper[4985]: E0127 09:16:10.106482 4985 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="00c4b076-6757-4bdf-9ca1-ed8eff1f59c4" 
containerName="nova-cell1-conductor-conductor" Jan 27 09:16:15 crc kubenswrapper[4985]: E0127 09:16:15.101578 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b95ebcd14b890819db79ae2305cde657d2379edb8ef6a850d8205fd059dcaa8e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 27 09:16:15 crc kubenswrapper[4985]: E0127 09:16:15.103989 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b95ebcd14b890819db79ae2305cde657d2379edb8ef6a850d8205fd059dcaa8e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 27 09:16:15 crc kubenswrapper[4985]: E0127 09:16:15.105387 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b95ebcd14b890819db79ae2305cde657d2379edb8ef6a850d8205fd059dcaa8e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 27 09:16:15 crc kubenswrapper[4985]: E0127 09:16:15.105428 4985 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="00c4b076-6757-4bdf-9ca1-ed8eff1f59c4" containerName="nova-cell1-conductor-conductor" Jan 27 09:16:18 crc kubenswrapper[4985]: I0127 09:16:18.098293 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 27 09:16:18 crc kubenswrapper[4985]: I0127 09:16:18.995521 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 27 09:16:18 crc kubenswrapper[4985]: I0127 
09:16:18.995992 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 27 09:16:20 crc kubenswrapper[4985]: E0127 09:16:20.107468 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b95ebcd14b890819db79ae2305cde657d2379edb8ef6a850d8205fd059dcaa8e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 27 09:16:20 crc kubenswrapper[4985]: E0127 09:16:20.110028 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b95ebcd14b890819db79ae2305cde657d2379edb8ef6a850d8205fd059dcaa8e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 27 09:16:20 crc kubenswrapper[4985]: E0127 09:16:20.123920 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b95ebcd14b890819db79ae2305cde657d2379edb8ef6a850d8205fd059dcaa8e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 27 09:16:20 crc kubenswrapper[4985]: E0127 09:16:20.124003 4985 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="00c4b076-6757-4bdf-9ca1-ed8eff1f59c4" containerName="nova-cell1-conductor-conductor" Jan 27 09:16:24 crc kubenswrapper[4985]: I0127 09:16:24.091037 4985 generic.go:334] "Generic (PLEG): container finished" podID="00c4b076-6757-4bdf-9ca1-ed8eff1f59c4" containerID="b95ebcd14b890819db79ae2305cde657d2379edb8ef6a850d8205fd059dcaa8e" exitCode=137 Jan 27 09:16:24 crc kubenswrapper[4985]: I0127 
09:16:24.091124 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"00c4b076-6757-4bdf-9ca1-ed8eff1f59c4","Type":"ContainerDied","Data":"b95ebcd14b890819db79ae2305cde657d2379edb8ef6a850d8205fd059dcaa8e"} Jan 27 09:16:24 crc kubenswrapper[4985]: I0127 09:16:24.094861 4985 generic.go:334] "Generic (PLEG): container finished" podID="b9433322-e6a7-4643-b9f9-87853d285a08" containerID="793c3194622ba9f315c3d47081946c5940cb6ca274437ed0f6cd9f5ffd06eeca" exitCode=137 Jan 27 09:16:24 crc kubenswrapper[4985]: I0127 09:16:24.094926 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b9433322-e6a7-4643-b9f9-87853d285a08","Type":"ContainerDied","Data":"793c3194622ba9f315c3d47081946c5940cb6ca274437ed0f6cd9f5ffd06eeca"} Jan 27 09:16:24 crc kubenswrapper[4985]: I0127 09:16:24.096947 4985 generic.go:334] "Generic (PLEG): container finished" podID="984c0b5b-012f-4d5d-ba78-fe567ae73d59" containerID="93f73fd8c55dad032f70c0523733a3ff3c9b797ea7c0b371f3417417e4b8a6ee" exitCode=137 Jan 27 09:16:24 crc kubenswrapper[4985]: I0127 09:16:24.097003 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"984c0b5b-012f-4d5d-ba78-fe567ae73d59","Type":"ContainerDied","Data":"93f73fd8c55dad032f70c0523733a3ff3c9b797ea7c0b371f3417417e4b8a6ee"} Jan 27 09:16:24 crc kubenswrapper[4985]: I0127 09:16:24.098630 4985 generic.go:334] "Generic (PLEG): container finished" podID="70822cc5-7296-435f-83fe-e69451683506" containerID="fb31ff67e536607ecfb581b140d0f211bc2922abf65bed281ca514b360e3a6c6" exitCode=137 Jan 27 09:16:24 crc kubenswrapper[4985]: I0127 09:16:24.098672 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"70822cc5-7296-435f-83fe-e69451683506","Type":"ContainerDied","Data":"fb31ff67e536607ecfb581b140d0f211bc2922abf65bed281ca514b360e3a6c6"} Jan 27 09:16:24 crc kubenswrapper[4985]: I0127 09:16:24.906979 4985 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 09:16:24 crc kubenswrapper[4985]: I0127 09:16:24.961317 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hcx88\" (UniqueName: \"kubernetes.io/projected/b9433322-e6a7-4643-b9f9-87853d285a08-kube-api-access-hcx88\") pod \"b9433322-e6a7-4643-b9f9-87853d285a08\" (UID: \"b9433322-e6a7-4643-b9f9-87853d285a08\") " Jan 27 09:16:24 crc kubenswrapper[4985]: I0127 09:16:24.961474 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9433322-e6a7-4643-b9f9-87853d285a08-logs\") pod \"b9433322-e6a7-4643-b9f9-87853d285a08\" (UID: \"b9433322-e6a7-4643-b9f9-87853d285a08\") " Jan 27 09:16:24 crc kubenswrapper[4985]: I0127 09:16:24.961563 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9433322-e6a7-4643-b9f9-87853d285a08-nova-metadata-tls-certs\") pod \"b9433322-e6a7-4643-b9f9-87853d285a08\" (UID: \"b9433322-e6a7-4643-b9f9-87853d285a08\") " Jan 27 09:16:24 crc kubenswrapper[4985]: I0127 09:16:24.961625 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9433322-e6a7-4643-b9f9-87853d285a08-combined-ca-bundle\") pod \"b9433322-e6a7-4643-b9f9-87853d285a08\" (UID: \"b9433322-e6a7-4643-b9f9-87853d285a08\") " Jan 27 09:16:24 crc kubenswrapper[4985]: I0127 09:16:24.961688 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9433322-e6a7-4643-b9f9-87853d285a08-config-data\") pod \"b9433322-e6a7-4643-b9f9-87853d285a08\" (UID: \"b9433322-e6a7-4643-b9f9-87853d285a08\") " Jan 27 09:16:24 crc kubenswrapper[4985]: I0127 09:16:24.962618 4985 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9433322-e6a7-4643-b9f9-87853d285a08-logs" (OuterVolumeSpecName: "logs") pod "b9433322-e6a7-4643-b9f9-87853d285a08" (UID: "b9433322-e6a7-4643-b9f9-87853d285a08"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:16:24 crc kubenswrapper[4985]: I0127 09:16:24.969872 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9433322-e6a7-4643-b9f9-87853d285a08-kube-api-access-hcx88" (OuterVolumeSpecName: "kube-api-access-hcx88") pod "b9433322-e6a7-4643-b9f9-87853d285a08" (UID: "b9433322-e6a7-4643-b9f9-87853d285a08"). InnerVolumeSpecName "kube-api-access-hcx88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:16:24 crc kubenswrapper[4985]: I0127 09:16:24.997218 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9433322-e6a7-4643-b9f9-87853d285a08-config-data" (OuterVolumeSpecName: "config-data") pod "b9433322-e6a7-4643-b9f9-87853d285a08" (UID: "b9433322-e6a7-4643-b9f9-87853d285a08"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:16:24 crc kubenswrapper[4985]: I0127 09:16:24.998065 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9433322-e6a7-4643-b9f9-87853d285a08-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b9433322-e6a7-4643-b9f9-87853d285a08" (UID: "b9433322-e6a7-4643-b9f9-87853d285a08"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.029864 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9433322-e6a7-4643-b9f9-87853d285a08-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "b9433322-e6a7-4643-b9f9-87853d285a08" (UID: "b9433322-e6a7-4643-b9f9-87853d285a08"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.066838 4985 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9433322-e6a7-4643-b9f9-87853d285a08-logs\") on node \"crc\" DevicePath \"\"" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.066878 4985 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9433322-e6a7-4643-b9f9-87853d285a08-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.066893 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9433322-e6a7-4643-b9f9-87853d285a08-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.066903 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9433322-e6a7-4643-b9f9-87853d285a08-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.066915 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hcx88\" (UniqueName: \"kubernetes.io/projected/b9433322-e6a7-4643-b9f9-87853d285a08-kube-api-access-hcx88\") on node \"crc\" DevicePath \"\"" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.081779 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.088001 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.095123 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.112498 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.112475 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"70822cc5-7296-435f-83fe-e69451683506","Type":"ContainerDied","Data":"4524a43090f7d16054fa098e8abeb0b5f63a40a4c913a4a44be068ecfbf4309d"} Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.113212 4985 scope.go:117] "RemoveContainer" containerID="fb31ff67e536607ecfb581b140d0f211bc2922abf65bed281ca514b360e3a6c6" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.125818 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"00c4b076-6757-4bdf-9ca1-ed8eff1f59c4","Type":"ContainerDied","Data":"37df5109ae43d60f9cef07000cf86f90fd22f43882cb826a13cd6592435ceb2f"} Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.126164 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.161294 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b9433322-e6a7-4643-b9f9-87853d285a08","Type":"ContainerDied","Data":"79156917959ffa0b0216bd41b8b04454db016d063507b956be8d4592b97cc42b"} Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.163121 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.168840 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rl5x7\" (UniqueName: \"kubernetes.io/projected/70822cc5-7296-435f-83fe-e69451683506-kube-api-access-rl5x7\") pod \"70822cc5-7296-435f-83fe-e69451683506\" (UID: \"70822cc5-7296-435f-83fe-e69451683506\") " Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.168904 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bb4sl\" (UniqueName: \"kubernetes.io/projected/00c4b076-6757-4bdf-9ca1-ed8eff1f59c4-kube-api-access-bb4sl\") pod \"00c4b076-6757-4bdf-9ca1-ed8eff1f59c4\" (UID: \"00c4b076-6757-4bdf-9ca1-ed8eff1f59c4\") " Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.169016 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/984c0b5b-012f-4d5d-ba78-fe567ae73d59-config-data\") pod \"984c0b5b-012f-4d5d-ba78-fe567ae73d59\" (UID: \"984c0b5b-012f-4d5d-ba78-fe567ae73d59\") " Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.169065 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/984c0b5b-012f-4d5d-ba78-fe567ae73d59-public-tls-certs\") pod \"984c0b5b-012f-4d5d-ba78-fe567ae73d59\" (UID: \"984c0b5b-012f-4d5d-ba78-fe567ae73d59\") " Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.169122 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/984c0b5b-012f-4d5d-ba78-fe567ae73d59-logs\") pod \"984c0b5b-012f-4d5d-ba78-fe567ae73d59\" (UID: \"984c0b5b-012f-4d5d-ba78-fe567ae73d59\") " Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.169159 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/70822cc5-7296-435f-83fe-e69451683506-combined-ca-bundle\") pod \"70822cc5-7296-435f-83fe-e69451683506\" (UID: \"70822cc5-7296-435f-83fe-e69451683506\") " Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.169211 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00c4b076-6757-4bdf-9ca1-ed8eff1f59c4-config-data\") pod \"00c4b076-6757-4bdf-9ca1-ed8eff1f59c4\" (UID: \"00c4b076-6757-4bdf-9ca1-ed8eff1f59c4\") " Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.169250 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70822cc5-7296-435f-83fe-e69451683506-config-data\") pod \"70822cc5-7296-435f-83fe-e69451683506\" (UID: \"70822cc5-7296-435f-83fe-e69451683506\") " Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.169286 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mhtm9\" (UniqueName: \"kubernetes.io/projected/984c0b5b-012f-4d5d-ba78-fe567ae73d59-kube-api-access-mhtm9\") pod \"984c0b5b-012f-4d5d-ba78-fe567ae73d59\" (UID: \"984c0b5b-012f-4d5d-ba78-fe567ae73d59\") " Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.169308 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/984c0b5b-012f-4d5d-ba78-fe567ae73d59-internal-tls-certs\") pod \"984c0b5b-012f-4d5d-ba78-fe567ae73d59\" (UID: \"984c0b5b-012f-4d5d-ba78-fe567ae73d59\") " Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.169336 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00c4b076-6757-4bdf-9ca1-ed8eff1f59c4-combined-ca-bundle\") pod \"00c4b076-6757-4bdf-9ca1-ed8eff1f59c4\" (UID: \"00c4b076-6757-4bdf-9ca1-ed8eff1f59c4\") " Jan 27 09:16:25 crc 
kubenswrapper[4985]: I0127 09:16:25.169375 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/984c0b5b-012f-4d5d-ba78-fe567ae73d59-combined-ca-bundle\") pod \"984c0b5b-012f-4d5d-ba78-fe567ae73d59\" (UID: \"984c0b5b-012f-4d5d-ba78-fe567ae73d59\") " Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.171561 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/984c0b5b-012f-4d5d-ba78-fe567ae73d59-logs" (OuterVolumeSpecName: "logs") pod "984c0b5b-012f-4d5d-ba78-fe567ae73d59" (UID: "984c0b5b-012f-4d5d-ba78-fe567ae73d59"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.178494 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"984c0b5b-012f-4d5d-ba78-fe567ae73d59","Type":"ContainerDied","Data":"ad2840930b4ead3ca965d7410576a788e5989b410405d4fe0606560dedbea69b"} Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.178603 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.182402 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00c4b076-6757-4bdf-9ca1-ed8eff1f59c4-kube-api-access-bb4sl" (OuterVolumeSpecName: "kube-api-access-bb4sl") pod "00c4b076-6757-4bdf-9ca1-ed8eff1f59c4" (UID: "00c4b076-6757-4bdf-9ca1-ed8eff1f59c4"). InnerVolumeSpecName "kube-api-access-bb4sl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.184603 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/984c0b5b-012f-4d5d-ba78-fe567ae73d59-kube-api-access-mhtm9" (OuterVolumeSpecName: "kube-api-access-mhtm9") pod "984c0b5b-012f-4d5d-ba78-fe567ae73d59" (UID: "984c0b5b-012f-4d5d-ba78-fe567ae73d59"). InnerVolumeSpecName "kube-api-access-mhtm9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.189872 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70822cc5-7296-435f-83fe-e69451683506-kube-api-access-rl5x7" (OuterVolumeSpecName: "kube-api-access-rl5x7") pod "70822cc5-7296-435f-83fe-e69451683506" (UID: "70822cc5-7296-435f-83fe-e69451683506"). InnerVolumeSpecName "kube-api-access-rl5x7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.197810 4985 scope.go:117] "RemoveContainer" containerID="b95ebcd14b890819db79ae2305cde657d2379edb8ef6a850d8205fd059dcaa8e" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.221388 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70822cc5-7296-435f-83fe-e69451683506-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "70822cc5-7296-435f-83fe-e69451683506" (UID: "70822cc5-7296-435f-83fe-e69451683506"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.225466 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/984c0b5b-012f-4d5d-ba78-fe567ae73d59-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "984c0b5b-012f-4d5d-ba78-fe567ae73d59" (UID: "984c0b5b-012f-4d5d-ba78-fe567ae73d59"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.231147 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.231175 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00c4b076-6757-4bdf-9ca1-ed8eff1f59c4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "00c4b076-6757-4bdf-9ca1-ed8eff1f59c4" (UID: "00c4b076-6757-4bdf-9ca1-ed8eff1f59c4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.236229 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70822cc5-7296-435f-83fe-e69451683506-config-data" (OuterVolumeSpecName: "config-data") pod "70822cc5-7296-435f-83fe-e69451683506" (UID: "70822cc5-7296-435f-83fe-e69451683506"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.249798 4985 scope.go:117] "RemoveContainer" containerID="793c3194622ba9f315c3d47081946c5940cb6ca274437ed0f6cd9f5ffd06eeca" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.250726 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.253835 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/984c0b5b-012f-4d5d-ba78-fe567ae73d59-config-data" (OuterVolumeSpecName: "config-data") pod "984c0b5b-012f-4d5d-ba78-fe567ae73d59" (UID: "984c0b5b-012f-4d5d-ba78-fe567ae73d59"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.263010 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 27 09:16:25 crc kubenswrapper[4985]: E0127 09:16:25.263419 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70822cc5-7296-435f-83fe-e69451683506" containerName="nova-scheduler-scheduler" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.263441 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="70822cc5-7296-435f-83fe-e69451683506" containerName="nova-scheduler-scheduler" Jan 27 09:16:25 crc kubenswrapper[4985]: E0127 09:16:25.263465 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9433322-e6a7-4643-b9f9-87853d285a08" containerName="nova-metadata-metadata" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.263473 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9433322-e6a7-4643-b9f9-87853d285a08" containerName="nova-metadata-metadata" Jan 27 09:16:25 crc kubenswrapper[4985]: E0127 09:16:25.263492 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00c4b076-6757-4bdf-9ca1-ed8eff1f59c4" containerName="nova-cell1-conductor-conductor" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.263500 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="00c4b076-6757-4bdf-9ca1-ed8eff1f59c4" containerName="nova-cell1-conductor-conductor" Jan 27 09:16:25 crc kubenswrapper[4985]: E0127 09:16:25.263545 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="984c0b5b-012f-4d5d-ba78-fe567ae73d59" containerName="nova-api-log" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.263554 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="984c0b5b-012f-4d5d-ba78-fe567ae73d59" containerName="nova-api-log" Jan 27 09:16:25 crc kubenswrapper[4985]: E0127 09:16:25.263567 4985 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="984c0b5b-012f-4d5d-ba78-fe567ae73d59" containerName="nova-api-api" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.263574 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="984c0b5b-012f-4d5d-ba78-fe567ae73d59" containerName="nova-api-api" Jan 27 09:16:25 crc kubenswrapper[4985]: E0127 09:16:25.263589 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9433322-e6a7-4643-b9f9-87853d285a08" containerName="nova-metadata-log" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.263596 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9433322-e6a7-4643-b9f9-87853d285a08" containerName="nova-metadata-log" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.263807 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9433322-e6a7-4643-b9f9-87853d285a08" containerName="nova-metadata-log" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.263826 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="70822cc5-7296-435f-83fe-e69451683506" containerName="nova-scheduler-scheduler" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.263840 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="984c0b5b-012f-4d5d-ba78-fe567ae73d59" containerName="nova-api-api" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.263856 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9433322-e6a7-4643-b9f9-87853d285a08" containerName="nova-metadata-metadata" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.263870 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="984c0b5b-012f-4d5d-ba78-fe567ae73d59" containerName="nova-api-log" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.263878 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="00c4b076-6757-4bdf-9ca1-ed8eff1f59c4" containerName="nova-cell1-conductor-conductor" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.265035 4985 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.268159 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.268354 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.270949 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bb4sl\" (UniqueName: \"kubernetes.io/projected/00c4b076-6757-4bdf-9ca1-ed8eff1f59c4-kube-api-access-bb4sl\") on node \"crc\" DevicePath \"\"" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.270968 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/984c0b5b-012f-4d5d-ba78-fe567ae73d59-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.270977 4985 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/984c0b5b-012f-4d5d-ba78-fe567ae73d59-logs\") on node \"crc\" DevicePath \"\"" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.270985 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70822cc5-7296-435f-83fe-e69451683506-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.270994 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70822cc5-7296-435f-83fe-e69451683506-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.271005 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mhtm9\" (UniqueName: 
\"kubernetes.io/projected/984c0b5b-012f-4d5d-ba78-fe567ae73d59-kube-api-access-mhtm9\") on node \"crc\" DevicePath \"\"" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.271014 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00c4b076-6757-4bdf-9ca1-ed8eff1f59c4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.271022 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/984c0b5b-012f-4d5d-ba78-fe567ae73d59-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.271030 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rl5x7\" (UniqueName: \"kubernetes.io/projected/70822cc5-7296-435f-83fe-e69451683506-kube-api-access-rl5x7\") on node \"crc\" DevicePath \"\"" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.271692 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.283663 4985 scope.go:117] "RemoveContainer" containerID="0765b4bc75572ea238220e8c8366b062f8a105a38565055e853b3ac0fd023e6a" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.283692 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00c4b076-6757-4bdf-9ca1-ed8eff1f59c4-config-data" (OuterVolumeSpecName: "config-data") pod "00c4b076-6757-4bdf-9ca1-ed8eff1f59c4" (UID: "00c4b076-6757-4bdf-9ca1-ed8eff1f59c4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.283705 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/984c0b5b-012f-4d5d-ba78-fe567ae73d59-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "984c0b5b-012f-4d5d-ba78-fe567ae73d59" (UID: "984c0b5b-012f-4d5d-ba78-fe567ae73d59"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.303601 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/984c0b5b-012f-4d5d-ba78-fe567ae73d59-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "984c0b5b-012f-4d5d-ba78-fe567ae73d59" (UID: "984c0b5b-012f-4d5d-ba78-fe567ae73d59"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.315779 4985 scope.go:117] "RemoveContainer" containerID="93f73fd8c55dad032f70c0523733a3ff3c9b797ea7c0b371f3417417e4b8a6ee" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.335066 4985 scope.go:117] "RemoveContainer" containerID="32f60c16a3b36c3d74a6d2ed188b91486608d0d6e261b8b95c7d78880f2bc35d" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.372923 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdfbd855-3465-4821-91d3-b49545447e36-logs\") pod \"nova-metadata-0\" (UID: \"fdfbd855-3465-4821-91d3-b49545447e36\") " pod="openstack/nova-metadata-0" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.373015 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptk8l\" (UniqueName: \"kubernetes.io/projected/fdfbd855-3465-4821-91d3-b49545447e36-kube-api-access-ptk8l\") pod \"nova-metadata-0\" (UID: 
\"fdfbd855-3465-4821-91d3-b49545447e36\") " pod="openstack/nova-metadata-0" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.373750 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdfbd855-3465-4821-91d3-b49545447e36-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"fdfbd855-3465-4821-91d3-b49545447e36\") " pod="openstack/nova-metadata-0" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.373906 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdfbd855-3465-4821-91d3-b49545447e36-config-data\") pod \"nova-metadata-0\" (UID: \"fdfbd855-3465-4821-91d3-b49545447e36\") " pod="openstack/nova-metadata-0" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.374170 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fdfbd855-3465-4821-91d3-b49545447e36-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"fdfbd855-3465-4821-91d3-b49545447e36\") " pod="openstack/nova-metadata-0" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.374469 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00c4b076-6757-4bdf-9ca1-ed8eff1f59c4-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.374492 4985 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/984c0b5b-012f-4d5d-ba78-fe567ae73d59-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.374506 4985 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/984c0b5b-012f-4d5d-ba78-fe567ae73d59-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.464972 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.476190 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdfbd855-3465-4821-91d3-b49545447e36-logs\") pod \"nova-metadata-0\" (UID: \"fdfbd855-3465-4821-91d3-b49545447e36\") " pod="openstack/nova-metadata-0" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.476295 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptk8l\" (UniqueName: \"kubernetes.io/projected/fdfbd855-3465-4821-91d3-b49545447e36-kube-api-access-ptk8l\") pod \"nova-metadata-0\" (UID: \"fdfbd855-3465-4821-91d3-b49545447e36\") " pod="openstack/nova-metadata-0" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.476389 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdfbd855-3465-4821-91d3-b49545447e36-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"fdfbd855-3465-4821-91d3-b49545447e36\") " pod="openstack/nova-metadata-0" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.476430 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdfbd855-3465-4821-91d3-b49545447e36-config-data\") pod \"nova-metadata-0\" (UID: \"fdfbd855-3465-4821-91d3-b49545447e36\") " pod="openstack/nova-metadata-0" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.476496 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fdfbd855-3465-4821-91d3-b49545447e36-nova-metadata-tls-certs\") pod 
\"nova-metadata-0\" (UID: \"fdfbd855-3465-4821-91d3-b49545447e36\") " pod="openstack/nova-metadata-0" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.476709 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdfbd855-3465-4821-91d3-b49545447e36-logs\") pod \"nova-metadata-0\" (UID: \"fdfbd855-3465-4821-91d3-b49545447e36\") " pod="openstack/nova-metadata-0" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.481420 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdfbd855-3465-4821-91d3-b49545447e36-config-data\") pod \"nova-metadata-0\" (UID: \"fdfbd855-3465-4821-91d3-b49545447e36\") " pod="openstack/nova-metadata-0" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.483646 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.484865 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fdfbd855-3465-4821-91d3-b49545447e36-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"fdfbd855-3465-4821-91d3-b49545447e36\") " pod="openstack/nova-metadata-0" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.485910 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdfbd855-3465-4821-91d3-b49545447e36-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"fdfbd855-3465-4821-91d3-b49545447e36\") " pod="openstack/nova-metadata-0" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.499350 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptk8l\" (UniqueName: \"kubernetes.io/projected/fdfbd855-3465-4821-91d3-b49545447e36-kube-api-access-ptk8l\") pod \"nova-metadata-0\" (UID: \"fdfbd855-3465-4821-91d3-b49545447e36\") 
" pod="openstack/nova-metadata-0" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.507345 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.522121 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.523711 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.527212 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.539313 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.551693 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.563269 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.565164 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.571109 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.577913 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04de3704-b3c6-4693-baf3-c8e68335e2ed-config-data\") pod \"nova-scheduler-0\" (UID: \"04de3704-b3c6-4693-baf3-c8e68335e2ed\") " pod="openstack/nova-scheduler-0" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.577962 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtdpp\" (UniqueName: \"kubernetes.io/projected/04de3704-b3c6-4693-baf3-c8e68335e2ed-kube-api-access-gtdpp\") pod \"nova-scheduler-0\" (UID: \"04de3704-b3c6-4693-baf3-c8e68335e2ed\") " pod="openstack/nova-scheduler-0" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.577993 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04de3704-b3c6-4693-baf3-c8e68335e2ed-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"04de3704-b3c6-4693-baf3-c8e68335e2ed\") " pod="openstack/nova-scheduler-0" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.581890 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.586036 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.594878 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.602289 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.624219 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.626966 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.630379 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.630614 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.630887 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.637789 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.679822 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d66059e-8e8d-44aa-bda5-abf143f9416d-logs\") pod \"nova-api-0\" (UID: \"8d66059e-8e8d-44aa-bda5-abf143f9416d\") " pod="openstack/nova-api-0" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.679857 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d66059e-8e8d-44aa-bda5-abf143f9416d-combined-ca-bundle\") pod \"nova-api-0\" (UID: 
\"8d66059e-8e8d-44aa-bda5-abf143f9416d\") " pod="openstack/nova-api-0" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.679892 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e09de4ac-f34f-45d7-93cf-be4958284be0-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"e09de4ac-f34f-45d7-93cf-be4958284be0\") " pod="openstack/nova-cell1-conductor-0" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.679912 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e09de4ac-f34f-45d7-93cf-be4958284be0-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"e09de4ac-f34f-45d7-93cf-be4958284be0\") " pod="openstack/nova-cell1-conductor-0" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.679966 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d66059e-8e8d-44aa-bda5-abf143f9416d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"8d66059e-8e8d-44aa-bda5-abf143f9416d\") " pod="openstack/nova-api-0" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.680023 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62gv9\" (UniqueName: \"kubernetes.io/projected/8d66059e-8e8d-44aa-bda5-abf143f9416d-kube-api-access-62gv9\") pod \"nova-api-0\" (UID: \"8d66059e-8e8d-44aa-bda5-abf143f9416d\") " pod="openstack/nova-api-0" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.680079 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d66059e-8e8d-44aa-bda5-abf143f9416d-public-tls-certs\") pod \"nova-api-0\" (UID: \"8d66059e-8e8d-44aa-bda5-abf143f9416d\") " pod="openstack/nova-api-0" Jan 
27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.680098 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d66059e-8e8d-44aa-bda5-abf143f9416d-config-data\") pod \"nova-api-0\" (UID: \"8d66059e-8e8d-44aa-bda5-abf143f9416d\") " pod="openstack/nova-api-0" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.680128 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04de3704-b3c6-4693-baf3-c8e68335e2ed-config-data\") pod \"nova-scheduler-0\" (UID: \"04de3704-b3c6-4693-baf3-c8e68335e2ed\") " pod="openstack/nova-scheduler-0" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.680149 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtdpp\" (UniqueName: \"kubernetes.io/projected/04de3704-b3c6-4693-baf3-c8e68335e2ed-kube-api-access-gtdpp\") pod \"nova-scheduler-0\" (UID: \"04de3704-b3c6-4693-baf3-c8e68335e2ed\") " pod="openstack/nova-scheduler-0" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.680165 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbkgf\" (UniqueName: \"kubernetes.io/projected/e09de4ac-f34f-45d7-93cf-be4958284be0-kube-api-access-kbkgf\") pod \"nova-cell1-conductor-0\" (UID: \"e09de4ac-f34f-45d7-93cf-be4958284be0\") " pod="openstack/nova-cell1-conductor-0" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.680188 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04de3704-b3c6-4693-baf3-c8e68335e2ed-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"04de3704-b3c6-4693-baf3-c8e68335e2ed\") " pod="openstack/nova-scheduler-0" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.685170 4985 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04de3704-b3c6-4693-baf3-c8e68335e2ed-config-data\") pod \"nova-scheduler-0\" (UID: \"04de3704-b3c6-4693-baf3-c8e68335e2ed\") " pod="openstack/nova-scheduler-0" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.691425 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04de3704-b3c6-4693-baf3-c8e68335e2ed-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"04de3704-b3c6-4693-baf3-c8e68335e2ed\") " pod="openstack/nova-scheduler-0" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.708126 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtdpp\" (UniqueName: \"kubernetes.io/projected/04de3704-b3c6-4693-baf3-c8e68335e2ed-kube-api-access-gtdpp\") pod \"nova-scheduler-0\" (UID: \"04de3704-b3c6-4693-baf3-c8e68335e2ed\") " pod="openstack/nova-scheduler-0" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.784592 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbkgf\" (UniqueName: \"kubernetes.io/projected/e09de4ac-f34f-45d7-93cf-be4958284be0-kube-api-access-kbkgf\") pod \"nova-cell1-conductor-0\" (UID: \"e09de4ac-f34f-45d7-93cf-be4958284be0\") " pod="openstack/nova-cell1-conductor-0" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.784723 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d66059e-8e8d-44aa-bda5-abf143f9416d-logs\") pod \"nova-api-0\" (UID: \"8d66059e-8e8d-44aa-bda5-abf143f9416d\") " pod="openstack/nova-api-0" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.784749 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d66059e-8e8d-44aa-bda5-abf143f9416d-combined-ca-bundle\") pod \"nova-api-0\" (UID: 
\"8d66059e-8e8d-44aa-bda5-abf143f9416d\") " pod="openstack/nova-api-0" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.784797 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e09de4ac-f34f-45d7-93cf-be4958284be0-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"e09de4ac-f34f-45d7-93cf-be4958284be0\") " pod="openstack/nova-cell1-conductor-0" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.784819 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e09de4ac-f34f-45d7-93cf-be4958284be0-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"e09de4ac-f34f-45d7-93cf-be4958284be0\") " pod="openstack/nova-cell1-conductor-0" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.784914 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d66059e-8e8d-44aa-bda5-abf143f9416d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"8d66059e-8e8d-44aa-bda5-abf143f9416d\") " pod="openstack/nova-api-0" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.784983 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62gv9\" (UniqueName: \"kubernetes.io/projected/8d66059e-8e8d-44aa-bda5-abf143f9416d-kube-api-access-62gv9\") pod \"nova-api-0\" (UID: \"8d66059e-8e8d-44aa-bda5-abf143f9416d\") " pod="openstack/nova-api-0" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.785065 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d66059e-8e8d-44aa-bda5-abf143f9416d-public-tls-certs\") pod \"nova-api-0\" (UID: \"8d66059e-8e8d-44aa-bda5-abf143f9416d\") " pod="openstack/nova-api-0" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.785093 4985 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d66059e-8e8d-44aa-bda5-abf143f9416d-config-data\") pod \"nova-api-0\" (UID: \"8d66059e-8e8d-44aa-bda5-abf143f9416d\") " pod="openstack/nova-api-0" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.786026 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d66059e-8e8d-44aa-bda5-abf143f9416d-logs\") pod \"nova-api-0\" (UID: \"8d66059e-8e8d-44aa-bda5-abf143f9416d\") " pod="openstack/nova-api-0" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.790657 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d66059e-8e8d-44aa-bda5-abf143f9416d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8d66059e-8e8d-44aa-bda5-abf143f9416d\") " pod="openstack/nova-api-0" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.791041 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d66059e-8e8d-44aa-bda5-abf143f9416d-config-data\") pod \"nova-api-0\" (UID: \"8d66059e-8e8d-44aa-bda5-abf143f9416d\") " pod="openstack/nova-api-0" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.791095 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d66059e-8e8d-44aa-bda5-abf143f9416d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"8d66059e-8e8d-44aa-bda5-abf143f9416d\") " pod="openstack/nova-api-0" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.792435 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e09de4ac-f34f-45d7-93cf-be4958284be0-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"e09de4ac-f34f-45d7-93cf-be4958284be0\") " pod="openstack/nova-cell1-conductor-0" Jan 27 09:16:25 crc 
kubenswrapper[4985]: I0127 09:16:25.793393 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d66059e-8e8d-44aa-bda5-abf143f9416d-public-tls-certs\") pod \"nova-api-0\" (UID: \"8d66059e-8e8d-44aa-bda5-abf143f9416d\") " pod="openstack/nova-api-0" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.793443 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e09de4ac-f34f-45d7-93cf-be4958284be0-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"e09de4ac-f34f-45d7-93cf-be4958284be0\") " pod="openstack/nova-cell1-conductor-0" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.808819 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbkgf\" (UniqueName: \"kubernetes.io/projected/e09de4ac-f34f-45d7-93cf-be4958284be0-kube-api-access-kbkgf\") pod \"nova-cell1-conductor-0\" (UID: \"e09de4ac-f34f-45d7-93cf-be4958284be0\") " pod="openstack/nova-cell1-conductor-0" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.818305 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62gv9\" (UniqueName: \"kubernetes.io/projected/8d66059e-8e8d-44aa-bda5-abf143f9416d-kube-api-access-62gv9\") pod \"nova-api-0\" (UID: \"8d66059e-8e8d-44aa-bda5-abf143f9416d\") " pod="openstack/nova-api-0" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.849229 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.885008 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 27 09:16:25 crc kubenswrapper[4985]: I0127 09:16:25.947342 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 27 09:16:26 crc kubenswrapper[4985]: I0127 09:16:26.110475 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 09:16:26 crc kubenswrapper[4985]: I0127 09:16:26.129123 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 09:16:26 crc kubenswrapper[4985]: I0127 09:16:26.198935 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"04de3704-b3c6-4693-baf3-c8e68335e2ed","Type":"ContainerStarted","Data":"de4c7077dfa5f55eda93b6850db7eb039f4d29dffdf8275eb069cf1f441f49ca"} Jan 27 09:16:26 crc kubenswrapper[4985]: I0127 09:16:26.209534 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fdfbd855-3465-4821-91d3-b49545447e36","Type":"ContainerStarted","Data":"dbcb9e317ca7f777697fc43ffc852d114488269261513ffdeede110142b28549"} Jan 27 09:16:26 crc kubenswrapper[4985]: I0127 09:16:26.433069 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 27 09:16:26 crc kubenswrapper[4985]: W0127 09:16:26.434500 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode09de4ac_f34f_45d7_93cf_be4958284be0.slice/crio-b379f3bd3a16e5bb0d72144ec554f8bb01e0048b81d83665fd148436140244d3 WatchSource:0}: Error finding container b379f3bd3a16e5bb0d72144ec554f8bb01e0048b81d83665fd148436140244d3: Status 404 returned error can't find the container with id b379f3bd3a16e5bb0d72144ec554f8bb01e0048b81d83665fd148436140244d3 Jan 27 09:16:26 crc kubenswrapper[4985]: I0127 09:16:26.466420 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00c4b076-6757-4bdf-9ca1-ed8eff1f59c4" path="/var/lib/kubelet/pods/00c4b076-6757-4bdf-9ca1-ed8eff1f59c4/volumes" Jan 27 09:16:26 crc kubenswrapper[4985]: I0127 09:16:26.467449 4985 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70822cc5-7296-435f-83fe-e69451683506" path="/var/lib/kubelet/pods/70822cc5-7296-435f-83fe-e69451683506/volumes" Jan 27 09:16:26 crc kubenswrapper[4985]: I0127 09:16:26.468187 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="984c0b5b-012f-4d5d-ba78-fe567ae73d59" path="/var/lib/kubelet/pods/984c0b5b-012f-4d5d-ba78-fe567ae73d59/volumes" Jan 27 09:16:26 crc kubenswrapper[4985]: I0127 09:16:26.469374 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9433322-e6a7-4643-b9f9-87853d285a08" path="/var/lib/kubelet/pods/b9433322-e6a7-4643-b9f9-87853d285a08/volumes" Jan 27 09:16:26 crc kubenswrapper[4985]: I0127 09:16:26.581445 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 09:16:26 crc kubenswrapper[4985]: W0127 09:16:26.586350 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8d66059e_8e8d_44aa_bda5_abf143f9416d.slice/crio-b69ae36f6aa347dda8ec28bcf61c65661282c66e9546768b09ab9bfc92cd21fd WatchSource:0}: Error finding container b69ae36f6aa347dda8ec28bcf61c65661282c66e9546768b09ab9bfc92cd21fd: Status 404 returned error can't find the container with id b69ae36f6aa347dda8ec28bcf61c65661282c66e9546768b09ab9bfc92cd21fd Jan 27 09:16:27 crc kubenswrapper[4985]: I0127 09:16:27.223402 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"e09de4ac-f34f-45d7-93cf-be4958284be0","Type":"ContainerStarted","Data":"2b0c7e7c1923e0037ecb8fb74cd61790f5a8e70d4f47c0a96f3fe75983bf016f"} Jan 27 09:16:27 crc kubenswrapper[4985]: I0127 09:16:27.223888 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"e09de4ac-f34f-45d7-93cf-be4958284be0","Type":"ContainerStarted","Data":"b379f3bd3a16e5bb0d72144ec554f8bb01e0048b81d83665fd148436140244d3"} Jan 27 
09:16:27 crc kubenswrapper[4985]: I0127 09:16:27.223938 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 27 09:16:27 crc kubenswrapper[4985]: I0127 09:16:27.246446 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fdfbd855-3465-4821-91d3-b49545447e36","Type":"ContainerStarted","Data":"39e1d3ecb078ea30d4730ee7c85b86a70b455e5723e7f224801e5e21d562b7f2"} Jan 27 09:16:27 crc kubenswrapper[4985]: I0127 09:16:27.246551 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fdfbd855-3465-4821-91d3-b49545447e36","Type":"ContainerStarted","Data":"f8bd69c2b79aebeef279a171816d8bf4da59b66161d65c983b44a3e0c4ecdd3b"} Jan 27 09:16:27 crc kubenswrapper[4985]: I0127 09:16:27.262800 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8d66059e-8e8d-44aa-bda5-abf143f9416d","Type":"ContainerStarted","Data":"493cb78010c3cbc0f7ca9a707d7c83b438bb327774f96c612e441e48cd748720"} Jan 27 09:16:27 crc kubenswrapper[4985]: I0127 09:16:27.264119 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8d66059e-8e8d-44aa-bda5-abf143f9416d","Type":"ContainerStarted","Data":"b69ae36f6aa347dda8ec28bcf61c65661282c66e9546768b09ab9bfc92cd21fd"} Jan 27 09:16:27 crc kubenswrapper[4985]: I0127 09:16:27.264647 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.264622564 podStartE2EDuration="2.264622564s" podCreationTimestamp="2026-01-27 09:16:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:16:27.244888133 +0000 UTC m=+1371.535982984" watchObservedRunningTime="2026-01-27 09:16:27.264622564 +0000 UTC m=+1371.555717405" Jan 27 09:16:27 crc kubenswrapper[4985]: I0127 09:16:27.285473 4985 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"04de3704-b3c6-4693-baf3-c8e68335e2ed","Type":"ContainerStarted","Data":"f522eada8d0bb134705d661f92a173db459822a6c2c14bb6208de26ceb8f1f96"} Jan 27 09:16:27 crc kubenswrapper[4985]: I0127 09:16:27.290140 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.290127745 podStartE2EDuration="2.290127745s" podCreationTimestamp="2026-01-27 09:16:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:16:27.275294778 +0000 UTC m=+1371.566389619" watchObservedRunningTime="2026-01-27 09:16:27.290127745 +0000 UTC m=+1371.581222586" Jan 27 09:16:27 crc kubenswrapper[4985]: I0127 09:16:27.310179 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.310162515 podStartE2EDuration="2.310162515s" podCreationTimestamp="2026-01-27 09:16:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:16:27.307176823 +0000 UTC m=+1371.598271664" watchObservedRunningTime="2026-01-27 09:16:27.310162515 +0000 UTC m=+1371.601257356" Jan 27 09:16:30 crc kubenswrapper[4985]: I0127 09:16:30.321421 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8d66059e-8e8d-44aa-bda5-abf143f9416d","Type":"ContainerStarted","Data":"d7431abda8fb91e20a1076a205f7617ce19c782c86cc812872bbbcea632b0c56"} Jan 27 09:16:30 crc kubenswrapper[4985]: I0127 09:16:30.354452 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=5.354428008 podStartE2EDuration="5.354428008s" podCreationTimestamp="2026-01-27 09:16:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:16:30.340308341 +0000 UTC m=+1374.631403192" watchObservedRunningTime="2026-01-27 09:16:30.354428008 +0000 UTC m=+1374.645522849" Jan 27 09:16:30 crc kubenswrapper[4985]: I0127 09:16:30.587087 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 09:16:30 crc kubenswrapper[4985]: I0127 09:16:30.587199 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 09:16:30 crc kubenswrapper[4985]: I0127 09:16:30.850934 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 27 09:16:35 crc kubenswrapper[4985]: I0127 09:16:35.587379 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 27 09:16:35 crc kubenswrapper[4985]: I0127 09:16:35.587773 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 27 09:16:35 crc kubenswrapper[4985]: I0127 09:16:35.850250 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 27 09:16:35 crc kubenswrapper[4985]: I0127 09:16:35.877171 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 27 09:16:35 crc kubenswrapper[4985]: I0127 09:16:35.913284 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 27 09:16:35 crc kubenswrapper[4985]: I0127 09:16:35.948402 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 09:16:35 crc kubenswrapper[4985]: I0127 09:16:35.948463 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 09:16:36 crc kubenswrapper[4985]: I0127 09:16:36.409649 4985 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 27 09:16:36 crc kubenswrapper[4985]: I0127 09:16:36.608778 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="fdfbd855-3465-4821-91d3-b49545447e36" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.227:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 09:16:36 crc kubenswrapper[4985]: I0127 09:16:36.608806 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="fdfbd855-3465-4821-91d3-b49545447e36" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.227:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 09:16:36 crc kubenswrapper[4985]: I0127 09:16:36.961725 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8d66059e-8e8d-44aa-bda5-abf143f9416d" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.230:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 09:16:36 crc kubenswrapper[4985]: I0127 09:16:36.961771 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8d66059e-8e8d-44aa-bda5-abf143f9416d" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.230:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 09:16:45 crc kubenswrapper[4985]: I0127 09:16:45.619869 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 27 09:16:45 crc kubenswrapper[4985]: I0127 09:16:45.620415 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 27 09:16:45 crc kubenswrapper[4985]: I0127 09:16:45.627139 4985 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 27 09:16:45 crc kubenswrapper[4985]: I0127 09:16:45.643658 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 27 09:16:45 crc kubenswrapper[4985]: I0127 09:16:45.960846 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 27 09:16:45 crc kubenswrapper[4985]: I0127 09:16:45.961768 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 27 09:16:45 crc kubenswrapper[4985]: I0127 09:16:45.962134 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 27 09:16:45 crc kubenswrapper[4985]: I0127 09:16:45.974664 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 27 09:16:46 crc kubenswrapper[4985]: I0127 09:16:46.470278 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 27 09:16:46 crc kubenswrapper[4985]: I0127 09:16:46.476947 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 27 09:16:54 crc kubenswrapper[4985]: I0127 09:16:54.790881 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 09:16:55 crc kubenswrapper[4985]: I0127 09:16:55.863032 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 09:17:00 crc kubenswrapper[4985]: I0127 09:17:00.109814 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="6c6ceb6e-86fb-4658-93ed-8e66302f6396" containerName="rabbitmq" containerID="cri-o://4b1f967c83ed7b393f9fee284831f80d2118cee0c36a94006b08e047e2c83d7b" gracePeriod=604795 Jan 27 09:17:00 crc kubenswrapper[4985]: I0127 09:17:00.750305 4985 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack/rabbitmq-cell1-server-0" podUID="1c3a6629-6ee9-4274-aa58-1880fd4ae268" containerName="rabbitmq" containerID="cri-o://c5a680f38b59cab040f2d532c022fb0a2d6ca690f4fb49a1994ff6bd6fe6fb54" gracePeriod=604796 Jan 27 09:17:06 crc kubenswrapper[4985]: I0127 09:17:06.657296 4985 generic.go:334] "Generic (PLEG): container finished" podID="6c6ceb6e-86fb-4658-93ed-8e66302f6396" containerID="4b1f967c83ed7b393f9fee284831f80d2118cee0c36a94006b08e047e2c83d7b" exitCode=0 Jan 27 09:17:06 crc kubenswrapper[4985]: I0127 09:17:06.657419 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6c6ceb6e-86fb-4658-93ed-8e66302f6396","Type":"ContainerDied","Data":"4b1f967c83ed7b393f9fee284831f80d2118cee0c36a94006b08e047e2c83d7b"} Jan 27 09:17:06 crc kubenswrapper[4985]: I0127 09:17:06.757998 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 27 09:17:06 crc kubenswrapper[4985]: I0127 09:17:06.847795 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6c6ceb6e-86fb-4658-93ed-8e66302f6396-server-conf\") pod \"6c6ceb6e-86fb-4658-93ed-8e66302f6396\" (UID: \"6c6ceb6e-86fb-4658-93ed-8e66302f6396\") " Jan 27 09:17:06 crc kubenswrapper[4985]: I0127 09:17:06.848056 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwp59\" (UniqueName: \"kubernetes.io/projected/6c6ceb6e-86fb-4658-93ed-8e66302f6396-kube-api-access-fwp59\") pod \"6c6ceb6e-86fb-4658-93ed-8e66302f6396\" (UID: \"6c6ceb6e-86fb-4658-93ed-8e66302f6396\") " Jan 27 09:17:06 crc kubenswrapper[4985]: I0127 09:17:06.848096 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6c6ceb6e-86fb-4658-93ed-8e66302f6396-rabbitmq-plugins\") pod \"6c6ceb6e-86fb-4658-93ed-8e66302f6396\" (UID: 
\"6c6ceb6e-86fb-4658-93ed-8e66302f6396\") " Jan 27 09:17:06 crc kubenswrapper[4985]: I0127 09:17:06.848112 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6c6ceb6e-86fb-4658-93ed-8e66302f6396-rabbitmq-erlang-cookie\") pod \"6c6ceb6e-86fb-4658-93ed-8e66302f6396\" (UID: \"6c6ceb6e-86fb-4658-93ed-8e66302f6396\") " Jan 27 09:17:06 crc kubenswrapper[4985]: I0127 09:17:06.848132 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6c6ceb6e-86fb-4658-93ed-8e66302f6396-pod-info\") pod \"6c6ceb6e-86fb-4658-93ed-8e66302f6396\" (UID: \"6c6ceb6e-86fb-4658-93ed-8e66302f6396\") " Jan 27 09:17:06 crc kubenswrapper[4985]: I0127 09:17:06.848193 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6c6ceb6e-86fb-4658-93ed-8e66302f6396-erlang-cookie-secret\") pod \"6c6ceb6e-86fb-4658-93ed-8e66302f6396\" (UID: \"6c6ceb6e-86fb-4658-93ed-8e66302f6396\") " Jan 27 09:17:06 crc kubenswrapper[4985]: I0127 09:17:06.848243 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6c6ceb6e-86fb-4658-93ed-8e66302f6396-rabbitmq-tls\") pod \"6c6ceb6e-86fb-4658-93ed-8e66302f6396\" (UID: \"6c6ceb6e-86fb-4658-93ed-8e66302f6396\") " Jan 27 09:17:06 crc kubenswrapper[4985]: I0127 09:17:06.848261 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6c6ceb6e-86fb-4658-93ed-8e66302f6396-rabbitmq-confd\") pod \"6c6ceb6e-86fb-4658-93ed-8e66302f6396\" (UID: \"6c6ceb6e-86fb-4658-93ed-8e66302f6396\") " Jan 27 09:17:06 crc kubenswrapper[4985]: I0127 09:17:06.848287 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"6c6ceb6e-86fb-4658-93ed-8e66302f6396\" (UID: \"6c6ceb6e-86fb-4658-93ed-8e66302f6396\") " Jan 27 09:17:06 crc kubenswrapper[4985]: I0127 09:17:06.848311 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6c6ceb6e-86fb-4658-93ed-8e66302f6396-config-data\") pod \"6c6ceb6e-86fb-4658-93ed-8e66302f6396\" (UID: \"6c6ceb6e-86fb-4658-93ed-8e66302f6396\") " Jan 27 09:17:06 crc kubenswrapper[4985]: I0127 09:17:06.848332 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6c6ceb6e-86fb-4658-93ed-8e66302f6396-plugins-conf\") pod \"6c6ceb6e-86fb-4658-93ed-8e66302f6396\" (UID: \"6c6ceb6e-86fb-4658-93ed-8e66302f6396\") " Jan 27 09:17:06 crc kubenswrapper[4985]: I0127 09:17:06.856036 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "persistence") pod "6c6ceb6e-86fb-4658-93ed-8e66302f6396" (UID: "6c6ceb6e-86fb-4658-93ed-8e66302f6396"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 27 09:17:06 crc kubenswrapper[4985]: I0127 09:17:06.859248 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c6ceb6e-86fb-4658-93ed-8e66302f6396-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "6c6ceb6e-86fb-4658-93ed-8e66302f6396" (UID: "6c6ceb6e-86fb-4658-93ed-8e66302f6396"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:17:06 crc kubenswrapper[4985]: I0127 09:17:06.865551 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c6ceb6e-86fb-4658-93ed-8e66302f6396-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "6c6ceb6e-86fb-4658-93ed-8e66302f6396" (UID: "6c6ceb6e-86fb-4658-93ed-8e66302f6396"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:17:06 crc kubenswrapper[4985]: I0127 09:17:06.865806 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/6c6ceb6e-86fb-4658-93ed-8e66302f6396-pod-info" (OuterVolumeSpecName: "pod-info") pod "6c6ceb6e-86fb-4658-93ed-8e66302f6396" (UID: "6c6ceb6e-86fb-4658-93ed-8e66302f6396"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 27 09:17:06 crc kubenswrapper[4985]: I0127 09:17:06.867850 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c6ceb6e-86fb-4658-93ed-8e66302f6396-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "6c6ceb6e-86fb-4658-93ed-8e66302f6396" (UID: "6c6ceb6e-86fb-4658-93ed-8e66302f6396"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:17:06 crc kubenswrapper[4985]: I0127 09:17:06.888640 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c6ceb6e-86fb-4658-93ed-8e66302f6396-config-data" (OuterVolumeSpecName: "config-data") pod "6c6ceb6e-86fb-4658-93ed-8e66302f6396" (UID: "6c6ceb6e-86fb-4658-93ed-8e66302f6396"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:17:06 crc kubenswrapper[4985]: I0127 09:17:06.890317 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c6ceb6e-86fb-4658-93ed-8e66302f6396-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "6c6ceb6e-86fb-4658-93ed-8e66302f6396" (UID: "6c6ceb6e-86fb-4658-93ed-8e66302f6396"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:17:06 crc kubenswrapper[4985]: I0127 09:17:06.895153 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c6ceb6e-86fb-4658-93ed-8e66302f6396-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "6c6ceb6e-86fb-4658-93ed-8e66302f6396" (UID: "6c6ceb6e-86fb-4658-93ed-8e66302f6396"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:17:06 crc kubenswrapper[4985]: I0127 09:17:06.914296 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c6ceb6e-86fb-4658-93ed-8e66302f6396-kube-api-access-fwp59" (OuterVolumeSpecName: "kube-api-access-fwp59") pod "6c6ceb6e-86fb-4658-93ed-8e66302f6396" (UID: "6c6ceb6e-86fb-4658-93ed-8e66302f6396"). InnerVolumeSpecName "kube-api-access-fwp59". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:17:06 crc kubenswrapper[4985]: I0127 09:17:06.931324 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c6ceb6e-86fb-4658-93ed-8e66302f6396-server-conf" (OuterVolumeSpecName: "server-conf") pod "6c6ceb6e-86fb-4658-93ed-8e66302f6396" (UID: "6c6ceb6e-86fb-4658-93ed-8e66302f6396"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:17:06 crc kubenswrapper[4985]: I0127 09:17:06.951004 4985 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6c6ceb6e-86fb-4658-93ed-8e66302f6396-server-conf\") on node \"crc\" DevicePath \"\"" Jan 27 09:17:06 crc kubenswrapper[4985]: I0127 09:17:06.951050 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwp59\" (UniqueName: \"kubernetes.io/projected/6c6ceb6e-86fb-4658-93ed-8e66302f6396-kube-api-access-fwp59\") on node \"crc\" DevicePath \"\"" Jan 27 09:17:06 crc kubenswrapper[4985]: I0127 09:17:06.951064 4985 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6c6ceb6e-86fb-4658-93ed-8e66302f6396-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 27 09:17:06 crc kubenswrapper[4985]: I0127 09:17:06.951075 4985 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6c6ceb6e-86fb-4658-93ed-8e66302f6396-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 27 09:17:06 crc kubenswrapper[4985]: I0127 09:17:06.951086 4985 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6c6ceb6e-86fb-4658-93ed-8e66302f6396-pod-info\") on node \"crc\" DevicePath \"\"" Jan 27 09:17:06 crc kubenswrapper[4985]: I0127 09:17:06.951097 4985 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6c6ceb6e-86fb-4658-93ed-8e66302f6396-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 27 09:17:06 crc kubenswrapper[4985]: I0127 09:17:06.951109 4985 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6c6ceb6e-86fb-4658-93ed-8e66302f6396-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 27 09:17:06 crc kubenswrapper[4985]: 
I0127 09:17:06.951142 4985 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Jan 27 09:17:06 crc kubenswrapper[4985]: I0127 09:17:06.951154 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6c6ceb6e-86fb-4658-93ed-8e66302f6396-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 09:17:06 crc kubenswrapper[4985]: I0127 09:17:06.951165 4985 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6c6ceb6e-86fb-4658-93ed-8e66302f6396-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 27 09:17:06 crc kubenswrapper[4985]: I0127 09:17:06.992150 4985 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Jan 27 09:17:06 crc kubenswrapper[4985]: I0127 09:17:06.996483 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c6ceb6e-86fb-4658-93ed-8e66302f6396-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "6c6ceb6e-86fb-4658-93ed-8e66302f6396" (UID: "6c6ceb6e-86fb-4658-93ed-8e66302f6396"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:17:07 crc kubenswrapper[4985]: I0127 09:17:07.052869 4985 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6c6ceb6e-86fb-4658-93ed-8e66302f6396-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 27 09:17:07 crc kubenswrapper[4985]: I0127 09:17:07.052904 4985 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Jan 27 09:17:07 crc kubenswrapper[4985]: I0127 09:17:07.551272 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 27 09:17:07 crc kubenswrapper[4985]: I0127 09:17:07.668538 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1c3a6629-6ee9-4274-aa58-1880fd4ae268-server-conf\") pod \"1c3a6629-6ee9-4274-aa58-1880fd4ae268\" (UID: \"1c3a6629-6ee9-4274-aa58-1880fd4ae268\") " Jan 27 09:17:07 crc kubenswrapper[4985]: I0127 09:17:07.668992 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1c3a6629-6ee9-4274-aa58-1880fd4ae268-rabbitmq-plugins\") pod \"1c3a6629-6ee9-4274-aa58-1880fd4ae268\" (UID: \"1c3a6629-6ee9-4274-aa58-1880fd4ae268\") " Jan 27 09:17:07 crc kubenswrapper[4985]: I0127 09:17:07.669329 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1c3a6629-6ee9-4274-aa58-1880fd4ae268-config-data\") pod \"1c3a6629-6ee9-4274-aa58-1880fd4ae268\" (UID: \"1c3a6629-6ee9-4274-aa58-1880fd4ae268\") " Jan 27 09:17:07 crc kubenswrapper[4985]: I0127 09:17:07.672849 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/1c3a6629-6ee9-4274-aa58-1880fd4ae268-pod-info\") pod \"1c3a6629-6ee9-4274-aa58-1880fd4ae268\" (UID: \"1c3a6629-6ee9-4274-aa58-1880fd4ae268\") " Jan 27 09:17:07 crc kubenswrapper[4985]: I0127 09:17:07.672938 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1c3a6629-6ee9-4274-aa58-1880fd4ae268-rabbitmq-tls\") pod \"1c3a6629-6ee9-4274-aa58-1880fd4ae268\" (UID: \"1c3a6629-6ee9-4274-aa58-1880fd4ae268\") " Jan 27 09:17:07 crc kubenswrapper[4985]: I0127 09:17:07.672998 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1c3a6629-6ee9-4274-aa58-1880fd4ae268-plugins-conf\") pod \"1c3a6629-6ee9-4274-aa58-1880fd4ae268\" (UID: \"1c3a6629-6ee9-4274-aa58-1880fd4ae268\") " Jan 27 09:17:07 crc kubenswrapper[4985]: I0127 09:17:07.673030 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9sdq4\" (UniqueName: \"kubernetes.io/projected/1c3a6629-6ee9-4274-aa58-1880fd4ae268-kube-api-access-9sdq4\") pod \"1c3a6629-6ee9-4274-aa58-1880fd4ae268\" (UID: \"1c3a6629-6ee9-4274-aa58-1880fd4ae268\") " Jan 27 09:17:07 crc kubenswrapper[4985]: I0127 09:17:07.673187 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1c3a6629-6ee9-4274-aa58-1880fd4ae268-rabbitmq-erlang-cookie\") pod \"1c3a6629-6ee9-4274-aa58-1880fd4ae268\" (UID: \"1c3a6629-6ee9-4274-aa58-1880fd4ae268\") " Jan 27 09:17:07 crc kubenswrapper[4985]: I0127 09:17:07.673294 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1c3a6629-6ee9-4274-aa58-1880fd4ae268-rabbitmq-confd\") pod \"1c3a6629-6ee9-4274-aa58-1880fd4ae268\" (UID: \"1c3a6629-6ee9-4274-aa58-1880fd4ae268\") " Jan 27 09:17:07 crc 
kubenswrapper[4985]: I0127 09:17:07.673379 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"1c3a6629-6ee9-4274-aa58-1880fd4ae268\" (UID: \"1c3a6629-6ee9-4274-aa58-1880fd4ae268\") " Jan 27 09:17:07 crc kubenswrapper[4985]: I0127 09:17:07.673468 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1c3a6629-6ee9-4274-aa58-1880fd4ae268-erlang-cookie-secret\") pod \"1c3a6629-6ee9-4274-aa58-1880fd4ae268\" (UID: \"1c3a6629-6ee9-4274-aa58-1880fd4ae268\") " Jan 27 09:17:07 crc kubenswrapper[4985]: I0127 09:17:07.673685 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c3a6629-6ee9-4274-aa58-1880fd4ae268-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "1c3a6629-6ee9-4274-aa58-1880fd4ae268" (UID: "1c3a6629-6ee9-4274-aa58-1880fd4ae268"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:17:07 crc kubenswrapper[4985]: I0127 09:17:07.675328 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c3a6629-6ee9-4274-aa58-1880fd4ae268-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "1c3a6629-6ee9-4274-aa58-1880fd4ae268" (UID: "1c3a6629-6ee9-4274-aa58-1880fd4ae268"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:17:07 crc kubenswrapper[4985]: I0127 09:17:07.680671 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/1c3a6629-6ee9-4274-aa58-1880fd4ae268-pod-info" (OuterVolumeSpecName: "pod-info") pod "1c3a6629-6ee9-4274-aa58-1880fd4ae268" (UID: "1c3a6629-6ee9-4274-aa58-1880fd4ae268"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 27 09:17:07 crc kubenswrapper[4985]: I0127 09:17:07.684137 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c3a6629-6ee9-4274-aa58-1880fd4ae268-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "1c3a6629-6ee9-4274-aa58-1880fd4ae268" (UID: "1c3a6629-6ee9-4274-aa58-1880fd4ae268"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:17:07 crc kubenswrapper[4985]: I0127 09:17:07.684812 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c3a6629-6ee9-4274-aa58-1880fd4ae268-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "1c3a6629-6ee9-4274-aa58-1880fd4ae268" (UID: "1c3a6629-6ee9-4274-aa58-1880fd4ae268"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:17:07 crc kubenswrapper[4985]: I0127 09:17:07.701140 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6c6ceb6e-86fb-4658-93ed-8e66302f6396","Type":"ContainerDied","Data":"af2849bdb115a5c32fd04d8be049596ef021d8ea9fa777707f2e97c0c0cc7363"} Jan 27 09:17:07 crc kubenswrapper[4985]: I0127 09:17:07.701422 4985 scope.go:117] "RemoveContainer" containerID="4b1f967c83ed7b393f9fee284831f80d2118cee0c36a94006b08e047e2c83d7b" Jan 27 09:17:07 crc kubenswrapper[4985]: I0127 09:17:07.701669 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 27 09:17:07 crc kubenswrapper[4985]: I0127 09:17:07.712743 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c3a6629-6ee9-4274-aa58-1880fd4ae268-config-data" (OuterVolumeSpecName: "config-data") pod "1c3a6629-6ee9-4274-aa58-1880fd4ae268" (UID: "1c3a6629-6ee9-4274-aa58-1880fd4ae268"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:17:07 crc kubenswrapper[4985]: I0127 09:17:07.713350 4985 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1c3a6629-6ee9-4274-aa58-1880fd4ae268-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 27 09:17:07 crc kubenswrapper[4985]: I0127 09:17:07.713375 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1c3a6629-6ee9-4274-aa58-1880fd4ae268-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 09:17:07 crc kubenswrapper[4985]: I0127 09:17:07.713418 4985 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1c3a6629-6ee9-4274-aa58-1880fd4ae268-pod-info\") on node \"crc\" DevicePath \"\"" Jan 27 09:17:07 crc kubenswrapper[4985]: I0127 09:17:07.713434 4985 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1c3a6629-6ee9-4274-aa58-1880fd4ae268-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 27 09:17:07 crc kubenswrapper[4985]: I0127 09:17:07.713447 4985 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1c3a6629-6ee9-4274-aa58-1880fd4ae268-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 27 09:17:07 crc kubenswrapper[4985]: I0127 09:17:07.713462 4985 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1c3a6629-6ee9-4274-aa58-1880fd4ae268-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 27 09:17:07 crc kubenswrapper[4985]: I0127 09:17:07.725241 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c3a6629-6ee9-4274-aa58-1880fd4ae268-kube-api-access-9sdq4" (OuterVolumeSpecName: "kube-api-access-9sdq4") pod "1c3a6629-6ee9-4274-aa58-1880fd4ae268" (UID: 
"1c3a6629-6ee9-4274-aa58-1880fd4ae268"). InnerVolumeSpecName "kube-api-access-9sdq4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:17:07 crc kubenswrapper[4985]: I0127 09:17:07.730316 4985 generic.go:334] "Generic (PLEG): container finished" podID="1c3a6629-6ee9-4274-aa58-1880fd4ae268" containerID="c5a680f38b59cab040f2d532c022fb0a2d6ca690f4fb49a1994ff6bd6fe6fb54" exitCode=0 Jan 27 09:17:07 crc kubenswrapper[4985]: I0127 09:17:07.730374 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1c3a6629-6ee9-4274-aa58-1880fd4ae268","Type":"ContainerDied","Data":"c5a680f38b59cab040f2d532c022fb0a2d6ca690f4fb49a1994ff6bd6fe6fb54"} Jan 27 09:17:07 crc kubenswrapper[4985]: I0127 09:17:07.730408 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1c3a6629-6ee9-4274-aa58-1880fd4ae268","Type":"ContainerDied","Data":"47b2bc497829bd544ccc25547098168477ad896767fd14e6a4ee5d37df163666"} Jan 27 09:17:07 crc kubenswrapper[4985]: I0127 09:17:07.730560 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 27 09:17:07 crc kubenswrapper[4985]: I0127 09:17:07.733444 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "persistence") pod "1c3a6629-6ee9-4274-aa58-1880fd4ae268" (UID: "1c3a6629-6ee9-4274-aa58-1880fd4ae268"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 27 09:17:07 crc kubenswrapper[4985]: I0127 09:17:07.747142 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c3a6629-6ee9-4274-aa58-1880fd4ae268-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "1c3a6629-6ee9-4274-aa58-1880fd4ae268" (UID: "1c3a6629-6ee9-4274-aa58-1880fd4ae268"). 
InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:17:07 crc kubenswrapper[4985]: I0127 09:17:07.799152 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c3a6629-6ee9-4274-aa58-1880fd4ae268-server-conf" (OuterVolumeSpecName: "server-conf") pod "1c3a6629-6ee9-4274-aa58-1880fd4ae268" (UID: "1c3a6629-6ee9-4274-aa58-1880fd4ae268"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:17:07 crc kubenswrapper[4985]: I0127 09:17:07.834348 4985 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1c3a6629-6ee9-4274-aa58-1880fd4ae268-server-conf\") on node \"crc\" DevicePath \"\"" Jan 27 09:17:07 crc kubenswrapper[4985]: I0127 09:17:07.834399 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9sdq4\" (UniqueName: \"kubernetes.io/projected/1c3a6629-6ee9-4274-aa58-1880fd4ae268-kube-api-access-9sdq4\") on node \"crc\" DevicePath \"\"" Jan 27 09:17:07 crc kubenswrapper[4985]: I0127 09:17:07.834438 4985 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Jan 27 09:17:07 crc kubenswrapper[4985]: I0127 09:17:07.834453 4985 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1c3a6629-6ee9-4274-aa58-1880fd4ae268-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 27 09:17:07 crc kubenswrapper[4985]: I0127 09:17:07.835363 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 09:17:07 crc kubenswrapper[4985]: I0127 09:17:07.849207 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 09:17:07 crc kubenswrapper[4985]: I0127 09:17:07.858124 4985 
operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Jan 27 09:17:07 crc kubenswrapper[4985]: I0127 09:17:07.863236 4985 scope.go:117] "RemoveContainer" containerID="cea1414e3344dd8ffd89d82148d82d04e5425f1dd069adc9bd7855c688b77608" Jan 27 09:17:07 crc kubenswrapper[4985]: I0127 09:17:07.879829 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 09:17:07 crc kubenswrapper[4985]: E0127 09:17:07.880462 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c3a6629-6ee9-4274-aa58-1880fd4ae268" containerName="rabbitmq" Jan 27 09:17:07 crc kubenswrapper[4985]: I0127 09:17:07.880486 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c3a6629-6ee9-4274-aa58-1880fd4ae268" containerName="rabbitmq" Jan 27 09:17:07 crc kubenswrapper[4985]: E0127 09:17:07.880542 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c6ceb6e-86fb-4658-93ed-8e66302f6396" containerName="rabbitmq" Jan 27 09:17:07 crc kubenswrapper[4985]: I0127 09:17:07.880550 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c6ceb6e-86fb-4658-93ed-8e66302f6396" containerName="rabbitmq" Jan 27 09:17:07 crc kubenswrapper[4985]: E0127 09:17:07.880567 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c6ceb6e-86fb-4658-93ed-8e66302f6396" containerName="setup-container" Jan 27 09:17:07 crc kubenswrapper[4985]: I0127 09:17:07.880575 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c6ceb6e-86fb-4658-93ed-8e66302f6396" containerName="setup-container" Jan 27 09:17:07 crc kubenswrapper[4985]: E0127 09:17:07.880604 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c3a6629-6ee9-4274-aa58-1880fd4ae268" containerName="setup-container" Jan 27 09:17:07 crc kubenswrapper[4985]: I0127 09:17:07.880611 4985 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="1c3a6629-6ee9-4274-aa58-1880fd4ae268" containerName="setup-container" Jan 27 09:17:07 crc kubenswrapper[4985]: I0127 09:17:07.880848 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c6ceb6e-86fb-4658-93ed-8e66302f6396" containerName="rabbitmq" Jan 27 09:17:07 crc kubenswrapper[4985]: I0127 09:17:07.880876 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c3a6629-6ee9-4274-aa58-1880fd4ae268" containerName="rabbitmq" Jan 27 09:17:07 crc kubenswrapper[4985]: I0127 09:17:07.882296 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 27 09:17:07 crc kubenswrapper[4985]: I0127 09:17:07.889123 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 27 09:17:07 crc kubenswrapper[4985]: I0127 09:17:07.889347 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 27 09:17:07 crc kubenswrapper[4985]: I0127 09:17:07.889534 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 27 09:17:07 crc kubenswrapper[4985]: I0127 09:17:07.889714 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 27 09:17:07 crc kubenswrapper[4985]: I0127 09:17:07.889832 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 27 09:17:07 crc kubenswrapper[4985]: I0127 09:17:07.890283 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 27 09:17:07 crc kubenswrapper[4985]: I0127 09:17:07.890588 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-p5fb2" Jan 27 09:17:07 crc kubenswrapper[4985]: I0127 09:17:07.947760 4985 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Jan 27 09:17:07 crc kubenswrapper[4985]: I0127 09:17:07.953196 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 09:17:07 crc kubenswrapper[4985]: I0127 09:17:07.973217 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c3a6629-6ee9-4274-aa58-1880fd4ae268-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "1c3a6629-6ee9-4274-aa58-1880fd4ae268" (UID: "1c3a6629-6ee9-4274-aa58-1880fd4ae268"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:17:07 crc kubenswrapper[4985]: I0127 09:17:07.993747 4985 scope.go:117] "RemoveContainer" containerID="c5a680f38b59cab040f2d532c022fb0a2d6ca690f4fb49a1994ff6bd6fe6fb54" Jan 27 09:17:08 crc kubenswrapper[4985]: I0127 09:17:08.027858 4985 scope.go:117] "RemoveContainer" containerID="b8cbb52e43286d41a8fc0f6dd52e4a0a4af64d7ac504aaa9ff6dd5929b0db17e" Jan 27 09:17:08 crc kubenswrapper[4985]: I0127 09:17:08.050660 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6bf353db-18e8-4814-835d-228e9d0aaec6-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"6bf353db-18e8-4814-835d-228e9d0aaec6\") " pod="openstack/rabbitmq-server-0" Jan 27 09:17:08 crc kubenswrapper[4985]: I0127 09:17:08.050822 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6bf353db-18e8-4814-835d-228e9d0aaec6-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"6bf353db-18e8-4814-835d-228e9d0aaec6\") " pod="openstack/rabbitmq-server-0" Jan 27 09:17:08 crc kubenswrapper[4985]: I0127 09:17:08.050986 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/6bf353db-18e8-4814-835d-228e9d0aaec6-config-data\") pod \"rabbitmq-server-0\" (UID: \"6bf353db-18e8-4814-835d-228e9d0aaec6\") " pod="openstack/rabbitmq-server-0" Jan 27 09:17:08 crc kubenswrapper[4985]: I0127 09:17:08.051065 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6bf353db-18e8-4814-835d-228e9d0aaec6-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"6bf353db-18e8-4814-835d-228e9d0aaec6\") " pod="openstack/rabbitmq-server-0" Jan 27 09:17:08 crc kubenswrapper[4985]: I0127 09:17:08.051213 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6bf353db-18e8-4814-835d-228e9d0aaec6-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"6bf353db-18e8-4814-835d-228e9d0aaec6\") " pod="openstack/rabbitmq-server-0" Jan 27 09:17:08 crc kubenswrapper[4985]: I0127 09:17:08.051367 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6bf353db-18e8-4814-835d-228e9d0aaec6-server-conf\") pod \"rabbitmq-server-0\" (UID: \"6bf353db-18e8-4814-835d-228e9d0aaec6\") " pod="openstack/rabbitmq-server-0" Jan 27 09:17:08 crc kubenswrapper[4985]: I0127 09:17:08.051544 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46dp2\" (UniqueName: \"kubernetes.io/projected/6bf353db-18e8-4814-835d-228e9d0aaec6-kube-api-access-46dp2\") pod \"rabbitmq-server-0\" (UID: \"6bf353db-18e8-4814-835d-228e9d0aaec6\") " pod="openstack/rabbitmq-server-0" Jan 27 09:17:08 crc kubenswrapper[4985]: I0127 09:17:08.051724 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/6bf353db-18e8-4814-835d-228e9d0aaec6-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"6bf353db-18e8-4814-835d-228e9d0aaec6\") " pod="openstack/rabbitmq-server-0" Jan 27 09:17:08 crc kubenswrapper[4985]: I0127 09:17:08.051832 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"6bf353db-18e8-4814-835d-228e9d0aaec6\") " pod="openstack/rabbitmq-server-0" Jan 27 09:17:08 crc kubenswrapper[4985]: I0127 09:17:08.051997 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6bf353db-18e8-4814-835d-228e9d0aaec6-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"6bf353db-18e8-4814-835d-228e9d0aaec6\") " pod="openstack/rabbitmq-server-0" Jan 27 09:17:08 crc kubenswrapper[4985]: I0127 09:17:08.052116 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6bf353db-18e8-4814-835d-228e9d0aaec6-pod-info\") pod \"rabbitmq-server-0\" (UID: \"6bf353db-18e8-4814-835d-228e9d0aaec6\") " pod="openstack/rabbitmq-server-0" Jan 27 09:17:08 crc kubenswrapper[4985]: I0127 09:17:08.052232 4985 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1c3a6629-6ee9-4274-aa58-1880fd4ae268-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 27 09:17:08 crc kubenswrapper[4985]: I0127 09:17:08.078148 4985 scope.go:117] "RemoveContainer" containerID="c5a680f38b59cab040f2d532c022fb0a2d6ca690f4fb49a1994ff6bd6fe6fb54" Jan 27 09:17:08 crc kubenswrapper[4985]: E0127 09:17:08.089811 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"c5a680f38b59cab040f2d532c022fb0a2d6ca690f4fb49a1994ff6bd6fe6fb54\": container with ID starting with c5a680f38b59cab040f2d532c022fb0a2d6ca690f4fb49a1994ff6bd6fe6fb54 not found: ID does not exist" containerID="c5a680f38b59cab040f2d532c022fb0a2d6ca690f4fb49a1994ff6bd6fe6fb54" Jan 27 09:17:08 crc kubenswrapper[4985]: I0127 09:17:08.089915 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5a680f38b59cab040f2d532c022fb0a2d6ca690f4fb49a1994ff6bd6fe6fb54"} err="failed to get container status \"c5a680f38b59cab040f2d532c022fb0a2d6ca690f4fb49a1994ff6bd6fe6fb54\": rpc error: code = NotFound desc = could not find container \"c5a680f38b59cab040f2d532c022fb0a2d6ca690f4fb49a1994ff6bd6fe6fb54\": container with ID starting with c5a680f38b59cab040f2d532c022fb0a2d6ca690f4fb49a1994ff6bd6fe6fb54 not found: ID does not exist" Jan 27 09:17:08 crc kubenswrapper[4985]: I0127 09:17:08.089945 4985 scope.go:117] "RemoveContainer" containerID="b8cbb52e43286d41a8fc0f6dd52e4a0a4af64d7ac504aaa9ff6dd5929b0db17e" Jan 27 09:17:08 crc kubenswrapper[4985]: E0127 09:17:08.090453 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8cbb52e43286d41a8fc0f6dd52e4a0a4af64d7ac504aaa9ff6dd5929b0db17e\": container with ID starting with b8cbb52e43286d41a8fc0f6dd52e4a0a4af64d7ac504aaa9ff6dd5929b0db17e not found: ID does not exist" containerID="b8cbb52e43286d41a8fc0f6dd52e4a0a4af64d7ac504aaa9ff6dd5929b0db17e" Jan 27 09:17:08 crc kubenswrapper[4985]: I0127 09:17:08.090499 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8cbb52e43286d41a8fc0f6dd52e4a0a4af64d7ac504aaa9ff6dd5929b0db17e"} err="failed to get container status \"b8cbb52e43286d41a8fc0f6dd52e4a0a4af64d7ac504aaa9ff6dd5929b0db17e\": rpc error: code = NotFound desc = could not find container \"b8cbb52e43286d41a8fc0f6dd52e4a0a4af64d7ac504aaa9ff6dd5929b0db17e\": container with ID 
starting with b8cbb52e43286d41a8fc0f6dd52e4a0a4af64d7ac504aaa9ff6dd5929b0db17e not found: ID does not exist" Jan 27 09:17:08 crc kubenswrapper[4985]: I0127 09:17:08.096215 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 09:17:08 crc kubenswrapper[4985]: I0127 09:17:08.108401 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 09:17:08 crc kubenswrapper[4985]: I0127 09:17:08.127610 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 09:17:08 crc kubenswrapper[4985]: I0127 09:17:08.130735 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 27 09:17:08 crc kubenswrapper[4985]: I0127 09:17:08.135384 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 27 09:17:08 crc kubenswrapper[4985]: I0127 09:17:08.135665 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 27 09:17:08 crc kubenswrapper[4985]: I0127 09:17:08.135724 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 27 09:17:08 crc kubenswrapper[4985]: I0127 09:17:08.135762 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 27 09:17:08 crc kubenswrapper[4985]: I0127 09:17:08.140066 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-86gqj" Jan 27 09:17:08 crc kubenswrapper[4985]: I0127 09:17:08.140175 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 27 09:17:08 crc kubenswrapper[4985]: I0127 09:17:08.140367 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 27 09:17:08 crc 
kubenswrapper[4985]: I0127 09:17:08.151805 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 09:17:08 crc kubenswrapper[4985]: I0127 09:17:08.154352 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6bf353db-18e8-4814-835d-228e9d0aaec6-pod-info\") pod \"rabbitmq-server-0\" (UID: \"6bf353db-18e8-4814-835d-228e9d0aaec6\") " pod="openstack/rabbitmq-server-0" Jan 27 09:17:08 crc kubenswrapper[4985]: I0127 09:17:08.154414 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6bf353db-18e8-4814-835d-228e9d0aaec6-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"6bf353db-18e8-4814-835d-228e9d0aaec6\") " pod="openstack/rabbitmq-server-0" Jan 27 09:17:08 crc kubenswrapper[4985]: I0127 09:17:08.154438 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6bf353db-18e8-4814-835d-228e9d0aaec6-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"6bf353db-18e8-4814-835d-228e9d0aaec6\") " pod="openstack/rabbitmq-server-0" Jan 27 09:17:08 crc kubenswrapper[4985]: I0127 09:17:08.154543 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6bf353db-18e8-4814-835d-228e9d0aaec6-config-data\") pod \"rabbitmq-server-0\" (UID: \"6bf353db-18e8-4814-835d-228e9d0aaec6\") " pod="openstack/rabbitmq-server-0" Jan 27 09:17:08 crc kubenswrapper[4985]: I0127 09:17:08.154580 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6bf353db-18e8-4814-835d-228e9d0aaec6-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"6bf353db-18e8-4814-835d-228e9d0aaec6\") " pod="openstack/rabbitmq-server-0" Jan 27 09:17:08 crc 
kubenswrapper[4985]: I0127 09:17:08.154610 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6bf353db-18e8-4814-835d-228e9d0aaec6-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"6bf353db-18e8-4814-835d-228e9d0aaec6\") " pod="openstack/rabbitmq-server-0" Jan 27 09:17:08 crc kubenswrapper[4985]: I0127 09:17:08.154635 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6bf353db-18e8-4814-835d-228e9d0aaec6-server-conf\") pod \"rabbitmq-server-0\" (UID: \"6bf353db-18e8-4814-835d-228e9d0aaec6\") " pod="openstack/rabbitmq-server-0" Jan 27 09:17:08 crc kubenswrapper[4985]: I0127 09:17:08.154683 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46dp2\" (UniqueName: \"kubernetes.io/projected/6bf353db-18e8-4814-835d-228e9d0aaec6-kube-api-access-46dp2\") pod \"rabbitmq-server-0\" (UID: \"6bf353db-18e8-4814-835d-228e9d0aaec6\") " pod="openstack/rabbitmq-server-0" Jan 27 09:17:08 crc kubenswrapper[4985]: I0127 09:17:08.154717 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6bf353db-18e8-4814-835d-228e9d0aaec6-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"6bf353db-18e8-4814-835d-228e9d0aaec6\") " pod="openstack/rabbitmq-server-0" Jan 27 09:17:08 crc kubenswrapper[4985]: I0127 09:17:08.154756 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"6bf353db-18e8-4814-835d-228e9d0aaec6\") " pod="openstack/rabbitmq-server-0" Jan 27 09:17:08 crc kubenswrapper[4985]: I0127 09:17:08.154776 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/6bf353db-18e8-4814-835d-228e9d0aaec6-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"6bf353db-18e8-4814-835d-228e9d0aaec6\") " pod="openstack/rabbitmq-server-0" Jan 27 09:17:08 crc kubenswrapper[4985]: I0127 09:17:08.155880 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6bf353db-18e8-4814-835d-228e9d0aaec6-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"6bf353db-18e8-4814-835d-228e9d0aaec6\") " pod="openstack/rabbitmq-server-0" Jan 27 09:17:08 crc kubenswrapper[4985]: I0127 09:17:08.157302 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6bf353db-18e8-4814-835d-228e9d0aaec6-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"6bf353db-18e8-4814-835d-228e9d0aaec6\") " pod="openstack/rabbitmq-server-0" Jan 27 09:17:08 crc kubenswrapper[4985]: I0127 09:17:08.158596 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6bf353db-18e8-4814-835d-228e9d0aaec6-config-data\") pod \"rabbitmq-server-0\" (UID: \"6bf353db-18e8-4814-835d-228e9d0aaec6\") " pod="openstack/rabbitmq-server-0" Jan 27 09:17:08 crc kubenswrapper[4985]: I0127 09:17:08.159826 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6bf353db-18e8-4814-835d-228e9d0aaec6-server-conf\") pod \"rabbitmq-server-0\" (UID: \"6bf353db-18e8-4814-835d-228e9d0aaec6\") " pod="openstack/rabbitmq-server-0" Jan 27 09:17:08 crc kubenswrapper[4985]: I0127 09:17:08.160226 4985 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"6bf353db-18e8-4814-835d-228e9d0aaec6\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/rabbitmq-server-0" Jan 
27 09:17:08 crc kubenswrapper[4985]: I0127 09:17:08.160388 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6bf353db-18e8-4814-835d-228e9d0aaec6-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"6bf353db-18e8-4814-835d-228e9d0aaec6\") " pod="openstack/rabbitmq-server-0" Jan 27 09:17:08 crc kubenswrapper[4985]: I0127 09:17:08.175339 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6bf353db-18e8-4814-835d-228e9d0aaec6-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"6bf353db-18e8-4814-835d-228e9d0aaec6\") " pod="openstack/rabbitmq-server-0" Jan 27 09:17:08 crc kubenswrapper[4985]: I0127 09:17:08.175931 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6bf353db-18e8-4814-835d-228e9d0aaec6-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"6bf353db-18e8-4814-835d-228e9d0aaec6\") " pod="openstack/rabbitmq-server-0" Jan 27 09:17:08 crc kubenswrapper[4985]: I0127 09:17:08.176120 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6bf353db-18e8-4814-835d-228e9d0aaec6-pod-info\") pod \"rabbitmq-server-0\" (UID: \"6bf353db-18e8-4814-835d-228e9d0aaec6\") " pod="openstack/rabbitmq-server-0" Jan 27 09:17:08 crc kubenswrapper[4985]: I0127 09:17:08.176135 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6bf353db-18e8-4814-835d-228e9d0aaec6-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"6bf353db-18e8-4814-835d-228e9d0aaec6\") " pod="openstack/rabbitmq-server-0" Jan 27 09:17:08 crc kubenswrapper[4985]: I0127 09:17:08.181494 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46dp2\" (UniqueName: 
\"kubernetes.io/projected/6bf353db-18e8-4814-835d-228e9d0aaec6-kube-api-access-46dp2\") pod \"rabbitmq-server-0\" (UID: \"6bf353db-18e8-4814-835d-228e9d0aaec6\") " pod="openstack/rabbitmq-server-0" Jan 27 09:17:08 crc kubenswrapper[4985]: I0127 09:17:08.197336 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"6bf353db-18e8-4814-835d-228e9d0aaec6\") " pod="openstack/rabbitmq-server-0" Jan 27 09:17:08 crc kubenswrapper[4985]: I0127 09:17:08.257880 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f08b3701-2ee6-4de9-8d6b-8191a8ff95d3-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f08b3701-2ee6-4de9-8d6b-8191a8ff95d3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 09:17:08 crc kubenswrapper[4985]: I0127 09:17:08.257988 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f08b3701-2ee6-4de9-8d6b-8191a8ff95d3-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f08b3701-2ee6-4de9-8d6b-8191a8ff95d3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 09:17:08 crc kubenswrapper[4985]: I0127 09:17:08.258013 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f08b3701-2ee6-4de9-8d6b-8191a8ff95d3-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f08b3701-2ee6-4de9-8d6b-8191a8ff95d3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 09:17:08 crc kubenswrapper[4985]: I0127 09:17:08.258206 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f08b3701-2ee6-4de9-8d6b-8191a8ff95d3-plugins-conf\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"f08b3701-2ee6-4de9-8d6b-8191a8ff95d3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 09:17:08 crc kubenswrapper[4985]: I0127 09:17:08.258371 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f08b3701-2ee6-4de9-8d6b-8191a8ff95d3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 09:17:08 crc kubenswrapper[4985]: I0127 09:17:08.258432 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f08b3701-2ee6-4de9-8d6b-8191a8ff95d3-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f08b3701-2ee6-4de9-8d6b-8191a8ff95d3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 09:17:08 crc kubenswrapper[4985]: I0127 09:17:08.258669 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4g8hl\" (UniqueName: \"kubernetes.io/projected/f08b3701-2ee6-4de9-8d6b-8191a8ff95d3-kube-api-access-4g8hl\") pod \"rabbitmq-cell1-server-0\" (UID: \"f08b3701-2ee6-4de9-8d6b-8191a8ff95d3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 09:17:08 crc kubenswrapper[4985]: I0127 09:17:08.259150 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f08b3701-2ee6-4de9-8d6b-8191a8ff95d3-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f08b3701-2ee6-4de9-8d6b-8191a8ff95d3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 09:17:08 crc kubenswrapper[4985]: I0127 09:17:08.259219 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f08b3701-2ee6-4de9-8d6b-8191a8ff95d3-rabbitmq-plugins\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"f08b3701-2ee6-4de9-8d6b-8191a8ff95d3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 09:17:08 crc kubenswrapper[4985]: I0127 09:17:08.259327 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f08b3701-2ee6-4de9-8d6b-8191a8ff95d3-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f08b3701-2ee6-4de9-8d6b-8191a8ff95d3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 09:17:08 crc kubenswrapper[4985]: I0127 09:17:08.259406 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f08b3701-2ee6-4de9-8d6b-8191a8ff95d3-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f08b3701-2ee6-4de9-8d6b-8191a8ff95d3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 09:17:08 crc kubenswrapper[4985]: I0127 09:17:08.278205 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 27 09:17:08 crc kubenswrapper[4985]: I0127 09:17:08.361160 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f08b3701-2ee6-4de9-8d6b-8191a8ff95d3-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f08b3701-2ee6-4de9-8d6b-8191a8ff95d3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 09:17:08 crc kubenswrapper[4985]: I0127 09:17:08.361220 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f08b3701-2ee6-4de9-8d6b-8191a8ff95d3-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f08b3701-2ee6-4de9-8d6b-8191a8ff95d3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 09:17:08 crc kubenswrapper[4985]: I0127 09:17:08.361241 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f08b3701-2ee6-4de9-8d6b-8191a8ff95d3-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f08b3701-2ee6-4de9-8d6b-8191a8ff95d3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 09:17:08 crc kubenswrapper[4985]: I0127 09:17:08.361260 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f08b3701-2ee6-4de9-8d6b-8191a8ff95d3-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f08b3701-2ee6-4de9-8d6b-8191a8ff95d3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 09:17:08 crc kubenswrapper[4985]: I0127 09:17:08.361300 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f08b3701-2ee6-4de9-8d6b-8191a8ff95d3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 09:17:08 crc kubenswrapper[4985]: I0127 09:17:08.361325 4985 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f08b3701-2ee6-4de9-8d6b-8191a8ff95d3-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f08b3701-2ee6-4de9-8d6b-8191a8ff95d3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 09:17:08 crc kubenswrapper[4985]: I0127 09:17:08.361378 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4g8hl\" (UniqueName: \"kubernetes.io/projected/f08b3701-2ee6-4de9-8d6b-8191a8ff95d3-kube-api-access-4g8hl\") pod \"rabbitmq-cell1-server-0\" (UID: \"f08b3701-2ee6-4de9-8d6b-8191a8ff95d3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 09:17:08 crc kubenswrapper[4985]: I0127 09:17:08.361426 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f08b3701-2ee6-4de9-8d6b-8191a8ff95d3-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f08b3701-2ee6-4de9-8d6b-8191a8ff95d3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 09:17:08 crc kubenswrapper[4985]: I0127 09:17:08.361442 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f08b3701-2ee6-4de9-8d6b-8191a8ff95d3-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f08b3701-2ee6-4de9-8d6b-8191a8ff95d3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 09:17:08 crc kubenswrapper[4985]: I0127 09:17:08.361474 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f08b3701-2ee6-4de9-8d6b-8191a8ff95d3-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f08b3701-2ee6-4de9-8d6b-8191a8ff95d3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 09:17:08 crc kubenswrapper[4985]: I0127 09:17:08.361490 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f08b3701-2ee6-4de9-8d6b-8191a8ff95d3-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f08b3701-2ee6-4de9-8d6b-8191a8ff95d3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 09:17:08 crc kubenswrapper[4985]: I0127 09:17:08.362532 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f08b3701-2ee6-4de9-8d6b-8191a8ff95d3-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f08b3701-2ee6-4de9-8d6b-8191a8ff95d3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 09:17:08 crc kubenswrapper[4985]: I0127 09:17:08.362832 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f08b3701-2ee6-4de9-8d6b-8191a8ff95d3-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f08b3701-2ee6-4de9-8d6b-8191a8ff95d3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 09:17:08 crc kubenswrapper[4985]: I0127 09:17:08.363083 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f08b3701-2ee6-4de9-8d6b-8191a8ff95d3-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f08b3701-2ee6-4de9-8d6b-8191a8ff95d3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 09:17:08 crc kubenswrapper[4985]: I0127 09:17:08.363457 4985 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f08b3701-2ee6-4de9-8d6b-8191a8ff95d3\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/rabbitmq-cell1-server-0" Jan 27 09:17:08 crc kubenswrapper[4985]: I0127 09:17:08.363736 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/f08b3701-2ee6-4de9-8d6b-8191a8ff95d3-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f08b3701-2ee6-4de9-8d6b-8191a8ff95d3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 09:17:08 crc kubenswrapper[4985]: I0127 09:17:08.363745 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f08b3701-2ee6-4de9-8d6b-8191a8ff95d3-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f08b3701-2ee6-4de9-8d6b-8191a8ff95d3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 09:17:08 crc kubenswrapper[4985]: I0127 09:17:08.368297 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f08b3701-2ee6-4de9-8d6b-8191a8ff95d3-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f08b3701-2ee6-4de9-8d6b-8191a8ff95d3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 09:17:08 crc kubenswrapper[4985]: I0127 09:17:08.368571 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f08b3701-2ee6-4de9-8d6b-8191a8ff95d3-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f08b3701-2ee6-4de9-8d6b-8191a8ff95d3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 09:17:08 crc kubenswrapper[4985]: I0127 09:17:08.368738 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f08b3701-2ee6-4de9-8d6b-8191a8ff95d3-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f08b3701-2ee6-4de9-8d6b-8191a8ff95d3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 09:17:08 crc kubenswrapper[4985]: I0127 09:17:08.368939 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f08b3701-2ee6-4de9-8d6b-8191a8ff95d3-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"f08b3701-2ee6-4de9-8d6b-8191a8ff95d3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 09:17:08 crc kubenswrapper[4985]: I0127 09:17:08.381502 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4g8hl\" (UniqueName: \"kubernetes.io/projected/f08b3701-2ee6-4de9-8d6b-8191a8ff95d3-kube-api-access-4g8hl\") pod \"rabbitmq-cell1-server-0\" (UID: \"f08b3701-2ee6-4de9-8d6b-8191a8ff95d3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 09:17:08 crc kubenswrapper[4985]: I0127 09:17:08.432475 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f08b3701-2ee6-4de9-8d6b-8191a8ff95d3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 09:17:08 crc kubenswrapper[4985]: I0127 09:17:08.458128 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 27 09:17:08 crc kubenswrapper[4985]: I0127 09:17:08.466502 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c3a6629-6ee9-4274-aa58-1880fd4ae268" path="/var/lib/kubelet/pods/1c3a6629-6ee9-4274-aa58-1880fd4ae268/volumes" Jan 27 09:17:08 crc kubenswrapper[4985]: I0127 09:17:08.469040 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c6ceb6e-86fb-4658-93ed-8e66302f6396" path="/var/lib/kubelet/pods/6c6ceb6e-86fb-4658-93ed-8e66302f6396/volumes" Jan 27 09:17:08 crc kubenswrapper[4985]: I0127 09:17:08.778973 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 09:17:08 crc kubenswrapper[4985]: I0127 09:17:08.975796 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 09:17:08 crc kubenswrapper[4985]: W0127 09:17:08.977188 4985 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf08b3701_2ee6_4de9_8d6b_8191a8ff95d3.slice/crio-46ab58cb3ffa07d672daa918d43322d0b2aa94d31d0ab72b58d0d49699477a89 WatchSource:0}: Error finding container 46ab58cb3ffa07d672daa918d43322d0b2aa94d31d0ab72b58d0d49699477a89: Status 404 returned error can't find the container with id 46ab58cb3ffa07d672daa918d43322d0b2aa94d31d0ab72b58d0d49699477a89 Jan 27 09:17:09 crc kubenswrapper[4985]: I0127 09:17:09.766422 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6bf353db-18e8-4814-835d-228e9d0aaec6","Type":"ContainerStarted","Data":"9e12e3c9896f822cb495132b11679516444f020db58b7d1bf49cb95a1ab615f9"} Jan 27 09:17:09 crc kubenswrapper[4985]: I0127 09:17:09.768630 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f08b3701-2ee6-4de9-8d6b-8191a8ff95d3","Type":"ContainerStarted","Data":"46ab58cb3ffa07d672daa918d43322d0b2aa94d31d0ab72b58d0d49699477a89"} Jan 27 09:17:10 crc kubenswrapper[4985]: I0127 09:17:10.783192 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6bf353db-18e8-4814-835d-228e9d0aaec6","Type":"ContainerStarted","Data":"1d327342504aad0fa9826531bd895fff2f16e41926516ca9620ff634b9656eff"} Jan 27 09:17:10 crc kubenswrapper[4985]: I0127 09:17:10.787165 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f08b3701-2ee6-4de9-8d6b-8191a8ff95d3","Type":"ContainerStarted","Data":"d556f72dedf051e9ae778351fc0de5c805d7e0f4de1155f69344ddf47829d2c4"} Jan 27 09:17:11 crc kubenswrapper[4985]: I0127 09:17:11.123628 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8595b94875-djdnf"] Jan 27 09:17:11 crc kubenswrapper[4985]: I0127 09:17:11.125406 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8595b94875-djdnf" Jan 27 09:17:11 crc kubenswrapper[4985]: I0127 09:17:11.127756 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Jan 27 09:17:11 crc kubenswrapper[4985]: I0127 09:17:11.144116 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8595b94875-djdnf"] Jan 27 09:17:11 crc kubenswrapper[4985]: I0127 09:17:11.222105 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/51299650-d293-4d85-b83e-f596a7a0b1c2-ovsdbserver-nb\") pod \"dnsmasq-dns-8595b94875-djdnf\" (UID: \"51299650-d293-4d85-b83e-f596a7a0b1c2\") " pod="openstack/dnsmasq-dns-8595b94875-djdnf" Jan 27 09:17:11 crc kubenswrapper[4985]: I0127 09:17:11.222188 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/51299650-d293-4d85-b83e-f596a7a0b1c2-dns-swift-storage-0\") pod \"dnsmasq-dns-8595b94875-djdnf\" (UID: \"51299650-d293-4d85-b83e-f596a7a0b1c2\") " pod="openstack/dnsmasq-dns-8595b94875-djdnf" Jan 27 09:17:11 crc kubenswrapper[4985]: I0127 09:17:11.222224 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xl5s\" (UniqueName: \"kubernetes.io/projected/51299650-d293-4d85-b83e-f596a7a0b1c2-kube-api-access-7xl5s\") pod \"dnsmasq-dns-8595b94875-djdnf\" (UID: \"51299650-d293-4d85-b83e-f596a7a0b1c2\") " pod="openstack/dnsmasq-dns-8595b94875-djdnf" Jan 27 09:17:11 crc kubenswrapper[4985]: I0127 09:17:11.222267 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/51299650-d293-4d85-b83e-f596a7a0b1c2-openstack-edpm-ipam\") pod \"dnsmasq-dns-8595b94875-djdnf\" (UID: 
\"51299650-d293-4d85-b83e-f596a7a0b1c2\") " pod="openstack/dnsmasq-dns-8595b94875-djdnf" Jan 27 09:17:11 crc kubenswrapper[4985]: I0127 09:17:11.222290 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51299650-d293-4d85-b83e-f596a7a0b1c2-config\") pod \"dnsmasq-dns-8595b94875-djdnf\" (UID: \"51299650-d293-4d85-b83e-f596a7a0b1c2\") " pod="openstack/dnsmasq-dns-8595b94875-djdnf" Jan 27 09:17:11 crc kubenswrapper[4985]: I0127 09:17:11.222393 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/51299650-d293-4d85-b83e-f596a7a0b1c2-ovsdbserver-sb\") pod \"dnsmasq-dns-8595b94875-djdnf\" (UID: \"51299650-d293-4d85-b83e-f596a7a0b1c2\") " pod="openstack/dnsmasq-dns-8595b94875-djdnf" Jan 27 09:17:11 crc kubenswrapper[4985]: I0127 09:17:11.222491 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/51299650-d293-4d85-b83e-f596a7a0b1c2-dns-svc\") pod \"dnsmasq-dns-8595b94875-djdnf\" (UID: \"51299650-d293-4d85-b83e-f596a7a0b1c2\") " pod="openstack/dnsmasq-dns-8595b94875-djdnf" Jan 27 09:17:11 crc kubenswrapper[4985]: I0127 09:17:11.225049 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8595b94875-djdnf"] Jan 27 09:17:11 crc kubenswrapper[4985]: E0127 09:17:11.225712 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config dns-svc dns-swift-storage-0 kube-api-access-7xl5s openstack-edpm-ipam ovsdbserver-nb ovsdbserver-sb], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-8595b94875-djdnf" podUID="51299650-d293-4d85-b83e-f596a7a0b1c2" Jan 27 09:17:11 crc kubenswrapper[4985]: I0127 09:17:11.254191 4985 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/dnsmasq-dns-d7b79b84c-wkvsf"] Jan 27 09:17:11 crc kubenswrapper[4985]: I0127 09:17:11.255818 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d7b79b84c-wkvsf" Jan 27 09:17:11 crc kubenswrapper[4985]: I0127 09:17:11.278309 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d7b79b84c-wkvsf"] Jan 27 09:17:11 crc kubenswrapper[4985]: I0127 09:17:11.324100 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/643859d4-9ed0-4fc7-9a10-01d0b0e19410-dns-svc\") pod \"dnsmasq-dns-d7b79b84c-wkvsf\" (UID: \"643859d4-9ed0-4fc7-9a10-01d0b0e19410\") " pod="openstack/dnsmasq-dns-d7b79b84c-wkvsf" Jan 27 09:17:11 crc kubenswrapper[4985]: I0127 09:17:11.324194 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/51299650-d293-4d85-b83e-f596a7a0b1c2-ovsdbserver-sb\") pod \"dnsmasq-dns-8595b94875-djdnf\" (UID: \"51299650-d293-4d85-b83e-f596a7a0b1c2\") " pod="openstack/dnsmasq-dns-8595b94875-djdnf" Jan 27 09:17:11 crc kubenswrapper[4985]: I0127 09:17:11.324221 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/643859d4-9ed0-4fc7-9a10-01d0b0e19410-ovsdbserver-nb\") pod \"dnsmasq-dns-d7b79b84c-wkvsf\" (UID: \"643859d4-9ed0-4fc7-9a10-01d0b0e19410\") " pod="openstack/dnsmasq-dns-d7b79b84c-wkvsf" Jan 27 09:17:11 crc kubenswrapper[4985]: I0127 09:17:11.324293 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/643859d4-9ed0-4fc7-9a10-01d0b0e19410-config\") pod \"dnsmasq-dns-d7b79b84c-wkvsf\" (UID: \"643859d4-9ed0-4fc7-9a10-01d0b0e19410\") " pod="openstack/dnsmasq-dns-d7b79b84c-wkvsf" Jan 27 09:17:11 crc 
kubenswrapper[4985]: I0127 09:17:11.324377 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/51299650-d293-4d85-b83e-f596a7a0b1c2-dns-svc\") pod \"dnsmasq-dns-8595b94875-djdnf\" (UID: \"51299650-d293-4d85-b83e-f596a7a0b1c2\") " pod="openstack/dnsmasq-dns-8595b94875-djdnf" Jan 27 09:17:11 crc kubenswrapper[4985]: I0127 09:17:11.324426 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/643859d4-9ed0-4fc7-9a10-01d0b0e19410-ovsdbserver-sb\") pod \"dnsmasq-dns-d7b79b84c-wkvsf\" (UID: \"643859d4-9ed0-4fc7-9a10-01d0b0e19410\") " pod="openstack/dnsmasq-dns-d7b79b84c-wkvsf" Jan 27 09:17:11 crc kubenswrapper[4985]: I0127 09:17:11.324471 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/51299650-d293-4d85-b83e-f596a7a0b1c2-ovsdbserver-nb\") pod \"dnsmasq-dns-8595b94875-djdnf\" (UID: \"51299650-d293-4d85-b83e-f596a7a0b1c2\") " pod="openstack/dnsmasq-dns-8595b94875-djdnf" Jan 27 09:17:11 crc kubenswrapper[4985]: I0127 09:17:11.324535 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/643859d4-9ed0-4fc7-9a10-01d0b0e19410-dns-swift-storage-0\") pod \"dnsmasq-dns-d7b79b84c-wkvsf\" (UID: \"643859d4-9ed0-4fc7-9a10-01d0b0e19410\") " pod="openstack/dnsmasq-dns-d7b79b84c-wkvsf" Jan 27 09:17:11 crc kubenswrapper[4985]: I0127 09:17:11.324689 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/51299650-d293-4d85-b83e-f596a7a0b1c2-dns-swift-storage-0\") pod \"dnsmasq-dns-8595b94875-djdnf\" (UID: \"51299650-d293-4d85-b83e-f596a7a0b1c2\") " pod="openstack/dnsmasq-dns-8595b94875-djdnf" Jan 27 09:17:11 crc 
kubenswrapper[4985]: I0127 09:17:11.324767 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xl5s\" (UniqueName: \"kubernetes.io/projected/51299650-d293-4d85-b83e-f596a7a0b1c2-kube-api-access-7xl5s\") pod \"dnsmasq-dns-8595b94875-djdnf\" (UID: \"51299650-d293-4d85-b83e-f596a7a0b1c2\") " pod="openstack/dnsmasq-dns-8595b94875-djdnf" Jan 27 09:17:11 crc kubenswrapper[4985]: I0127 09:17:11.324800 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/643859d4-9ed0-4fc7-9a10-01d0b0e19410-openstack-edpm-ipam\") pod \"dnsmasq-dns-d7b79b84c-wkvsf\" (UID: \"643859d4-9ed0-4fc7-9a10-01d0b0e19410\") " pod="openstack/dnsmasq-dns-d7b79b84c-wkvsf" Jan 27 09:17:11 crc kubenswrapper[4985]: I0127 09:17:11.324872 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/51299650-d293-4d85-b83e-f596a7a0b1c2-openstack-edpm-ipam\") pod \"dnsmasq-dns-8595b94875-djdnf\" (UID: \"51299650-d293-4d85-b83e-f596a7a0b1c2\") " pod="openstack/dnsmasq-dns-8595b94875-djdnf" Jan 27 09:17:11 crc kubenswrapper[4985]: I0127 09:17:11.324907 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51299650-d293-4d85-b83e-f596a7a0b1c2-config\") pod \"dnsmasq-dns-8595b94875-djdnf\" (UID: \"51299650-d293-4d85-b83e-f596a7a0b1c2\") " pod="openstack/dnsmasq-dns-8595b94875-djdnf" Jan 27 09:17:11 crc kubenswrapper[4985]: I0127 09:17:11.324973 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jg4fh\" (UniqueName: \"kubernetes.io/projected/643859d4-9ed0-4fc7-9a10-01d0b0e19410-kube-api-access-jg4fh\") pod \"dnsmasq-dns-d7b79b84c-wkvsf\" (UID: \"643859d4-9ed0-4fc7-9a10-01d0b0e19410\") " pod="openstack/dnsmasq-dns-d7b79b84c-wkvsf" Jan 27 
09:17:11 crc kubenswrapper[4985]: I0127 09:17:11.325367 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/51299650-d293-4d85-b83e-f596a7a0b1c2-dns-svc\") pod \"dnsmasq-dns-8595b94875-djdnf\" (UID: \"51299650-d293-4d85-b83e-f596a7a0b1c2\") " pod="openstack/dnsmasq-dns-8595b94875-djdnf" Jan 27 09:17:11 crc kubenswrapper[4985]: I0127 09:17:11.325405 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/51299650-d293-4d85-b83e-f596a7a0b1c2-dns-swift-storage-0\") pod \"dnsmasq-dns-8595b94875-djdnf\" (UID: \"51299650-d293-4d85-b83e-f596a7a0b1c2\") " pod="openstack/dnsmasq-dns-8595b94875-djdnf" Jan 27 09:17:11 crc kubenswrapper[4985]: I0127 09:17:11.325897 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/51299650-d293-4d85-b83e-f596a7a0b1c2-openstack-edpm-ipam\") pod \"dnsmasq-dns-8595b94875-djdnf\" (UID: \"51299650-d293-4d85-b83e-f596a7a0b1c2\") " pod="openstack/dnsmasq-dns-8595b94875-djdnf" Jan 27 09:17:11 crc kubenswrapper[4985]: I0127 09:17:11.325921 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51299650-d293-4d85-b83e-f596a7a0b1c2-config\") pod \"dnsmasq-dns-8595b94875-djdnf\" (UID: \"51299650-d293-4d85-b83e-f596a7a0b1c2\") " pod="openstack/dnsmasq-dns-8595b94875-djdnf" Jan 27 09:17:11 crc kubenswrapper[4985]: I0127 09:17:11.326030 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/51299650-d293-4d85-b83e-f596a7a0b1c2-ovsdbserver-sb\") pod \"dnsmasq-dns-8595b94875-djdnf\" (UID: \"51299650-d293-4d85-b83e-f596a7a0b1c2\") " pod="openstack/dnsmasq-dns-8595b94875-djdnf" Jan 27 09:17:11 crc kubenswrapper[4985]: I0127 09:17:11.326153 4985 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/51299650-d293-4d85-b83e-f596a7a0b1c2-ovsdbserver-nb\") pod \"dnsmasq-dns-8595b94875-djdnf\" (UID: \"51299650-d293-4d85-b83e-f596a7a0b1c2\") " pod="openstack/dnsmasq-dns-8595b94875-djdnf" Jan 27 09:17:11 crc kubenswrapper[4985]: I0127 09:17:11.346455 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xl5s\" (UniqueName: \"kubernetes.io/projected/51299650-d293-4d85-b83e-f596a7a0b1c2-kube-api-access-7xl5s\") pod \"dnsmasq-dns-8595b94875-djdnf\" (UID: \"51299650-d293-4d85-b83e-f596a7a0b1c2\") " pod="openstack/dnsmasq-dns-8595b94875-djdnf" Jan 27 09:17:11 crc kubenswrapper[4985]: I0127 09:17:11.426959 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jg4fh\" (UniqueName: \"kubernetes.io/projected/643859d4-9ed0-4fc7-9a10-01d0b0e19410-kube-api-access-jg4fh\") pod \"dnsmasq-dns-d7b79b84c-wkvsf\" (UID: \"643859d4-9ed0-4fc7-9a10-01d0b0e19410\") " pod="openstack/dnsmasq-dns-d7b79b84c-wkvsf" Jan 27 09:17:11 crc kubenswrapper[4985]: I0127 09:17:11.427327 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/643859d4-9ed0-4fc7-9a10-01d0b0e19410-dns-svc\") pod \"dnsmasq-dns-d7b79b84c-wkvsf\" (UID: \"643859d4-9ed0-4fc7-9a10-01d0b0e19410\") " pod="openstack/dnsmasq-dns-d7b79b84c-wkvsf" Jan 27 09:17:11 crc kubenswrapper[4985]: I0127 09:17:11.427436 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/643859d4-9ed0-4fc7-9a10-01d0b0e19410-ovsdbserver-nb\") pod \"dnsmasq-dns-d7b79b84c-wkvsf\" (UID: \"643859d4-9ed0-4fc7-9a10-01d0b0e19410\") " pod="openstack/dnsmasq-dns-d7b79b84c-wkvsf" Jan 27 09:17:11 crc kubenswrapper[4985]: I0127 09:17:11.427559 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/643859d4-9ed0-4fc7-9a10-01d0b0e19410-config\") pod \"dnsmasq-dns-d7b79b84c-wkvsf\" (UID: \"643859d4-9ed0-4fc7-9a10-01d0b0e19410\") " pod="openstack/dnsmasq-dns-d7b79b84c-wkvsf" Jan 27 09:17:11 crc kubenswrapper[4985]: I0127 09:17:11.427716 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/643859d4-9ed0-4fc7-9a10-01d0b0e19410-ovsdbserver-sb\") pod \"dnsmasq-dns-d7b79b84c-wkvsf\" (UID: \"643859d4-9ed0-4fc7-9a10-01d0b0e19410\") " pod="openstack/dnsmasq-dns-d7b79b84c-wkvsf" Jan 27 09:17:11 crc kubenswrapper[4985]: I0127 09:17:11.427846 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/643859d4-9ed0-4fc7-9a10-01d0b0e19410-dns-swift-storage-0\") pod \"dnsmasq-dns-d7b79b84c-wkvsf\" (UID: \"643859d4-9ed0-4fc7-9a10-01d0b0e19410\") " pod="openstack/dnsmasq-dns-d7b79b84c-wkvsf" Jan 27 09:17:11 crc kubenswrapper[4985]: I0127 09:17:11.427951 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/643859d4-9ed0-4fc7-9a10-01d0b0e19410-openstack-edpm-ipam\") pod \"dnsmasq-dns-d7b79b84c-wkvsf\" (UID: \"643859d4-9ed0-4fc7-9a10-01d0b0e19410\") " pod="openstack/dnsmasq-dns-d7b79b84c-wkvsf" Jan 27 09:17:11 crc kubenswrapper[4985]: I0127 09:17:11.428243 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/643859d4-9ed0-4fc7-9a10-01d0b0e19410-dns-svc\") pod \"dnsmasq-dns-d7b79b84c-wkvsf\" (UID: \"643859d4-9ed0-4fc7-9a10-01d0b0e19410\") " pod="openstack/dnsmasq-dns-d7b79b84c-wkvsf" Jan 27 09:17:11 crc kubenswrapper[4985]: I0127 09:17:11.428344 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/643859d4-9ed0-4fc7-9a10-01d0b0e19410-config\") pod 
\"dnsmasq-dns-d7b79b84c-wkvsf\" (UID: \"643859d4-9ed0-4fc7-9a10-01d0b0e19410\") " pod="openstack/dnsmasq-dns-d7b79b84c-wkvsf" Jan 27 09:17:11 crc kubenswrapper[4985]: I0127 09:17:11.429185 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/643859d4-9ed0-4fc7-9a10-01d0b0e19410-dns-swift-storage-0\") pod \"dnsmasq-dns-d7b79b84c-wkvsf\" (UID: \"643859d4-9ed0-4fc7-9a10-01d0b0e19410\") " pod="openstack/dnsmasq-dns-d7b79b84c-wkvsf" Jan 27 09:17:11 crc kubenswrapper[4985]: I0127 09:17:11.429233 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/643859d4-9ed0-4fc7-9a10-01d0b0e19410-openstack-edpm-ipam\") pod \"dnsmasq-dns-d7b79b84c-wkvsf\" (UID: \"643859d4-9ed0-4fc7-9a10-01d0b0e19410\") " pod="openstack/dnsmasq-dns-d7b79b84c-wkvsf" Jan 27 09:17:11 crc kubenswrapper[4985]: I0127 09:17:11.429390 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/643859d4-9ed0-4fc7-9a10-01d0b0e19410-ovsdbserver-nb\") pod \"dnsmasq-dns-d7b79b84c-wkvsf\" (UID: \"643859d4-9ed0-4fc7-9a10-01d0b0e19410\") " pod="openstack/dnsmasq-dns-d7b79b84c-wkvsf" Jan 27 09:17:11 crc kubenswrapper[4985]: I0127 09:17:11.429730 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/643859d4-9ed0-4fc7-9a10-01d0b0e19410-ovsdbserver-sb\") pod \"dnsmasq-dns-d7b79b84c-wkvsf\" (UID: \"643859d4-9ed0-4fc7-9a10-01d0b0e19410\") " pod="openstack/dnsmasq-dns-d7b79b84c-wkvsf" Jan 27 09:17:11 crc kubenswrapper[4985]: I0127 09:17:11.447309 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jg4fh\" (UniqueName: \"kubernetes.io/projected/643859d4-9ed0-4fc7-9a10-01d0b0e19410-kube-api-access-jg4fh\") pod \"dnsmasq-dns-d7b79b84c-wkvsf\" (UID: 
\"643859d4-9ed0-4fc7-9a10-01d0b0e19410\") " pod="openstack/dnsmasq-dns-d7b79b84c-wkvsf" Jan 27 09:17:11 crc kubenswrapper[4985]: I0127 09:17:11.572715 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d7b79b84c-wkvsf" Jan 27 09:17:11 crc kubenswrapper[4985]: I0127 09:17:11.796441 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8595b94875-djdnf" Jan 27 09:17:11 crc kubenswrapper[4985]: I0127 09:17:11.809669 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8595b94875-djdnf" Jan 27 09:17:11 crc kubenswrapper[4985]: I0127 09:17:11.828804 4985 patch_prober.go:28] interesting pod/machine-config-daemon-lp9n5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 09:17:11 crc kubenswrapper[4985]: I0127 09:17:11.828863 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 09:17:11 crc kubenswrapper[4985]: I0127 09:17:11.835425 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7xl5s\" (UniqueName: \"kubernetes.io/projected/51299650-d293-4d85-b83e-f596a7a0b1c2-kube-api-access-7xl5s\") pod \"51299650-d293-4d85-b83e-f596a7a0b1c2\" (UID: \"51299650-d293-4d85-b83e-f596a7a0b1c2\") " Jan 27 09:17:11 crc kubenswrapper[4985]: I0127 09:17:11.835529 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/configmap/51299650-d293-4d85-b83e-f596a7a0b1c2-openstack-edpm-ipam\") pod \"51299650-d293-4d85-b83e-f596a7a0b1c2\" (UID: \"51299650-d293-4d85-b83e-f596a7a0b1c2\") " Jan 27 09:17:11 crc kubenswrapper[4985]: I0127 09:17:11.835599 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51299650-d293-4d85-b83e-f596a7a0b1c2-config\") pod \"51299650-d293-4d85-b83e-f596a7a0b1c2\" (UID: \"51299650-d293-4d85-b83e-f596a7a0b1c2\") " Jan 27 09:17:11 crc kubenswrapper[4985]: I0127 09:17:11.835635 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/51299650-d293-4d85-b83e-f596a7a0b1c2-ovsdbserver-nb\") pod \"51299650-d293-4d85-b83e-f596a7a0b1c2\" (UID: \"51299650-d293-4d85-b83e-f596a7a0b1c2\") " Jan 27 09:17:11 crc kubenswrapper[4985]: I0127 09:17:11.835884 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/51299650-d293-4d85-b83e-f596a7a0b1c2-dns-swift-storage-0\") pod \"51299650-d293-4d85-b83e-f596a7a0b1c2\" (UID: \"51299650-d293-4d85-b83e-f596a7a0b1c2\") " Jan 27 09:17:11 crc kubenswrapper[4985]: I0127 09:17:11.835966 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/51299650-d293-4d85-b83e-f596a7a0b1c2-dns-svc\") pod \"51299650-d293-4d85-b83e-f596a7a0b1c2\" (UID: \"51299650-d293-4d85-b83e-f596a7a0b1c2\") " Jan 27 09:17:11 crc kubenswrapper[4985]: I0127 09:17:11.836008 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/51299650-d293-4d85-b83e-f596a7a0b1c2-ovsdbserver-sb\") pod \"51299650-d293-4d85-b83e-f596a7a0b1c2\" (UID: \"51299650-d293-4d85-b83e-f596a7a0b1c2\") " Jan 27 09:17:11 crc kubenswrapper[4985]: I0127 09:17:11.838836 4985 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51299650-d293-4d85-b83e-f596a7a0b1c2-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "51299650-d293-4d85-b83e-f596a7a0b1c2" (UID: "51299650-d293-4d85-b83e-f596a7a0b1c2"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:17:11 crc kubenswrapper[4985]: I0127 09:17:11.839255 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51299650-d293-4d85-b83e-f596a7a0b1c2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "51299650-d293-4d85-b83e-f596a7a0b1c2" (UID: "51299650-d293-4d85-b83e-f596a7a0b1c2"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:17:11 crc kubenswrapper[4985]: I0127 09:17:11.839455 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51299650-d293-4d85-b83e-f596a7a0b1c2-config" (OuterVolumeSpecName: "config") pod "51299650-d293-4d85-b83e-f596a7a0b1c2" (UID: "51299650-d293-4d85-b83e-f596a7a0b1c2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:17:11 crc kubenswrapper[4985]: I0127 09:17:11.839696 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51299650-d293-4d85-b83e-f596a7a0b1c2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "51299650-d293-4d85-b83e-f596a7a0b1c2" (UID: "51299650-d293-4d85-b83e-f596a7a0b1c2"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:17:11 crc kubenswrapper[4985]: I0127 09:17:11.839733 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51299650-d293-4d85-b83e-f596a7a0b1c2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "51299650-d293-4d85-b83e-f596a7a0b1c2" (UID: "51299650-d293-4d85-b83e-f596a7a0b1c2"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:17:11 crc kubenswrapper[4985]: I0127 09:17:11.841064 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51299650-d293-4d85-b83e-f596a7a0b1c2-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "51299650-d293-4d85-b83e-f596a7a0b1c2" (UID: "51299650-d293-4d85-b83e-f596a7a0b1c2"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:17:11 crc kubenswrapper[4985]: I0127 09:17:11.842078 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51299650-d293-4d85-b83e-f596a7a0b1c2-kube-api-access-7xl5s" (OuterVolumeSpecName: "kube-api-access-7xl5s") pod "51299650-d293-4d85-b83e-f596a7a0b1c2" (UID: "51299650-d293-4d85-b83e-f596a7a0b1c2"). InnerVolumeSpecName "kube-api-access-7xl5s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:17:11 crc kubenswrapper[4985]: I0127 09:17:11.938385 4985 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/51299650-d293-4d85-b83e-f596a7a0b1c2-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 09:17:11 crc kubenswrapper[4985]: I0127 09:17:11.938431 4985 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/51299650-d293-4d85-b83e-f596a7a0b1c2-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 09:17:11 crc kubenswrapper[4985]: I0127 09:17:11.938443 4985 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/51299650-d293-4d85-b83e-f596a7a0b1c2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 09:17:11 crc kubenswrapper[4985]: I0127 09:17:11.938453 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7xl5s\" (UniqueName: \"kubernetes.io/projected/51299650-d293-4d85-b83e-f596a7a0b1c2-kube-api-access-7xl5s\") on node \"crc\" DevicePath \"\"" Jan 27 09:17:11 crc kubenswrapper[4985]: I0127 09:17:11.938464 4985 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/51299650-d293-4d85-b83e-f596a7a0b1c2-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 09:17:11 crc kubenswrapper[4985]: I0127 09:17:11.938472 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51299650-d293-4d85-b83e-f596a7a0b1c2-config\") on node \"crc\" DevicePath \"\"" Jan 27 09:17:11 crc kubenswrapper[4985]: I0127 09:17:11.938482 4985 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/51299650-d293-4d85-b83e-f596a7a0b1c2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 09:17:12 crc kubenswrapper[4985]: I0127 09:17:12.050779 
4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d7b79b84c-wkvsf"] Jan 27 09:17:12 crc kubenswrapper[4985]: W0127 09:17:12.055774 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod643859d4_9ed0_4fc7_9a10_01d0b0e19410.slice/crio-153573186c587ce00c851164443a9e68bb6fa5d8a43599be8ab999ce911d6951 WatchSource:0}: Error finding container 153573186c587ce00c851164443a9e68bb6fa5d8a43599be8ab999ce911d6951: Status 404 returned error can't find the container with id 153573186c587ce00c851164443a9e68bb6fa5d8a43599be8ab999ce911d6951 Jan 27 09:17:12 crc kubenswrapper[4985]: I0127 09:17:12.804339 4985 generic.go:334] "Generic (PLEG): container finished" podID="643859d4-9ed0-4fc7-9a10-01d0b0e19410" containerID="ca03e34f74b100b6e380aebef7f84008cdcdaea041580a58189e325c28a6caa8" exitCode=0 Jan 27 09:17:12 crc kubenswrapper[4985]: I0127 09:17:12.804680 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8595b94875-djdnf" Jan 27 09:17:12 crc kubenswrapper[4985]: I0127 09:17:12.804446 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d7b79b84c-wkvsf" event={"ID":"643859d4-9ed0-4fc7-9a10-01d0b0e19410","Type":"ContainerDied","Data":"ca03e34f74b100b6e380aebef7f84008cdcdaea041580a58189e325c28a6caa8"} Jan 27 09:17:12 crc kubenswrapper[4985]: I0127 09:17:12.804915 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d7b79b84c-wkvsf" event={"ID":"643859d4-9ed0-4fc7-9a10-01d0b0e19410","Type":"ContainerStarted","Data":"153573186c587ce00c851164443a9e68bb6fa5d8a43599be8ab999ce911d6951"} Jan 27 09:17:13 crc kubenswrapper[4985]: I0127 09:17:13.036532 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8595b94875-djdnf"] Jan 27 09:17:13 crc kubenswrapper[4985]: I0127 09:17:13.048012 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8595b94875-djdnf"] Jan 27 09:17:13 crc kubenswrapper[4985]: I0127 09:17:13.816709 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d7b79b84c-wkvsf" event={"ID":"643859d4-9ed0-4fc7-9a10-01d0b0e19410","Type":"ContainerStarted","Data":"7414d5fe3b523a37dd52a7561f1332b50f390a2fe7dbf7ec74ae763e30c2378b"} Jan 27 09:17:13 crc kubenswrapper[4985]: I0127 09:17:13.817152 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-d7b79b84c-wkvsf" Jan 27 09:17:13 crc kubenswrapper[4985]: I0127 09:17:13.840964 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-d7b79b84c-wkvsf" podStartSLOduration=2.8409470690000003 podStartE2EDuration="2.840947069s" podCreationTimestamp="2026-01-27 09:17:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:17:13.836004603 +0000 UTC 
m=+1418.127099454" watchObservedRunningTime="2026-01-27 09:17:13.840947069 +0000 UTC m=+1418.132041910" Jan 27 09:17:14 crc kubenswrapper[4985]: I0127 09:17:14.462029 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51299650-d293-4d85-b83e-f596a7a0b1c2" path="/var/lib/kubelet/pods/51299650-d293-4d85-b83e-f596a7a0b1c2/volumes" Jan 27 09:17:21 crc kubenswrapper[4985]: I0127 09:17:21.574836 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-d7b79b84c-wkvsf" Jan 27 09:17:21 crc kubenswrapper[4985]: I0127 09:17:21.637879 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fcd6f8f8f-snltk"] Jan 27 09:17:21 crc kubenswrapper[4985]: I0127 09:17:21.638121 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-fcd6f8f8f-snltk" podUID="7f7f4d89-0251-4299-b1e5-24f0f160ba5c" containerName="dnsmasq-dns" containerID="cri-o://d696e52692aeff4b4cf49ea8443e8655b2e8d270dace94e86cc0537da252b2c1" gracePeriod=10 Jan 27 09:17:21 crc kubenswrapper[4985]: I0127 09:17:21.895945 4985 generic.go:334] "Generic (PLEG): container finished" podID="7f7f4d89-0251-4299-b1e5-24f0f160ba5c" containerID="d696e52692aeff4b4cf49ea8443e8655b2e8d270dace94e86cc0537da252b2c1" exitCode=0 Jan 27 09:17:21 crc kubenswrapper[4985]: I0127 09:17:21.895990 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcd6f8f8f-snltk" event={"ID":"7f7f4d89-0251-4299-b1e5-24f0f160ba5c","Type":"ContainerDied","Data":"d696e52692aeff4b4cf49ea8443e8655b2e8d270dace94e86cc0537da252b2c1"} Jan 27 09:17:22 crc kubenswrapper[4985]: I0127 09:17:22.604411 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fcd6f8f8f-snltk" Jan 27 09:17:22 crc kubenswrapper[4985]: I0127 09:17:22.714219 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f7f4d89-0251-4299-b1e5-24f0f160ba5c-config\") pod \"7f7f4d89-0251-4299-b1e5-24f0f160ba5c\" (UID: \"7f7f4d89-0251-4299-b1e5-24f0f160ba5c\") " Jan 27 09:17:22 crc kubenswrapper[4985]: I0127 09:17:22.714371 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7f7f4d89-0251-4299-b1e5-24f0f160ba5c-dns-swift-storage-0\") pod \"7f7f4d89-0251-4299-b1e5-24f0f160ba5c\" (UID: \"7f7f4d89-0251-4299-b1e5-24f0f160ba5c\") " Jan 27 09:17:22 crc kubenswrapper[4985]: I0127 09:17:22.714421 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f7f4d89-0251-4299-b1e5-24f0f160ba5c-ovsdbserver-nb\") pod \"7f7f4d89-0251-4299-b1e5-24f0f160ba5c\" (UID: \"7f7f4d89-0251-4299-b1e5-24f0f160ba5c\") " Jan 27 09:17:22 crc kubenswrapper[4985]: I0127 09:17:22.714486 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mz7jm\" (UniqueName: \"kubernetes.io/projected/7f7f4d89-0251-4299-b1e5-24f0f160ba5c-kube-api-access-mz7jm\") pod \"7f7f4d89-0251-4299-b1e5-24f0f160ba5c\" (UID: \"7f7f4d89-0251-4299-b1e5-24f0f160ba5c\") " Jan 27 09:17:22 crc kubenswrapper[4985]: I0127 09:17:22.714542 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f7f4d89-0251-4299-b1e5-24f0f160ba5c-ovsdbserver-sb\") pod \"7f7f4d89-0251-4299-b1e5-24f0f160ba5c\" (UID: \"7f7f4d89-0251-4299-b1e5-24f0f160ba5c\") " Jan 27 09:17:22 crc kubenswrapper[4985]: I0127 09:17:22.714600 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f7f4d89-0251-4299-b1e5-24f0f160ba5c-dns-svc\") pod \"7f7f4d89-0251-4299-b1e5-24f0f160ba5c\" (UID: \"7f7f4d89-0251-4299-b1e5-24f0f160ba5c\") " Jan 27 09:17:22 crc kubenswrapper[4985]: I0127 09:17:22.720393 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f7f4d89-0251-4299-b1e5-24f0f160ba5c-kube-api-access-mz7jm" (OuterVolumeSpecName: "kube-api-access-mz7jm") pod "7f7f4d89-0251-4299-b1e5-24f0f160ba5c" (UID: "7f7f4d89-0251-4299-b1e5-24f0f160ba5c"). InnerVolumeSpecName "kube-api-access-mz7jm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:17:22 crc kubenswrapper[4985]: I0127 09:17:22.784601 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f7f4d89-0251-4299-b1e5-24f0f160ba5c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7f7f4d89-0251-4299-b1e5-24f0f160ba5c" (UID: "7f7f4d89-0251-4299-b1e5-24f0f160ba5c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:17:22 crc kubenswrapper[4985]: I0127 09:17:22.786282 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f7f4d89-0251-4299-b1e5-24f0f160ba5c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7f7f4d89-0251-4299-b1e5-24f0f160ba5c" (UID: "7f7f4d89-0251-4299-b1e5-24f0f160ba5c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:17:22 crc kubenswrapper[4985]: I0127 09:17:22.789423 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f7f4d89-0251-4299-b1e5-24f0f160ba5c-config" (OuterVolumeSpecName: "config") pod "7f7f4d89-0251-4299-b1e5-24f0f160ba5c" (UID: "7f7f4d89-0251-4299-b1e5-24f0f160ba5c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:17:22 crc kubenswrapper[4985]: I0127 09:17:22.799441 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f7f4d89-0251-4299-b1e5-24f0f160ba5c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7f7f4d89-0251-4299-b1e5-24f0f160ba5c" (UID: "7f7f4d89-0251-4299-b1e5-24f0f160ba5c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:17:22 crc kubenswrapper[4985]: I0127 09:17:22.800437 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f7f4d89-0251-4299-b1e5-24f0f160ba5c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7f7f4d89-0251-4299-b1e5-24f0f160ba5c" (UID: "7f7f4d89-0251-4299-b1e5-24f0f160ba5c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:17:22 crc kubenswrapper[4985]: I0127 09:17:22.818039 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f7f4d89-0251-4299-b1e5-24f0f160ba5c-config\") on node \"crc\" DevicePath \"\"" Jan 27 09:17:22 crc kubenswrapper[4985]: I0127 09:17:22.818075 4985 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7f7f4d89-0251-4299-b1e5-24f0f160ba5c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 09:17:22 crc kubenswrapper[4985]: I0127 09:17:22.818086 4985 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f7f4d89-0251-4299-b1e5-24f0f160ba5c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 09:17:22 crc kubenswrapper[4985]: I0127 09:17:22.818096 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mz7jm\" (UniqueName: \"kubernetes.io/projected/7f7f4d89-0251-4299-b1e5-24f0f160ba5c-kube-api-access-mz7jm\") on node \"crc\" 
DevicePath \"\"" Jan 27 09:17:22 crc kubenswrapper[4985]: I0127 09:17:22.818106 4985 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f7f4d89-0251-4299-b1e5-24f0f160ba5c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 09:17:22 crc kubenswrapper[4985]: I0127 09:17:22.818115 4985 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f7f4d89-0251-4299-b1e5-24f0f160ba5c-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 09:17:22 crc kubenswrapper[4985]: I0127 09:17:22.905764 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcd6f8f8f-snltk" event={"ID":"7f7f4d89-0251-4299-b1e5-24f0f160ba5c","Type":"ContainerDied","Data":"1c7ffc4e8d7c7dd1290d9f002ffeffc4406746a3e91d1d169aa53fda60b9c2f7"} Jan 27 09:17:22 crc kubenswrapper[4985]: I0127 09:17:22.905830 4985 scope.go:117] "RemoveContainer" containerID="d696e52692aeff4b4cf49ea8443e8655b2e8d270dace94e86cc0537da252b2c1" Jan 27 09:17:22 crc kubenswrapper[4985]: I0127 09:17:22.905837 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fcd6f8f8f-snltk" Jan 27 09:17:22 crc kubenswrapper[4985]: I0127 09:17:22.934956 4985 scope.go:117] "RemoveContainer" containerID="edb6a11fc2d4592aaa4b4c491383e589966351075164f9353370937e89ed2102" Jan 27 09:17:22 crc kubenswrapper[4985]: I0127 09:17:22.939085 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fcd6f8f8f-snltk"] Jan 27 09:17:22 crc kubenswrapper[4985]: I0127 09:17:22.948133 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-fcd6f8f8f-snltk"] Jan 27 09:17:24 crc kubenswrapper[4985]: I0127 09:17:24.462130 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f7f4d89-0251-4299-b1e5-24f0f160ba5c" path="/var/lib/kubelet/pods/7f7f4d89-0251-4299-b1e5-24f0f160ba5c/volumes" Jan 27 09:17:30 crc kubenswrapper[4985]: I0127 09:17:30.319627 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wxbg5"] Jan 27 09:17:30 crc kubenswrapper[4985]: E0127 09:17:30.320549 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f7f4d89-0251-4299-b1e5-24f0f160ba5c" containerName="dnsmasq-dns" Jan 27 09:17:30 crc kubenswrapper[4985]: I0127 09:17:30.320562 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f7f4d89-0251-4299-b1e5-24f0f160ba5c" containerName="dnsmasq-dns" Jan 27 09:17:30 crc kubenswrapper[4985]: E0127 09:17:30.320576 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f7f4d89-0251-4299-b1e5-24f0f160ba5c" containerName="init" Jan 27 09:17:30 crc kubenswrapper[4985]: I0127 09:17:30.320584 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f7f4d89-0251-4299-b1e5-24f0f160ba5c" containerName="init" Jan 27 09:17:30 crc kubenswrapper[4985]: I0127 09:17:30.320769 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f7f4d89-0251-4299-b1e5-24f0f160ba5c" containerName="dnsmasq-dns" Jan 27 09:17:30 crc 
kubenswrapper[4985]: I0127 09:17:30.321405 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wxbg5" Jan 27 09:17:30 crc kubenswrapper[4985]: I0127 09:17:30.325494 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 09:17:30 crc kubenswrapper[4985]: I0127 09:17:30.325644 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 09:17:30 crc kubenswrapper[4985]: I0127 09:17:30.325675 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 09:17:30 crc kubenswrapper[4985]: I0127 09:17:30.326535 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-s87fp" Jan 27 09:17:30 crc kubenswrapper[4985]: I0127 09:17:30.351667 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wxbg5"] Jan 27 09:17:30 crc kubenswrapper[4985]: I0127 09:17:30.456430 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44e0b0a4-8eba-49d5-9408-2a6400e0cedf-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-wxbg5\" (UID: \"44e0b0a4-8eba-49d5-9408-2a6400e0cedf\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wxbg5" Jan 27 09:17:30 crc kubenswrapper[4985]: I0127 09:17:30.456572 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/44e0b0a4-8eba-49d5-9408-2a6400e0cedf-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-wxbg5\" (UID: \"44e0b0a4-8eba-49d5-9408-2a6400e0cedf\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wxbg5" Jan 27 09:17:30 crc kubenswrapper[4985]: I0127 09:17:30.456640 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7tn2\" (UniqueName: \"kubernetes.io/projected/44e0b0a4-8eba-49d5-9408-2a6400e0cedf-kube-api-access-f7tn2\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-wxbg5\" (UID: \"44e0b0a4-8eba-49d5-9408-2a6400e0cedf\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wxbg5" Jan 27 09:17:30 crc kubenswrapper[4985]: I0127 09:17:30.456659 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/44e0b0a4-8eba-49d5-9408-2a6400e0cedf-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-wxbg5\" (UID: \"44e0b0a4-8eba-49d5-9408-2a6400e0cedf\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wxbg5" Jan 27 09:17:30 crc kubenswrapper[4985]: I0127 09:17:30.558005 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/44e0b0a4-8eba-49d5-9408-2a6400e0cedf-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-wxbg5\" (UID: \"44e0b0a4-8eba-49d5-9408-2a6400e0cedf\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wxbg5" Jan 27 09:17:30 crc kubenswrapper[4985]: I0127 09:17:30.558094 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7tn2\" (UniqueName: \"kubernetes.io/projected/44e0b0a4-8eba-49d5-9408-2a6400e0cedf-kube-api-access-f7tn2\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-wxbg5\" (UID: \"44e0b0a4-8eba-49d5-9408-2a6400e0cedf\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wxbg5" Jan 27 09:17:30 crc kubenswrapper[4985]: I0127 09:17:30.558118 4985 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/44e0b0a4-8eba-49d5-9408-2a6400e0cedf-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-wxbg5\" (UID: \"44e0b0a4-8eba-49d5-9408-2a6400e0cedf\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wxbg5" Jan 27 09:17:30 crc kubenswrapper[4985]: I0127 09:17:30.558355 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44e0b0a4-8eba-49d5-9408-2a6400e0cedf-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-wxbg5\" (UID: \"44e0b0a4-8eba-49d5-9408-2a6400e0cedf\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wxbg5" Jan 27 09:17:30 crc kubenswrapper[4985]: I0127 09:17:30.565208 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/44e0b0a4-8eba-49d5-9408-2a6400e0cedf-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-wxbg5\" (UID: \"44e0b0a4-8eba-49d5-9408-2a6400e0cedf\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wxbg5" Jan 27 09:17:30 crc kubenswrapper[4985]: I0127 09:17:30.566019 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/44e0b0a4-8eba-49d5-9408-2a6400e0cedf-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-wxbg5\" (UID: \"44e0b0a4-8eba-49d5-9408-2a6400e0cedf\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wxbg5" Jan 27 09:17:30 crc kubenswrapper[4985]: I0127 09:17:30.566190 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44e0b0a4-8eba-49d5-9408-2a6400e0cedf-repo-setup-combined-ca-bundle\") pod 
\"repo-setup-edpm-deployment-openstack-edpm-ipam-wxbg5\" (UID: \"44e0b0a4-8eba-49d5-9408-2a6400e0cedf\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wxbg5" Jan 27 09:17:30 crc kubenswrapper[4985]: I0127 09:17:30.576865 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7tn2\" (UniqueName: \"kubernetes.io/projected/44e0b0a4-8eba-49d5-9408-2a6400e0cedf-kube-api-access-f7tn2\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-wxbg5\" (UID: \"44e0b0a4-8eba-49d5-9408-2a6400e0cedf\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wxbg5" Jan 27 09:17:30 crc kubenswrapper[4985]: I0127 09:17:30.643314 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wxbg5" Jan 27 09:17:31 crc kubenswrapper[4985]: I0127 09:17:31.231333 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wxbg5"] Jan 27 09:17:31 crc kubenswrapper[4985]: I0127 09:17:31.995823 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wxbg5" event={"ID":"44e0b0a4-8eba-49d5-9408-2a6400e0cedf","Type":"ContainerStarted","Data":"f81dc03448f11ac35a7a6f8775c73ef5cd1c319df7fd57518057a2d14005f187"} Jan 27 09:17:39 crc kubenswrapper[4985]: I0127 09:17:39.187053 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 09:17:40 crc kubenswrapper[4985]: I0127 09:17:40.079121 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wxbg5" event={"ID":"44e0b0a4-8eba-49d5-9408-2a6400e0cedf","Type":"ContainerStarted","Data":"a9d869ade3c9921fc4f7f8bcafe3bd8c51abe85f1f9629717e09e1b3984c41d8"} Jan 27 09:17:40 crc kubenswrapper[4985]: I0127 09:17:40.098788 4985 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wxbg5" podStartSLOduration=2.15247123 podStartE2EDuration="10.098764926s" podCreationTimestamp="2026-01-27 09:17:30 +0000 UTC" firstStartedPulling="2026-01-27 09:17:31.237928675 +0000 UTC m=+1435.529023516" lastFinishedPulling="2026-01-27 09:17:39.184222381 +0000 UTC m=+1443.475317212" observedRunningTime="2026-01-27 09:17:40.096155294 +0000 UTC m=+1444.387250155" watchObservedRunningTime="2026-01-27 09:17:40.098764926 +0000 UTC m=+1444.389859777" Jan 27 09:17:41 crc kubenswrapper[4985]: I0127 09:17:41.809958 4985 scope.go:117] "RemoveContainer" containerID="9706e1d4658a44c74fff4026a1483811e63023e4aa43d6a7616de6b6caa0e1ba" Jan 27 09:17:41 crc kubenswrapper[4985]: I0127 09:17:41.829088 4985 patch_prober.go:28] interesting pod/machine-config-daemon-lp9n5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 09:17:41 crc kubenswrapper[4985]: I0127 09:17:41.829142 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 09:17:41 crc kubenswrapper[4985]: I0127 09:17:41.836141 4985 scope.go:117] "RemoveContainer" containerID="5df0690db1a644bbc456cc4dac180784a7c72caeaa472edbdd12688c98a8acfc" Jan 27 09:17:41 crc kubenswrapper[4985]: I0127 09:17:41.881732 4985 scope.go:117] "RemoveContainer" containerID="21fb31c114570373f2aefe08ae052a814b7f260605d4f1b18fb18d7b651763b1" Jan 27 09:17:43 crc kubenswrapper[4985]: I0127 09:17:43.105628 4985 generic.go:334] "Generic (PLEG): container finished" podID="f08b3701-2ee6-4de9-8d6b-8191a8ff95d3" 
containerID="d556f72dedf051e9ae778351fc0de5c805d7e0f4de1155f69344ddf47829d2c4" exitCode=0 Jan 27 09:17:43 crc kubenswrapper[4985]: I0127 09:17:43.105728 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f08b3701-2ee6-4de9-8d6b-8191a8ff95d3","Type":"ContainerDied","Data":"d556f72dedf051e9ae778351fc0de5c805d7e0f4de1155f69344ddf47829d2c4"} Jan 27 09:17:43 crc kubenswrapper[4985]: I0127 09:17:43.108298 4985 generic.go:334] "Generic (PLEG): container finished" podID="6bf353db-18e8-4814-835d-228e9d0aaec6" containerID="1d327342504aad0fa9826531bd895fff2f16e41926516ca9620ff634b9656eff" exitCode=0 Jan 27 09:17:43 crc kubenswrapper[4985]: I0127 09:17:43.108365 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6bf353db-18e8-4814-835d-228e9d0aaec6","Type":"ContainerDied","Data":"1d327342504aad0fa9826531bd895fff2f16e41926516ca9620ff634b9656eff"} Jan 27 09:17:44 crc kubenswrapper[4985]: I0127 09:17:44.118797 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f08b3701-2ee6-4de9-8d6b-8191a8ff95d3","Type":"ContainerStarted","Data":"8efb60c19c6facdfd27f999033ad7fe4534bad946faee8b915e19c5dfdb5993e"} Jan 27 09:17:44 crc kubenswrapper[4985]: I0127 09:17:44.119245 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 27 09:17:44 crc kubenswrapper[4985]: I0127 09:17:44.120736 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6bf353db-18e8-4814-835d-228e9d0aaec6","Type":"ContainerStarted","Data":"503e0c27ed181964093bb1ae0f5696bdf3416705321c7af09041a241662cc753"} Jan 27 09:17:44 crc kubenswrapper[4985]: I0127 09:17:44.120995 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 27 09:17:44 crc kubenswrapper[4985]: I0127 09:17:44.161715 4985 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.161700994 podStartE2EDuration="36.161700994s" podCreationTimestamp="2026-01-27 09:17:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:17:44.140227745 +0000 UTC m=+1448.431322596" watchObservedRunningTime="2026-01-27 09:17:44.161700994 +0000 UTC m=+1448.452795835" Jan 27 09:17:44 crc kubenswrapper[4985]: I0127 09:17:44.165739 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.165726685 podStartE2EDuration="37.165726685s" podCreationTimestamp="2026-01-27 09:17:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:17:44.159489544 +0000 UTC m=+1448.450584375" watchObservedRunningTime="2026-01-27 09:17:44.165726685 +0000 UTC m=+1448.456821526" Jan 27 09:17:47 crc kubenswrapper[4985]: I0127 09:17:47.041973 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-q22nh"] Jan 27 09:17:47 crc kubenswrapper[4985]: I0127 09:17:47.044212 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q22nh" Jan 27 09:17:47 crc kubenswrapper[4985]: I0127 09:17:47.062756 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q22nh"] Jan 27 09:17:47 crc kubenswrapper[4985]: I0127 09:17:47.201835 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vn4xm\" (UniqueName: \"kubernetes.io/projected/8596c50f-b932-44d4-adb0-b165f1c6b042-kube-api-access-vn4xm\") pod \"redhat-marketplace-q22nh\" (UID: \"8596c50f-b932-44d4-adb0-b165f1c6b042\") " pod="openshift-marketplace/redhat-marketplace-q22nh" Jan 27 09:17:47 crc kubenswrapper[4985]: I0127 09:17:47.202218 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8596c50f-b932-44d4-adb0-b165f1c6b042-utilities\") pod \"redhat-marketplace-q22nh\" (UID: \"8596c50f-b932-44d4-adb0-b165f1c6b042\") " pod="openshift-marketplace/redhat-marketplace-q22nh" Jan 27 09:17:47 crc kubenswrapper[4985]: I0127 09:17:47.202436 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8596c50f-b932-44d4-adb0-b165f1c6b042-catalog-content\") pod \"redhat-marketplace-q22nh\" (UID: \"8596c50f-b932-44d4-adb0-b165f1c6b042\") " pod="openshift-marketplace/redhat-marketplace-q22nh" Jan 27 09:17:47 crc kubenswrapper[4985]: I0127 09:17:47.304305 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8596c50f-b932-44d4-adb0-b165f1c6b042-catalog-content\") pod \"redhat-marketplace-q22nh\" (UID: \"8596c50f-b932-44d4-adb0-b165f1c6b042\") " pod="openshift-marketplace/redhat-marketplace-q22nh" Jan 27 09:17:47 crc kubenswrapper[4985]: I0127 09:17:47.304952 4985 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8596c50f-b932-44d4-adb0-b165f1c6b042-catalog-content\") pod \"redhat-marketplace-q22nh\" (UID: \"8596c50f-b932-44d4-adb0-b165f1c6b042\") " pod="openshift-marketplace/redhat-marketplace-q22nh" Jan 27 09:17:47 crc kubenswrapper[4985]: I0127 09:17:47.304898 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vn4xm\" (UniqueName: \"kubernetes.io/projected/8596c50f-b932-44d4-adb0-b165f1c6b042-kube-api-access-vn4xm\") pod \"redhat-marketplace-q22nh\" (UID: \"8596c50f-b932-44d4-adb0-b165f1c6b042\") " pod="openshift-marketplace/redhat-marketplace-q22nh" Jan 27 09:17:47 crc kubenswrapper[4985]: I0127 09:17:47.305490 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8596c50f-b932-44d4-adb0-b165f1c6b042-utilities\") pod \"redhat-marketplace-q22nh\" (UID: \"8596c50f-b932-44d4-adb0-b165f1c6b042\") " pod="openshift-marketplace/redhat-marketplace-q22nh" Jan 27 09:17:47 crc kubenswrapper[4985]: I0127 09:17:47.305885 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8596c50f-b932-44d4-adb0-b165f1c6b042-utilities\") pod \"redhat-marketplace-q22nh\" (UID: \"8596c50f-b932-44d4-adb0-b165f1c6b042\") " pod="openshift-marketplace/redhat-marketplace-q22nh" Jan 27 09:17:47 crc kubenswrapper[4985]: I0127 09:17:47.325617 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vn4xm\" (UniqueName: \"kubernetes.io/projected/8596c50f-b932-44d4-adb0-b165f1c6b042-kube-api-access-vn4xm\") pod \"redhat-marketplace-q22nh\" (UID: \"8596c50f-b932-44d4-adb0-b165f1c6b042\") " pod="openshift-marketplace/redhat-marketplace-q22nh" Jan 27 09:17:47 crc kubenswrapper[4985]: I0127 09:17:47.401647 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q22nh" Jan 27 09:17:47 crc kubenswrapper[4985]: I0127 09:17:47.952156 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q22nh"] Jan 27 09:17:48 crc kubenswrapper[4985]: I0127 09:17:48.157221 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q22nh" event={"ID":"8596c50f-b932-44d4-adb0-b165f1c6b042","Type":"ContainerStarted","Data":"772e77565884b4214b3abcec3c3c6cea53f9d5e2d741f049d77227ce231a6e24"} Jan 27 09:17:49 crc kubenswrapper[4985]: I0127 09:17:49.167677 4985 generic.go:334] "Generic (PLEG): container finished" podID="8596c50f-b932-44d4-adb0-b165f1c6b042" containerID="98cd6f310f989d654a4e6feb883d71e3091b11c3d988a5525c13ca4f185167e3" exitCode=0 Jan 27 09:17:49 crc kubenswrapper[4985]: I0127 09:17:49.167769 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q22nh" event={"ID":"8596c50f-b932-44d4-adb0-b165f1c6b042","Type":"ContainerDied","Data":"98cd6f310f989d654a4e6feb883d71e3091b11c3d988a5525c13ca4f185167e3"} Jan 27 09:17:51 crc kubenswrapper[4985]: I0127 09:17:51.189355 4985 generic.go:334] "Generic (PLEG): container finished" podID="8596c50f-b932-44d4-adb0-b165f1c6b042" containerID="637cbe7d0471c9f929e4826f5d21f517103551fc846812a5c28e56b6cf3b6cd8" exitCode=0 Jan 27 09:17:51 crc kubenswrapper[4985]: I0127 09:17:51.190655 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q22nh" event={"ID":"8596c50f-b932-44d4-adb0-b165f1c6b042","Type":"ContainerDied","Data":"637cbe7d0471c9f929e4826f5d21f517103551fc846812a5c28e56b6cf3b6cd8"} Jan 27 09:17:51 crc kubenswrapper[4985]: I0127 09:17:51.197126 4985 generic.go:334] "Generic (PLEG): container finished" podID="44e0b0a4-8eba-49d5-9408-2a6400e0cedf" containerID="a9d869ade3c9921fc4f7f8bcafe3bd8c51abe85f1f9629717e09e1b3984c41d8" exitCode=0 Jan 27 09:17:51 
crc kubenswrapper[4985]: I0127 09:17:51.197162 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wxbg5" event={"ID":"44e0b0a4-8eba-49d5-9408-2a6400e0cedf","Type":"ContainerDied","Data":"a9d869ade3c9921fc4f7f8bcafe3bd8c51abe85f1f9629717e09e1b3984c41d8"} Jan 27 09:17:52 crc kubenswrapper[4985]: I0127 09:17:52.690263 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wxbg5" Jan 27 09:17:52 crc kubenswrapper[4985]: I0127 09:17:52.816111 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/44e0b0a4-8eba-49d5-9408-2a6400e0cedf-ssh-key-openstack-edpm-ipam\") pod \"44e0b0a4-8eba-49d5-9408-2a6400e0cedf\" (UID: \"44e0b0a4-8eba-49d5-9408-2a6400e0cedf\") " Jan 27 09:17:52 crc kubenswrapper[4985]: I0127 09:17:52.816169 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7tn2\" (UniqueName: \"kubernetes.io/projected/44e0b0a4-8eba-49d5-9408-2a6400e0cedf-kube-api-access-f7tn2\") pod \"44e0b0a4-8eba-49d5-9408-2a6400e0cedf\" (UID: \"44e0b0a4-8eba-49d5-9408-2a6400e0cedf\") " Jan 27 09:17:52 crc kubenswrapper[4985]: I0127 09:17:52.816310 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/44e0b0a4-8eba-49d5-9408-2a6400e0cedf-inventory\") pod \"44e0b0a4-8eba-49d5-9408-2a6400e0cedf\" (UID: \"44e0b0a4-8eba-49d5-9408-2a6400e0cedf\") " Jan 27 09:17:52 crc kubenswrapper[4985]: I0127 09:17:52.816369 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44e0b0a4-8eba-49d5-9408-2a6400e0cedf-repo-setup-combined-ca-bundle\") pod \"44e0b0a4-8eba-49d5-9408-2a6400e0cedf\" (UID: \"44e0b0a4-8eba-49d5-9408-2a6400e0cedf\") 
" Jan 27 09:17:52 crc kubenswrapper[4985]: I0127 09:17:52.822535 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44e0b0a4-8eba-49d5-9408-2a6400e0cedf-kube-api-access-f7tn2" (OuterVolumeSpecName: "kube-api-access-f7tn2") pod "44e0b0a4-8eba-49d5-9408-2a6400e0cedf" (UID: "44e0b0a4-8eba-49d5-9408-2a6400e0cedf"). InnerVolumeSpecName "kube-api-access-f7tn2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:17:52 crc kubenswrapper[4985]: I0127 09:17:52.822607 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44e0b0a4-8eba-49d5-9408-2a6400e0cedf-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "44e0b0a4-8eba-49d5-9408-2a6400e0cedf" (UID: "44e0b0a4-8eba-49d5-9408-2a6400e0cedf"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:17:52 crc kubenswrapper[4985]: I0127 09:17:52.846800 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44e0b0a4-8eba-49d5-9408-2a6400e0cedf-inventory" (OuterVolumeSpecName: "inventory") pod "44e0b0a4-8eba-49d5-9408-2a6400e0cedf" (UID: "44e0b0a4-8eba-49d5-9408-2a6400e0cedf"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:17:52 crc kubenswrapper[4985]: I0127 09:17:52.847765 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44e0b0a4-8eba-49d5-9408-2a6400e0cedf-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "44e0b0a4-8eba-49d5-9408-2a6400e0cedf" (UID: "44e0b0a4-8eba-49d5-9408-2a6400e0cedf"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:17:52 crc kubenswrapper[4985]: I0127 09:17:52.918086 4985 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/44e0b0a4-8eba-49d5-9408-2a6400e0cedf-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 09:17:52 crc kubenswrapper[4985]: I0127 09:17:52.918120 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f7tn2\" (UniqueName: \"kubernetes.io/projected/44e0b0a4-8eba-49d5-9408-2a6400e0cedf-kube-api-access-f7tn2\") on node \"crc\" DevicePath \"\"" Jan 27 09:17:52 crc kubenswrapper[4985]: I0127 09:17:52.918131 4985 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/44e0b0a4-8eba-49d5-9408-2a6400e0cedf-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 09:17:52 crc kubenswrapper[4985]: I0127 09:17:52.918141 4985 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44e0b0a4-8eba-49d5-9408-2a6400e0cedf-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:17:53 crc kubenswrapper[4985]: I0127 09:17:53.217820 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wxbg5" event={"ID":"44e0b0a4-8eba-49d5-9408-2a6400e0cedf","Type":"ContainerDied","Data":"f81dc03448f11ac35a7a6f8775c73ef5cd1c319df7fd57518057a2d14005f187"} Jan 27 09:17:53 crc kubenswrapper[4985]: I0127 09:17:53.218320 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f81dc03448f11ac35a7a6f8775c73ef5cd1c319df7fd57518057a2d14005f187" Jan 27 09:17:53 crc kubenswrapper[4985]: I0127 09:17:53.217993 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wxbg5" Jan 27 09:17:53 crc kubenswrapper[4985]: I0127 09:17:53.334285 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-zpmfd"] Jan 27 09:17:53 crc kubenswrapper[4985]: E0127 09:17:53.334884 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44e0b0a4-8eba-49d5-9408-2a6400e0cedf" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 27 09:17:53 crc kubenswrapper[4985]: I0127 09:17:53.334910 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="44e0b0a4-8eba-49d5-9408-2a6400e0cedf" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 27 09:17:53 crc kubenswrapper[4985]: I0127 09:17:53.335225 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="44e0b0a4-8eba-49d5-9408-2a6400e0cedf" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 27 09:17:53 crc kubenswrapper[4985]: I0127 09:17:53.336116 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zpmfd" Jan 27 09:17:53 crc kubenswrapper[4985]: I0127 09:17:53.338463 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 09:17:53 crc kubenswrapper[4985]: I0127 09:17:53.338729 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-s87fp" Jan 27 09:17:53 crc kubenswrapper[4985]: I0127 09:17:53.338941 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 09:17:53 crc kubenswrapper[4985]: I0127 09:17:53.339141 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 09:17:53 crc kubenswrapper[4985]: I0127 09:17:53.347459 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-zpmfd"] Jan 27 09:17:53 crc kubenswrapper[4985]: I0127 09:17:53.425183 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmjq8\" (UniqueName: \"kubernetes.io/projected/d3c88468-ade0-4b44-8824-f0564b217b93-kube-api-access-dmjq8\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-zpmfd\" (UID: \"d3c88468-ade0-4b44-8824-f0564b217b93\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zpmfd" Jan 27 09:17:53 crc kubenswrapper[4985]: I0127 09:17:53.425228 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d3c88468-ade0-4b44-8824-f0564b217b93-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-zpmfd\" (UID: \"d3c88468-ade0-4b44-8824-f0564b217b93\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zpmfd" Jan 27 09:17:53 crc kubenswrapper[4985]: I0127 09:17:53.425415 4985 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d3c88468-ade0-4b44-8824-f0564b217b93-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-zpmfd\" (UID: \"d3c88468-ade0-4b44-8824-f0564b217b93\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zpmfd" Jan 27 09:17:53 crc kubenswrapper[4985]: I0127 09:17:53.527255 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d3c88468-ade0-4b44-8824-f0564b217b93-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-zpmfd\" (UID: \"d3c88468-ade0-4b44-8824-f0564b217b93\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zpmfd" Jan 27 09:17:53 crc kubenswrapper[4985]: I0127 09:17:53.527304 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmjq8\" (UniqueName: \"kubernetes.io/projected/d3c88468-ade0-4b44-8824-f0564b217b93-kube-api-access-dmjq8\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-zpmfd\" (UID: \"d3c88468-ade0-4b44-8824-f0564b217b93\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zpmfd" Jan 27 09:17:53 crc kubenswrapper[4985]: I0127 09:17:53.527407 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d3c88468-ade0-4b44-8824-f0564b217b93-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-zpmfd\" (UID: \"d3c88468-ade0-4b44-8824-f0564b217b93\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zpmfd" Jan 27 09:17:53 crc kubenswrapper[4985]: I0127 09:17:53.533242 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d3c88468-ade0-4b44-8824-f0564b217b93-ssh-key-openstack-edpm-ipam\") pod 
\"redhat-edpm-deployment-openstack-edpm-ipam-zpmfd\" (UID: \"d3c88468-ade0-4b44-8824-f0564b217b93\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zpmfd" Jan 27 09:17:53 crc kubenswrapper[4985]: I0127 09:17:53.533722 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d3c88468-ade0-4b44-8824-f0564b217b93-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-zpmfd\" (UID: \"d3c88468-ade0-4b44-8824-f0564b217b93\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zpmfd" Jan 27 09:17:53 crc kubenswrapper[4985]: I0127 09:17:53.545028 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmjq8\" (UniqueName: \"kubernetes.io/projected/d3c88468-ade0-4b44-8824-f0564b217b93-kube-api-access-dmjq8\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-zpmfd\" (UID: \"d3c88468-ade0-4b44-8824-f0564b217b93\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zpmfd" Jan 27 09:17:53 crc kubenswrapper[4985]: I0127 09:17:53.653209 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zpmfd" Jan 27 09:17:54 crc kubenswrapper[4985]: I0127 09:17:54.220256 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-zpmfd"] Jan 27 09:17:54 crc kubenswrapper[4985]: I0127 09:17:54.232100 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q22nh" event={"ID":"8596c50f-b932-44d4-adb0-b165f1c6b042","Type":"ContainerStarted","Data":"de27cd1e509c53f3eb67898015d601c350f9db86f088a661ba921ecfe505fe96"} Jan 27 09:17:54 crc kubenswrapper[4985]: I0127 09:17:54.233704 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zpmfd" event={"ID":"d3c88468-ade0-4b44-8824-f0564b217b93","Type":"ContainerStarted","Data":"69b9c46884281b2667dccd89620761840f50668f5ecfc82fa67c40c02602fb50"} Jan 27 09:17:54 crc kubenswrapper[4985]: I0127 09:17:54.252955 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-q22nh" podStartSLOduration=3.5691523590000003 podStartE2EDuration="7.252936695s" podCreationTimestamp="2026-01-27 09:17:47 +0000 UTC" firstStartedPulling="2026-01-27 09:17:49.170046146 +0000 UTC m=+1453.461140987" lastFinishedPulling="2026-01-27 09:17:52.853830482 +0000 UTC m=+1457.144925323" observedRunningTime="2026-01-27 09:17:54.246344913 +0000 UTC m=+1458.537439754" watchObservedRunningTime="2026-01-27 09:17:54.252936695 +0000 UTC m=+1458.544031536" Jan 27 09:17:55 crc kubenswrapper[4985]: I0127 09:17:55.244237 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zpmfd" event={"ID":"d3c88468-ade0-4b44-8824-f0564b217b93","Type":"ContainerStarted","Data":"cdde092784915b4584c5c20518dd8650a16629bbc85dfd5ba177fa5cb4ad432e"} Jan 27 09:17:55 crc kubenswrapper[4985]: I0127 09:17:55.268539 4985 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zpmfd" podStartSLOduration=1.899665885 podStartE2EDuration="2.268506025s" podCreationTimestamp="2026-01-27 09:17:53 +0000 UTC" firstStartedPulling="2026-01-27 09:17:54.217893922 +0000 UTC m=+1458.508988763" lastFinishedPulling="2026-01-27 09:17:54.586734062 +0000 UTC m=+1458.877828903" observedRunningTime="2026-01-27 09:17:55.26323735 +0000 UTC m=+1459.554332191" watchObservedRunningTime="2026-01-27 09:17:55.268506025 +0000 UTC m=+1459.559600866" Jan 27 09:17:57 crc kubenswrapper[4985]: I0127 09:17:57.266476 4985 generic.go:334] "Generic (PLEG): container finished" podID="d3c88468-ade0-4b44-8824-f0564b217b93" containerID="cdde092784915b4584c5c20518dd8650a16629bbc85dfd5ba177fa5cb4ad432e" exitCode=0 Jan 27 09:17:57 crc kubenswrapper[4985]: I0127 09:17:57.266559 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zpmfd" event={"ID":"d3c88468-ade0-4b44-8824-f0564b217b93","Type":"ContainerDied","Data":"cdde092784915b4584c5c20518dd8650a16629bbc85dfd5ba177fa5cb4ad432e"} Jan 27 09:17:57 crc kubenswrapper[4985]: I0127 09:17:57.402316 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-q22nh" Jan 27 09:17:57 crc kubenswrapper[4985]: I0127 09:17:57.402373 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-q22nh" Jan 27 09:17:57 crc kubenswrapper[4985]: I0127 09:17:57.448850 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-q22nh" Jan 27 09:17:58 crc kubenswrapper[4985]: I0127 09:17:58.282799 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 27 09:17:58 crc kubenswrapper[4985]: I0127 09:17:58.336190 4985 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-q22nh" Jan 27 09:17:58 crc kubenswrapper[4985]: I0127 09:17:58.411931 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q22nh"] Jan 27 09:17:58 crc kubenswrapper[4985]: I0127 09:17:58.462764 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 27 09:17:58 crc kubenswrapper[4985]: I0127 09:17:58.773446 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zpmfd" Jan 27 09:17:58 crc kubenswrapper[4985]: I0127 09:17:58.835649 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmjq8\" (UniqueName: \"kubernetes.io/projected/d3c88468-ade0-4b44-8824-f0564b217b93-kube-api-access-dmjq8\") pod \"d3c88468-ade0-4b44-8824-f0564b217b93\" (UID: \"d3c88468-ade0-4b44-8824-f0564b217b93\") " Jan 27 09:17:58 crc kubenswrapper[4985]: I0127 09:17:58.835745 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d3c88468-ade0-4b44-8824-f0564b217b93-inventory\") pod \"d3c88468-ade0-4b44-8824-f0564b217b93\" (UID: \"d3c88468-ade0-4b44-8824-f0564b217b93\") " Jan 27 09:17:58 crc kubenswrapper[4985]: I0127 09:17:58.836824 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d3c88468-ade0-4b44-8824-f0564b217b93-ssh-key-openstack-edpm-ipam\") pod \"d3c88468-ade0-4b44-8824-f0564b217b93\" (UID: \"d3c88468-ade0-4b44-8824-f0564b217b93\") " Jan 27 09:17:58 crc kubenswrapper[4985]: I0127 09:17:58.856988 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3c88468-ade0-4b44-8824-f0564b217b93-kube-api-access-dmjq8" (OuterVolumeSpecName: 
"kube-api-access-dmjq8") pod "d3c88468-ade0-4b44-8824-f0564b217b93" (UID: "d3c88468-ade0-4b44-8824-f0564b217b93"). InnerVolumeSpecName "kube-api-access-dmjq8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:17:58 crc kubenswrapper[4985]: I0127 09:17:58.869861 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3c88468-ade0-4b44-8824-f0564b217b93-inventory" (OuterVolumeSpecName: "inventory") pod "d3c88468-ade0-4b44-8824-f0564b217b93" (UID: "d3c88468-ade0-4b44-8824-f0564b217b93"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:17:58 crc kubenswrapper[4985]: I0127 09:17:58.884159 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3c88468-ade0-4b44-8824-f0564b217b93-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d3c88468-ade0-4b44-8824-f0564b217b93" (UID: "d3c88468-ade0-4b44-8824-f0564b217b93"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 09:17:58 crc kubenswrapper[4985]: I0127 09:17:58.939266 4985 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d3c88468-ade0-4b44-8824-f0564b217b93-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 27 09:17:58 crc kubenswrapper[4985]: I0127 09:17:58.939298 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dmjq8\" (UniqueName: \"kubernetes.io/projected/d3c88468-ade0-4b44-8824-f0564b217b93-kube-api-access-dmjq8\") on node \"crc\" DevicePath \"\""
Jan 27 09:17:58 crc kubenswrapper[4985]: I0127 09:17:58.939312 4985 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d3c88468-ade0-4b44-8824-f0564b217b93-inventory\") on node \"crc\" DevicePath \"\""
Jan 27 09:17:59 crc kubenswrapper[4985]: I0127 09:17:59.284348 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zpmfd" event={"ID":"d3c88468-ade0-4b44-8824-f0564b217b93","Type":"ContainerDied","Data":"69b9c46884281b2667dccd89620761840f50668f5ecfc82fa67c40c02602fb50"}
Jan 27 09:17:59 crc kubenswrapper[4985]: I0127 09:17:59.284381 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zpmfd"
Jan 27 09:17:59 crc kubenswrapper[4985]: I0127 09:17:59.284392 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69b9c46884281b2667dccd89620761840f50668f5ecfc82fa67c40c02602fb50"
Jan 27 09:17:59 crc kubenswrapper[4985]: I0127 09:17:59.354072 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xswt5"]
Jan 27 09:17:59 crc kubenswrapper[4985]: E0127 09:17:59.354505 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3c88468-ade0-4b44-8824-f0564b217b93" containerName="redhat-edpm-deployment-openstack-edpm-ipam"
Jan 27 09:17:59 crc kubenswrapper[4985]: I0127 09:17:59.354542 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3c88468-ade0-4b44-8824-f0564b217b93" containerName="redhat-edpm-deployment-openstack-edpm-ipam"
Jan 27 09:17:59 crc kubenswrapper[4985]: I0127 09:17:59.354778 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3c88468-ade0-4b44-8824-f0564b217b93" containerName="redhat-edpm-deployment-openstack-edpm-ipam"
Jan 27 09:17:59 crc kubenswrapper[4985]: I0127 09:17:59.355428 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xswt5"
Jan 27 09:17:59 crc kubenswrapper[4985]: I0127 09:17:59.357388 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 27 09:17:59 crc kubenswrapper[4985]: I0127 09:17:59.357666 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 27 09:17:59 crc kubenswrapper[4985]: I0127 09:17:59.357675 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-s87fp"
Jan 27 09:17:59 crc kubenswrapper[4985]: I0127 09:17:59.357802 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 27 09:17:59 crc kubenswrapper[4985]: I0127 09:17:59.373030 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xswt5"]
Jan 27 09:17:59 crc kubenswrapper[4985]: I0127 09:17:59.449185 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ca890687-26ad-46f0-9ca5-8c245c6f4b22-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xswt5\" (UID: \"ca890687-26ad-46f0-9ca5-8c245c6f4b22\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xswt5"
Jan 27 09:17:59 crc kubenswrapper[4985]: I0127 09:17:59.449269 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca890687-26ad-46f0-9ca5-8c245c6f4b22-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xswt5\" (UID: \"ca890687-26ad-46f0-9ca5-8c245c6f4b22\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xswt5"
Jan 27 09:17:59 crc kubenswrapper[4985]: I0127 09:17:59.449786 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nv9c8\" (UniqueName: \"kubernetes.io/projected/ca890687-26ad-46f0-9ca5-8c245c6f4b22-kube-api-access-nv9c8\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xswt5\" (UID: \"ca890687-26ad-46f0-9ca5-8c245c6f4b22\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xswt5"
Jan 27 09:17:59 crc kubenswrapper[4985]: I0127 09:17:59.449970 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ca890687-26ad-46f0-9ca5-8c245c6f4b22-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xswt5\" (UID: \"ca890687-26ad-46f0-9ca5-8c245c6f4b22\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xswt5"
Jan 27 09:17:59 crc kubenswrapper[4985]: I0127 09:17:59.551422 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ca890687-26ad-46f0-9ca5-8c245c6f4b22-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xswt5\" (UID: \"ca890687-26ad-46f0-9ca5-8c245c6f4b22\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xswt5"
Jan 27 09:17:59 crc kubenswrapper[4985]: I0127 09:17:59.551493 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ca890687-26ad-46f0-9ca5-8c245c6f4b22-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xswt5\" (UID: \"ca890687-26ad-46f0-9ca5-8c245c6f4b22\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xswt5"
Jan 27 09:17:59 crc kubenswrapper[4985]: I0127 09:17:59.551555 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca890687-26ad-46f0-9ca5-8c245c6f4b22-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xswt5\" (UID: \"ca890687-26ad-46f0-9ca5-8c245c6f4b22\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xswt5"
Jan 27 09:17:59 crc kubenswrapper[4985]: I0127 09:17:59.551655 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nv9c8\" (UniqueName: \"kubernetes.io/projected/ca890687-26ad-46f0-9ca5-8c245c6f4b22-kube-api-access-nv9c8\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xswt5\" (UID: \"ca890687-26ad-46f0-9ca5-8c245c6f4b22\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xswt5"
Jan 27 09:17:59 crc kubenswrapper[4985]: I0127 09:17:59.556448 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ca890687-26ad-46f0-9ca5-8c245c6f4b22-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xswt5\" (UID: \"ca890687-26ad-46f0-9ca5-8c245c6f4b22\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xswt5"
Jan 27 09:17:59 crc kubenswrapper[4985]: I0127 09:17:59.556754 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca890687-26ad-46f0-9ca5-8c245c6f4b22-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xswt5\" (UID: \"ca890687-26ad-46f0-9ca5-8c245c6f4b22\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xswt5"
Jan 27 09:17:59 crc kubenswrapper[4985]: I0127 09:17:59.557017 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ca890687-26ad-46f0-9ca5-8c245c6f4b22-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xswt5\" (UID: \"ca890687-26ad-46f0-9ca5-8c245c6f4b22\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xswt5"
Jan 27 09:17:59 crc kubenswrapper[4985]: I0127 09:17:59.569241 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nv9c8\" (UniqueName: \"kubernetes.io/projected/ca890687-26ad-46f0-9ca5-8c245c6f4b22-kube-api-access-nv9c8\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xswt5\" (UID: \"ca890687-26ad-46f0-9ca5-8c245c6f4b22\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xswt5"
Jan 27 09:17:59 crc kubenswrapper[4985]: I0127 09:17:59.718188 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xswt5"
Jan 27 09:18:00 crc kubenswrapper[4985]: I0127 09:18:00.253578 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xswt5"]
Jan 27 09:18:00 crc kubenswrapper[4985]: I0127 09:18:00.293593 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xswt5" event={"ID":"ca890687-26ad-46f0-9ca5-8c245c6f4b22","Type":"ContainerStarted","Data":"c615ac8ec308ea3536bbfc562d8388eca599a649f2f0bf78eb7776d4f25344c1"}
Jan 27 09:18:00 crc kubenswrapper[4985]: I0127 09:18:00.293775 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-q22nh" podUID="8596c50f-b932-44d4-adb0-b165f1c6b042" containerName="registry-server" containerID="cri-o://de27cd1e509c53f3eb67898015d601c350f9db86f088a661ba921ecfe505fe96" gracePeriod=2
Jan 27 09:18:00 crc kubenswrapper[4985]: I0127 09:18:00.866391 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q22nh"
Jan 27 09:18:00 crc kubenswrapper[4985]: I0127 09:18:00.980096 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8596c50f-b932-44d4-adb0-b165f1c6b042-utilities\") pod \"8596c50f-b932-44d4-adb0-b165f1c6b042\" (UID: \"8596c50f-b932-44d4-adb0-b165f1c6b042\") "
Jan 27 09:18:00 crc kubenswrapper[4985]: I0127 09:18:00.980150 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8596c50f-b932-44d4-adb0-b165f1c6b042-catalog-content\") pod \"8596c50f-b932-44d4-adb0-b165f1c6b042\" (UID: \"8596c50f-b932-44d4-adb0-b165f1c6b042\") "
Jan 27 09:18:00 crc kubenswrapper[4985]: I0127 09:18:00.980204 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vn4xm\" (UniqueName: \"kubernetes.io/projected/8596c50f-b932-44d4-adb0-b165f1c6b042-kube-api-access-vn4xm\") pod \"8596c50f-b932-44d4-adb0-b165f1c6b042\" (UID: \"8596c50f-b932-44d4-adb0-b165f1c6b042\") "
Jan 27 09:18:00 crc kubenswrapper[4985]: I0127 09:18:00.981328 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8596c50f-b932-44d4-adb0-b165f1c6b042-utilities" (OuterVolumeSpecName: "utilities") pod "8596c50f-b932-44d4-adb0-b165f1c6b042" (UID: "8596c50f-b932-44d4-adb0-b165f1c6b042"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 09:18:00 crc kubenswrapper[4985]: I0127 09:18:00.995481 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8596c50f-b932-44d4-adb0-b165f1c6b042-kube-api-access-vn4xm" (OuterVolumeSpecName: "kube-api-access-vn4xm") pod "8596c50f-b932-44d4-adb0-b165f1c6b042" (UID: "8596c50f-b932-44d4-adb0-b165f1c6b042"). InnerVolumeSpecName "kube-api-access-vn4xm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 09:18:01 crc kubenswrapper[4985]: I0127 09:18:01.009693 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8596c50f-b932-44d4-adb0-b165f1c6b042-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8596c50f-b932-44d4-adb0-b165f1c6b042" (UID: "8596c50f-b932-44d4-adb0-b165f1c6b042"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 09:18:01 crc kubenswrapper[4985]: I0127 09:18:01.082040 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8596c50f-b932-44d4-adb0-b165f1c6b042-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 09:18:01 crc kubenswrapper[4985]: I0127 09:18:01.082079 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8596c50f-b932-44d4-adb0-b165f1c6b042-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 09:18:01 crc kubenswrapper[4985]: I0127 09:18:01.082090 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vn4xm\" (UniqueName: \"kubernetes.io/projected/8596c50f-b932-44d4-adb0-b165f1c6b042-kube-api-access-vn4xm\") on node \"crc\" DevicePath \"\""
Jan 27 09:18:01 crc kubenswrapper[4985]: I0127 09:18:01.305105 4985 generic.go:334] "Generic (PLEG): container finished" podID="8596c50f-b932-44d4-adb0-b165f1c6b042" containerID="de27cd1e509c53f3eb67898015d601c350f9db86f088a661ba921ecfe505fe96" exitCode=0
Jan 27 09:18:01 crc kubenswrapper[4985]: I0127 09:18:01.305183 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q22nh" event={"ID":"8596c50f-b932-44d4-adb0-b165f1c6b042","Type":"ContainerDied","Data":"de27cd1e509c53f3eb67898015d601c350f9db86f088a661ba921ecfe505fe96"}
Jan 27 09:18:01 crc kubenswrapper[4985]: I0127 09:18:01.305229 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q22nh" event={"ID":"8596c50f-b932-44d4-adb0-b165f1c6b042","Type":"ContainerDied","Data":"772e77565884b4214b3abcec3c3c6cea53f9d5e2d741f049d77227ce231a6e24"}
Jan 27 09:18:01 crc kubenswrapper[4985]: I0127 09:18:01.305251 4985 scope.go:117] "RemoveContainer" containerID="de27cd1e509c53f3eb67898015d601c350f9db86f088a661ba921ecfe505fe96"
Jan 27 09:18:01 crc kubenswrapper[4985]: I0127 09:18:01.305357 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q22nh"
Jan 27 09:18:01 crc kubenswrapper[4985]: I0127 09:18:01.307970 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xswt5" event={"ID":"ca890687-26ad-46f0-9ca5-8c245c6f4b22","Type":"ContainerStarted","Data":"04f7d28713644f0e8b3f31ce2d68c816963541cf0190386ea15a5271baa94bfc"}
Jan 27 09:18:01 crc kubenswrapper[4985]: I0127 09:18:01.324969 4985 scope.go:117] "RemoveContainer" containerID="637cbe7d0471c9f929e4826f5d21f517103551fc846812a5c28e56b6cf3b6cd8"
Jan 27 09:18:01 crc kubenswrapper[4985]: I0127 09:18:01.332067 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xswt5" podStartSLOduration=1.541804826 podStartE2EDuration="2.332050178s" podCreationTimestamp="2026-01-27 09:17:59 +0000 UTC" firstStartedPulling="2026-01-27 09:18:00.258143115 +0000 UTC m=+1464.549237956" lastFinishedPulling="2026-01-27 09:18:01.048388467 +0000 UTC m=+1465.339483308" observedRunningTime="2026-01-27 09:18:01.323674248 +0000 UTC m=+1465.614769089" watchObservedRunningTime="2026-01-27 09:18:01.332050178 +0000 UTC m=+1465.623145019"
Jan 27 09:18:01 crc kubenswrapper[4985]: I0127 09:18:01.352285 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q22nh"]
Jan 27 09:18:01 crc kubenswrapper[4985]: I0127 09:18:01.366950 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-q22nh"]
Jan 27 09:18:01 crc kubenswrapper[4985]: I0127 09:18:01.373248 4985 scope.go:117] "RemoveContainer" containerID="98cd6f310f989d654a4e6feb883d71e3091b11c3d988a5525c13ca4f185167e3"
Jan 27 09:18:01 crc kubenswrapper[4985]: I0127 09:18:01.403135 4985 scope.go:117] "RemoveContainer" containerID="de27cd1e509c53f3eb67898015d601c350f9db86f088a661ba921ecfe505fe96"
Jan 27 09:18:01 crc kubenswrapper[4985]: E0127 09:18:01.403756 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de27cd1e509c53f3eb67898015d601c350f9db86f088a661ba921ecfe505fe96\": container with ID starting with de27cd1e509c53f3eb67898015d601c350f9db86f088a661ba921ecfe505fe96 not found: ID does not exist" containerID="de27cd1e509c53f3eb67898015d601c350f9db86f088a661ba921ecfe505fe96"
Jan 27 09:18:01 crc kubenswrapper[4985]: I0127 09:18:01.403797 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de27cd1e509c53f3eb67898015d601c350f9db86f088a661ba921ecfe505fe96"} err="failed to get container status \"de27cd1e509c53f3eb67898015d601c350f9db86f088a661ba921ecfe505fe96\": rpc error: code = NotFound desc = could not find container \"de27cd1e509c53f3eb67898015d601c350f9db86f088a661ba921ecfe505fe96\": container with ID starting with de27cd1e509c53f3eb67898015d601c350f9db86f088a661ba921ecfe505fe96 not found: ID does not exist"
Jan 27 09:18:01 crc kubenswrapper[4985]: I0127 09:18:01.403825 4985 scope.go:117] "RemoveContainer" containerID="637cbe7d0471c9f929e4826f5d21f517103551fc846812a5c28e56b6cf3b6cd8"
Jan 27 09:18:01 crc kubenswrapper[4985]: E0127 09:18:01.404245 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"637cbe7d0471c9f929e4826f5d21f517103551fc846812a5c28e56b6cf3b6cd8\": container with ID starting with 637cbe7d0471c9f929e4826f5d21f517103551fc846812a5c28e56b6cf3b6cd8 not found: ID does not exist" containerID="637cbe7d0471c9f929e4826f5d21f517103551fc846812a5c28e56b6cf3b6cd8"
Jan 27 09:18:01 crc kubenswrapper[4985]: I0127 09:18:01.404305 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"637cbe7d0471c9f929e4826f5d21f517103551fc846812a5c28e56b6cf3b6cd8"} err="failed to get container status \"637cbe7d0471c9f929e4826f5d21f517103551fc846812a5c28e56b6cf3b6cd8\": rpc error: code = NotFound desc = could not find container \"637cbe7d0471c9f929e4826f5d21f517103551fc846812a5c28e56b6cf3b6cd8\": container with ID starting with 637cbe7d0471c9f929e4826f5d21f517103551fc846812a5c28e56b6cf3b6cd8 not found: ID does not exist"
Jan 27 09:18:01 crc kubenswrapper[4985]: I0127 09:18:01.404337 4985 scope.go:117] "RemoveContainer" containerID="98cd6f310f989d654a4e6feb883d71e3091b11c3d988a5525c13ca4f185167e3"
Jan 27 09:18:01 crc kubenswrapper[4985]: E0127 09:18:01.404729 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98cd6f310f989d654a4e6feb883d71e3091b11c3d988a5525c13ca4f185167e3\": container with ID starting with 98cd6f310f989d654a4e6feb883d71e3091b11c3d988a5525c13ca4f185167e3 not found: ID does not exist" containerID="98cd6f310f989d654a4e6feb883d71e3091b11c3d988a5525c13ca4f185167e3"
Jan 27 09:18:01 crc kubenswrapper[4985]: I0127 09:18:01.404755 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98cd6f310f989d654a4e6feb883d71e3091b11c3d988a5525c13ca4f185167e3"} err="failed to get container status \"98cd6f310f989d654a4e6feb883d71e3091b11c3d988a5525c13ca4f185167e3\": rpc error: code = NotFound desc = could not find container \"98cd6f310f989d654a4e6feb883d71e3091b11c3d988a5525c13ca4f185167e3\": container with ID starting with 98cd6f310f989d654a4e6feb883d71e3091b11c3d988a5525c13ca4f185167e3 not found: ID does not exist"
Jan 27 09:18:02 crc kubenswrapper[4985]: I0127 09:18:02.463283 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8596c50f-b932-44d4-adb0-b165f1c6b042" path="/var/lib/kubelet/pods/8596c50f-b932-44d4-adb0-b165f1c6b042/volumes"
Jan 27 09:18:11 crc kubenswrapper[4985]: I0127 09:18:11.828790 4985 patch_prober.go:28] interesting pod/machine-config-daemon-lp9n5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 09:18:11 crc kubenswrapper[4985]: I0127 09:18:11.829357 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 09:18:11 crc kubenswrapper[4985]: I0127 09:18:11.829404 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5"
Jan 27 09:18:11 crc kubenswrapper[4985]: I0127 09:18:11.830215 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d055d7fe9763dfc4b99c0db32ce38e86fad249d2d222ca9eacd889ec0193a129"} pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 27 09:18:11 crc kubenswrapper[4985]: I0127 09:18:11.830271 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" containerName="machine-config-daemon" containerID="cri-o://d055d7fe9763dfc4b99c0db32ce38e86fad249d2d222ca9eacd889ec0193a129" gracePeriod=600
Jan 27 09:18:12 crc kubenswrapper[4985]: I0127 09:18:12.423418 4985 generic.go:334] "Generic (PLEG): container finished" podID="c066dd2f-48d4-4f4f-935d-0e772678e610" containerID="d055d7fe9763dfc4b99c0db32ce38e86fad249d2d222ca9eacd889ec0193a129" exitCode=0
Jan 27 09:18:12 crc kubenswrapper[4985]: I0127 09:18:12.423553 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" event={"ID":"c066dd2f-48d4-4f4f-935d-0e772678e610","Type":"ContainerDied","Data":"d055d7fe9763dfc4b99c0db32ce38e86fad249d2d222ca9eacd889ec0193a129"}
Jan 27 09:18:12 crc kubenswrapper[4985]: I0127 09:18:12.424583 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" event={"ID":"c066dd2f-48d4-4f4f-935d-0e772678e610","Type":"ContainerStarted","Data":"7ba28a0f021e3d5eea7e45a3d43bec2dd0d775a22e805e1915a3667c64bf2c54"}
Jan 27 09:18:12 crc kubenswrapper[4985]: I0127 09:18:12.424625 4985 scope.go:117] "RemoveContainer" containerID="c0c7e1753712389ebd0528734323af45a2441fb966cbcf871cf1260ca96d824f"
Jan 27 09:18:42 crc kubenswrapper[4985]: I0127 09:18:42.072064 4985 scope.go:117] "RemoveContainer" containerID="abb1d2c7a620858ee2962d43c7b3c3f76d60afdf1e8c844a1749ce7974e81420"
Jan 27 09:18:42 crc kubenswrapper[4985]: I0127 09:18:42.096173 4985 scope.go:117] "RemoveContainer" containerID="7f1f756bf210c6df792b11aa33954026c0ec1705564e1adb57c28546d39bee61"
Jan 27 09:18:42 crc kubenswrapper[4985]: I0127 09:18:42.117237 4985 scope.go:117] "RemoveContainer" containerID="49643251f88ce768aa087d6019abea8b00565a7cd22db67f9b8bcaae97610be0"
Jan 27 09:18:42 crc kubenswrapper[4985]: I0127 09:18:42.322237 4985 scope.go:117] "RemoveContainer" containerID="a4b07c371f68b029087b28689de1f3bbde3f3ec765bf26e132cbf5e38e140b3e"
Jan 27 09:18:42 crc kubenswrapper[4985]: I0127 09:18:42.340402 4985 scope.go:117] "RemoveContainer" containerID="caee8ea061cd5befe6d69010922f3541488f1c53d09190a33bb801be6d813d5c"
Jan 27 09:18:42 crc kubenswrapper[4985]: I0127 09:18:42.497329 4985 scope.go:117] "RemoveContainer" containerID="62b784fa6f3f9b9a0c3f1e0f989a293f5b6f100551d74bbef6451f711e94cd2f"
Jan 27 09:18:42 crc kubenswrapper[4985]: I0127 09:18:42.533401 4985 scope.go:117] "RemoveContainer" containerID="d2f74e4c94aa628260eb17e7557a103b096c26699f38d1831b18054c547023c9"
Jan 27 09:18:42 crc kubenswrapper[4985]: I0127 09:18:42.583274 4985 scope.go:117] "RemoveContainer" containerID="edbcf5937465700859dc816db60e1e0552e996110cb8e072fffb5f8e7c5f91fd"
Jan 27 09:18:42 crc kubenswrapper[4985]: I0127 09:18:42.761592 4985 scope.go:117] "RemoveContainer" containerID="910b9810ea9d46335a74b8ae95a287a497a675c8dfa7c31dbc56cd0fe6a8cca9"
Jan 27 09:19:14 crc kubenswrapper[4985]: I0127 09:19:14.003603 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-x7hw2"]
Jan 27 09:19:14 crc kubenswrapper[4985]: E0127 09:19:14.005013 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8596c50f-b932-44d4-adb0-b165f1c6b042" containerName="extract-utilities"
Jan 27 09:19:14 crc kubenswrapper[4985]: I0127 09:19:14.005035 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="8596c50f-b932-44d4-adb0-b165f1c6b042" containerName="extract-utilities"
Jan 27 09:19:14 crc kubenswrapper[4985]: E0127 09:19:14.005068 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8596c50f-b932-44d4-adb0-b165f1c6b042" containerName="registry-server"
Jan 27 09:19:14 crc kubenswrapper[4985]: I0127 09:19:14.005077 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="8596c50f-b932-44d4-adb0-b165f1c6b042" containerName="registry-server"
Jan 27 09:19:14 crc kubenswrapper[4985]: E0127 09:19:14.005093 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8596c50f-b932-44d4-adb0-b165f1c6b042" containerName="extract-content"
Jan 27 09:19:14 crc kubenswrapper[4985]: I0127 09:19:14.005101 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="8596c50f-b932-44d4-adb0-b165f1c6b042" containerName="extract-content"
Jan 27 09:19:14 crc kubenswrapper[4985]: I0127 09:19:14.005305 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="8596c50f-b932-44d4-adb0-b165f1c6b042" containerName="registry-server"
Jan 27 09:19:14 crc kubenswrapper[4985]: I0127 09:19:14.006717 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x7hw2"
Jan 27 09:19:14 crc kubenswrapper[4985]: I0127 09:19:14.040625 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x7hw2"]
Jan 27 09:19:14 crc kubenswrapper[4985]: I0127 09:19:14.066061 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c5470ad-d59f-4498-9503-e597545201e1-catalog-content\") pod \"community-operators-x7hw2\" (UID: \"8c5470ad-d59f-4498-9503-e597545201e1\") " pod="openshift-marketplace/community-operators-x7hw2"
Jan 27 09:19:14 crc kubenswrapper[4985]: I0127 09:19:14.066359 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rs92k\" (UniqueName: \"kubernetes.io/projected/8c5470ad-d59f-4498-9503-e597545201e1-kube-api-access-rs92k\") pod \"community-operators-x7hw2\" (UID: \"8c5470ad-d59f-4498-9503-e597545201e1\") " pod="openshift-marketplace/community-operators-x7hw2"
Jan 27 09:19:14 crc kubenswrapper[4985]: I0127 09:19:14.066448 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c5470ad-d59f-4498-9503-e597545201e1-utilities\") pod \"community-operators-x7hw2\" (UID: \"8c5470ad-d59f-4498-9503-e597545201e1\") " pod="openshift-marketplace/community-operators-x7hw2"
Jan 27 09:19:14 crc kubenswrapper[4985]: I0127 09:19:14.168343 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c5470ad-d59f-4498-9503-e597545201e1-utilities\") pod \"community-operators-x7hw2\" (UID: \"8c5470ad-d59f-4498-9503-e597545201e1\") " pod="openshift-marketplace/community-operators-x7hw2"
Jan 27 09:19:14 crc kubenswrapper[4985]: I0127 09:19:14.168877 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c5470ad-d59f-4498-9503-e597545201e1-utilities\") pod \"community-operators-x7hw2\" (UID: \"8c5470ad-d59f-4498-9503-e597545201e1\") " pod="openshift-marketplace/community-operators-x7hw2"
Jan 27 09:19:14 crc kubenswrapper[4985]: I0127 09:19:14.169386 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c5470ad-d59f-4498-9503-e597545201e1-catalog-content\") pod \"community-operators-x7hw2\" (UID: \"8c5470ad-d59f-4498-9503-e597545201e1\") " pod="openshift-marketplace/community-operators-x7hw2"
Jan 27 09:19:14 crc kubenswrapper[4985]: I0127 09:19:14.169556 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rs92k\" (UniqueName: \"kubernetes.io/projected/8c5470ad-d59f-4498-9503-e597545201e1-kube-api-access-rs92k\") pod \"community-operators-x7hw2\" (UID: \"8c5470ad-d59f-4498-9503-e597545201e1\") " pod="openshift-marketplace/community-operators-x7hw2"
Jan 27 09:19:14 crc kubenswrapper[4985]: I0127 09:19:14.169821 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c5470ad-d59f-4498-9503-e597545201e1-catalog-content\") pod \"community-operators-x7hw2\" (UID: \"8c5470ad-d59f-4498-9503-e597545201e1\") " pod="openshift-marketplace/community-operators-x7hw2"
Jan 27 09:19:14 crc kubenswrapper[4985]: I0127 09:19:14.192458 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rs92k\" (UniqueName: \"kubernetes.io/projected/8c5470ad-d59f-4498-9503-e597545201e1-kube-api-access-rs92k\") pod \"community-operators-x7hw2\" (UID: \"8c5470ad-d59f-4498-9503-e597545201e1\") " pod="openshift-marketplace/community-operators-x7hw2"
Jan 27 09:19:14 crc kubenswrapper[4985]: I0127 09:19:14.337304 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x7hw2"
Jan 27 09:19:14 crc kubenswrapper[4985]: I0127 09:19:14.873690 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x7hw2"]
Jan 27 09:19:15 crc kubenswrapper[4985]: I0127 09:19:15.005958 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x7hw2" event={"ID":"8c5470ad-d59f-4498-9503-e597545201e1","Type":"ContainerStarted","Data":"b806e75745d90e47f3c4d43dac637710f9d884c26584298fda9a59bb88b8759f"}
Jan 27 09:19:16 crc kubenswrapper[4985]: I0127 09:19:16.015106 4985 generic.go:334] "Generic (PLEG): container finished" podID="8c5470ad-d59f-4498-9503-e597545201e1" containerID="704cef643e4d58ba06f21d83de22fbbc6016d3b3903b6696326654bba686330f" exitCode=0
Jan 27 09:19:16 crc kubenswrapper[4985]: I0127 09:19:16.015176 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x7hw2" event={"ID":"8c5470ad-d59f-4498-9503-e597545201e1","Type":"ContainerDied","Data":"704cef643e4d58ba06f21d83de22fbbc6016d3b3903b6696326654bba686330f"}
Jan 27 09:19:19 crc kubenswrapper[4985]: I0127 09:19:19.055684 4985 generic.go:334] "Generic (PLEG): container finished" podID="8c5470ad-d59f-4498-9503-e597545201e1" containerID="5994382afe1fa051422e909a5d1f396b2d0f6e2c94bbf8d8ae08038a2c0c5e4d" exitCode=0
Jan 27 09:19:19 crc kubenswrapper[4985]: I0127 09:19:19.055735 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x7hw2" event={"ID":"8c5470ad-d59f-4498-9503-e597545201e1","Type":"ContainerDied","Data":"5994382afe1fa051422e909a5d1f396b2d0f6e2c94bbf8d8ae08038a2c0c5e4d"}
Jan 27 09:19:25 crc kubenswrapper[4985]: I0127 09:19:25.117740 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x7hw2" event={"ID":"8c5470ad-d59f-4498-9503-e597545201e1","Type":"ContainerStarted","Data":"ecfec987605b4dcc1c347eb0b2755f950de90bb803e8984267e60c09b0fe44df"}
Jan 27 09:19:25 crc kubenswrapper[4985]: I0127 09:19:25.151917 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-x7hw2" podStartSLOduration=4.023169059 podStartE2EDuration="12.151892677s" podCreationTimestamp="2026-01-27 09:19:13 +0000 UTC" firstStartedPulling="2026-01-27 09:19:16.018007121 +0000 UTC m=+1540.309101962" lastFinishedPulling="2026-01-27 09:19:24.146730739 +0000 UTC m=+1548.437825580" observedRunningTime="2026-01-27 09:19:25.137596014 +0000 UTC m=+1549.428690865" watchObservedRunningTime="2026-01-27 09:19:25.151892677 +0000 UTC m=+1549.442987518"
Jan 27 09:19:34 crc kubenswrapper[4985]: I0127 09:19:34.337541 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-x7hw2"
Jan 27 09:19:34 crc kubenswrapper[4985]: I0127 09:19:34.338126 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-x7hw2"
Jan 27 09:19:34 crc kubenswrapper[4985]: I0127 09:19:34.383902 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-x7hw2"
Jan 27 09:19:35 crc kubenswrapper[4985]: I0127 09:19:35.261409 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-x7hw2"
Jan 27 09:19:35 crc kubenswrapper[4985]: I0127 09:19:35.320963 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-x7hw2"]
Jan 27 09:19:37 crc kubenswrapper[4985]: I0127 09:19:37.226023 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-x7hw2" podUID="8c5470ad-d59f-4498-9503-e597545201e1" containerName="registry-server" containerID="cri-o://ecfec987605b4dcc1c347eb0b2755f950de90bb803e8984267e60c09b0fe44df" gracePeriod=2
Jan 27 09:19:37 crc kubenswrapper[4985]: I0127 09:19:37.737234 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x7hw2"
Jan 27 09:19:37 crc kubenswrapper[4985]: I0127 09:19:37.852229 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c5470ad-d59f-4498-9503-e597545201e1-catalog-content\") pod \"8c5470ad-d59f-4498-9503-e597545201e1\" (UID: \"8c5470ad-d59f-4498-9503-e597545201e1\") "
Jan 27 09:19:37 crc kubenswrapper[4985]: I0127 09:19:37.852387 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rs92k\" (UniqueName: \"kubernetes.io/projected/8c5470ad-d59f-4498-9503-e597545201e1-kube-api-access-rs92k\") pod \"8c5470ad-d59f-4498-9503-e597545201e1\" (UID: \"8c5470ad-d59f-4498-9503-e597545201e1\") "
Jan 27 09:19:37 crc kubenswrapper[4985]: I0127 09:19:37.852456 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c5470ad-d59f-4498-9503-e597545201e1-utilities\") pod \"8c5470ad-d59f-4498-9503-e597545201e1\" (UID: \"8c5470ad-d59f-4498-9503-e597545201e1\") "
Jan 27 09:19:37 crc kubenswrapper[4985]: I0127 09:19:37.853504 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c5470ad-d59f-4498-9503-e597545201e1-utilities" (OuterVolumeSpecName: "utilities") pod "8c5470ad-d59f-4498-9503-e597545201e1" (UID: "8c5470ad-d59f-4498-9503-e597545201e1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 09:19:37 crc kubenswrapper[4985]: I0127 09:19:37.857956 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c5470ad-d59f-4498-9503-e597545201e1-kube-api-access-rs92k" (OuterVolumeSpecName: "kube-api-access-rs92k") pod "8c5470ad-d59f-4498-9503-e597545201e1" (UID: "8c5470ad-d59f-4498-9503-e597545201e1"). InnerVolumeSpecName "kube-api-access-rs92k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 09:19:37 crc kubenswrapper[4985]: I0127 09:19:37.907004 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c5470ad-d59f-4498-9503-e597545201e1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8c5470ad-d59f-4498-9503-e597545201e1" (UID: "8c5470ad-d59f-4498-9503-e597545201e1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 09:19:37 crc kubenswrapper[4985]: I0127 09:19:37.954833 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c5470ad-d59f-4498-9503-e597545201e1-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 09:19:37 crc kubenswrapper[4985]: I0127 09:19:37.954874 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rs92k\" (UniqueName: \"kubernetes.io/projected/8c5470ad-d59f-4498-9503-e597545201e1-kube-api-access-rs92k\") on node \"crc\" DevicePath \"\""
Jan 27 09:19:37 crc kubenswrapper[4985]: I0127 09:19:37.954891 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c5470ad-d59f-4498-9503-e597545201e1-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 09:19:38 crc kubenswrapper[4985]: I0127 09:19:38.238666 4985 generic.go:334] "Generic (PLEG): container finished" podID="8c5470ad-d59f-4498-9503-e597545201e1" containerID="ecfec987605b4dcc1c347eb0b2755f950de90bb803e8984267e60c09b0fe44df" exitCode=0
Jan 27 09:19:38 crc kubenswrapper[4985]: I0127 09:19:38.238794 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x7hw2" event={"ID":"8c5470ad-d59f-4498-9503-e597545201e1","Type":"ContainerDied","Data":"ecfec987605b4dcc1c347eb0b2755f950de90bb803e8984267e60c09b0fe44df"}
Jan 27 09:19:38 crc kubenswrapper[4985]: I0127 09:19:38.238821 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x7hw2" event={"ID":"8c5470ad-d59f-4498-9503-e597545201e1","Type":"ContainerDied","Data":"b806e75745d90e47f3c4d43dac637710f9d884c26584298fda9a59bb88b8759f"}
Jan 27 09:19:38 crc kubenswrapper[4985]: I0127 09:19:38.238837 4985 scope.go:117] "RemoveContainer" containerID="ecfec987605b4dcc1c347eb0b2755f950de90bb803e8984267e60c09b0fe44df"
Jan 27 09:19:38 crc kubenswrapper[4985]: I0127
09:19:38.238960 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x7hw2" Jan 27 09:19:38 crc kubenswrapper[4985]: I0127 09:19:38.282647 4985 scope.go:117] "RemoveContainer" containerID="5994382afe1fa051422e909a5d1f396b2d0f6e2c94bbf8d8ae08038a2c0c5e4d" Jan 27 09:19:38 crc kubenswrapper[4985]: I0127 09:19:38.293498 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-x7hw2"] Jan 27 09:19:38 crc kubenswrapper[4985]: I0127 09:19:38.304700 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-x7hw2"] Jan 27 09:19:38 crc kubenswrapper[4985]: I0127 09:19:38.313343 4985 scope.go:117] "RemoveContainer" containerID="704cef643e4d58ba06f21d83de22fbbc6016d3b3903b6696326654bba686330f" Jan 27 09:19:38 crc kubenswrapper[4985]: I0127 09:19:38.368596 4985 scope.go:117] "RemoveContainer" containerID="ecfec987605b4dcc1c347eb0b2755f950de90bb803e8984267e60c09b0fe44df" Jan 27 09:19:38 crc kubenswrapper[4985]: E0127 09:19:38.369792 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ecfec987605b4dcc1c347eb0b2755f950de90bb803e8984267e60c09b0fe44df\": container with ID starting with ecfec987605b4dcc1c347eb0b2755f950de90bb803e8984267e60c09b0fe44df not found: ID does not exist" containerID="ecfec987605b4dcc1c347eb0b2755f950de90bb803e8984267e60c09b0fe44df" Jan 27 09:19:38 crc kubenswrapper[4985]: I0127 09:19:38.369958 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ecfec987605b4dcc1c347eb0b2755f950de90bb803e8984267e60c09b0fe44df"} err="failed to get container status \"ecfec987605b4dcc1c347eb0b2755f950de90bb803e8984267e60c09b0fe44df\": rpc error: code = NotFound desc = could not find container \"ecfec987605b4dcc1c347eb0b2755f950de90bb803e8984267e60c09b0fe44df\": container with ID starting with 
ecfec987605b4dcc1c347eb0b2755f950de90bb803e8984267e60c09b0fe44df not found: ID does not exist" Jan 27 09:19:38 crc kubenswrapper[4985]: I0127 09:19:38.370106 4985 scope.go:117] "RemoveContainer" containerID="5994382afe1fa051422e909a5d1f396b2d0f6e2c94bbf8d8ae08038a2c0c5e4d" Jan 27 09:19:38 crc kubenswrapper[4985]: E0127 09:19:38.371214 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5994382afe1fa051422e909a5d1f396b2d0f6e2c94bbf8d8ae08038a2c0c5e4d\": container with ID starting with 5994382afe1fa051422e909a5d1f396b2d0f6e2c94bbf8d8ae08038a2c0c5e4d not found: ID does not exist" containerID="5994382afe1fa051422e909a5d1f396b2d0f6e2c94bbf8d8ae08038a2c0c5e4d" Jan 27 09:19:38 crc kubenswrapper[4985]: I0127 09:19:38.371278 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5994382afe1fa051422e909a5d1f396b2d0f6e2c94bbf8d8ae08038a2c0c5e4d"} err="failed to get container status \"5994382afe1fa051422e909a5d1f396b2d0f6e2c94bbf8d8ae08038a2c0c5e4d\": rpc error: code = NotFound desc = could not find container \"5994382afe1fa051422e909a5d1f396b2d0f6e2c94bbf8d8ae08038a2c0c5e4d\": container with ID starting with 5994382afe1fa051422e909a5d1f396b2d0f6e2c94bbf8d8ae08038a2c0c5e4d not found: ID does not exist" Jan 27 09:19:38 crc kubenswrapper[4985]: I0127 09:19:38.371313 4985 scope.go:117] "RemoveContainer" containerID="704cef643e4d58ba06f21d83de22fbbc6016d3b3903b6696326654bba686330f" Jan 27 09:19:38 crc kubenswrapper[4985]: E0127 09:19:38.371688 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"704cef643e4d58ba06f21d83de22fbbc6016d3b3903b6696326654bba686330f\": container with ID starting with 704cef643e4d58ba06f21d83de22fbbc6016d3b3903b6696326654bba686330f not found: ID does not exist" containerID="704cef643e4d58ba06f21d83de22fbbc6016d3b3903b6696326654bba686330f" Jan 27 09:19:38 crc 
kubenswrapper[4985]: I0127 09:19:38.371783 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"704cef643e4d58ba06f21d83de22fbbc6016d3b3903b6696326654bba686330f"} err="failed to get container status \"704cef643e4d58ba06f21d83de22fbbc6016d3b3903b6696326654bba686330f\": rpc error: code = NotFound desc = could not find container \"704cef643e4d58ba06f21d83de22fbbc6016d3b3903b6696326654bba686330f\": container with ID starting with 704cef643e4d58ba06f21d83de22fbbc6016d3b3903b6696326654bba686330f not found: ID does not exist" Jan 27 09:19:38 crc kubenswrapper[4985]: I0127 09:19:38.470057 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c5470ad-d59f-4498-9503-e597545201e1" path="/var/lib/kubelet/pods/8c5470ad-d59f-4498-9503-e597545201e1/volumes" Jan 27 09:20:41 crc kubenswrapper[4985]: I0127 09:20:41.828746 4985 patch_prober.go:28] interesting pod/machine-config-daemon-lp9n5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 09:20:41 crc kubenswrapper[4985]: I0127 09:20:41.829279 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 09:20:42 crc kubenswrapper[4985]: I0127 09:20:42.932332 4985 scope.go:117] "RemoveContainer" containerID="29ecb5f13cbff4464adc52fc4d89a3390ca4788c971de74be30de4377341c7e5" Jan 27 09:20:42 crc kubenswrapper[4985]: I0127 09:20:42.956274 4985 scope.go:117] "RemoveContainer" containerID="c02e9bc8d11a5a3e8d3b660efb03e6edd0a78117da4d213f61d76a06b8090578" Jan 27 09:20:42 crc kubenswrapper[4985]: I0127 09:20:42.979386 4985 
scope.go:117] "RemoveContainer" containerID="803ce15e6f511ee278d47a58b7973beb894bcfed820b20202797144ceb5c9002" Jan 27 09:20:43 crc kubenswrapper[4985]: I0127 09:20:43.000089 4985 scope.go:117] "RemoveContainer" containerID="4ef2b824f0ab3d1a98d5e133b32d72f98494e3386ccac764737e531cee42dc38" Jan 27 09:21:11 crc kubenswrapper[4985]: I0127 09:21:11.828941 4985 patch_prober.go:28] interesting pod/machine-config-daemon-lp9n5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 09:21:11 crc kubenswrapper[4985]: I0127 09:21:11.829751 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 09:21:17 crc kubenswrapper[4985]: I0127 09:21:17.051228 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-107f-account-create-update-jv8z8"] Jan 27 09:21:17 crc kubenswrapper[4985]: I0127 09:21:17.064989 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-107f-account-create-update-jv8z8"] Jan 27 09:21:17 crc kubenswrapper[4985]: I0127 09:21:17.105885 4985 generic.go:334] "Generic (PLEG): container finished" podID="ca890687-26ad-46f0-9ca5-8c245c6f4b22" containerID="04f7d28713644f0e8b3f31ce2d68c816963541cf0190386ea15a5271baa94bfc" exitCode=0 Jan 27 09:21:17 crc kubenswrapper[4985]: I0127 09:21:17.105948 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xswt5" event={"ID":"ca890687-26ad-46f0-9ca5-8c245c6f4b22","Type":"ContainerDied","Data":"04f7d28713644f0e8b3f31ce2d68c816963541cf0190386ea15a5271baa94bfc"} Jan 27 09:21:18 
crc kubenswrapper[4985]: I0127 09:21:18.036567 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-zb58b"] Jan 27 09:21:18 crc kubenswrapper[4985]: I0127 09:21:18.049048 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-81c0-account-create-update-p7n55"] Jan 27 09:21:18 crc kubenswrapper[4985]: I0127 09:21:18.063155 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-zb58b"] Jan 27 09:21:18 crc kubenswrapper[4985]: I0127 09:21:18.072989 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-b30d-account-create-update-4h6gn"] Jan 27 09:21:18 crc kubenswrapper[4985]: I0127 09:21:18.081911 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-b30d-account-create-update-4h6gn"] Jan 27 09:21:18 crc kubenswrapper[4985]: I0127 09:21:18.093340 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-81c0-account-create-update-p7n55"] Jan 27 09:21:18 crc kubenswrapper[4985]: I0127 09:21:18.101979 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-mhl9w"] Jan 27 09:21:18 crc kubenswrapper[4985]: I0127 09:21:18.113538 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-7h2cs"] Jan 27 09:21:18 crc kubenswrapper[4985]: I0127 09:21:18.125973 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-mhl9w"] Jan 27 09:21:18 crc kubenswrapper[4985]: I0127 09:21:18.140440 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-7h2cs"] Jan 27 09:21:18 crc kubenswrapper[4985]: I0127 09:21:18.468651 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04009233-f269-4fda-b5ba-4a806e56b4ea" path="/var/lib/kubelet/pods/04009233-f269-4fda-b5ba-4a806e56b4ea/volumes" Jan 27 09:21:18 crc kubenswrapper[4985]: I0127 09:21:18.470780 4985 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="0c5fea9a-94ec-4d40-a9ce-e8245a49f14e" path="/var/lib/kubelet/pods/0c5fea9a-94ec-4d40-a9ce-e8245a49f14e/volumes" Jan 27 09:21:18 crc kubenswrapper[4985]: I0127 09:21:18.471431 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1107e86e-6b40-4c4a-94bb-c478cf5954c8" path="/var/lib/kubelet/pods/1107e86e-6b40-4c4a-94bb-c478cf5954c8/volumes" Jan 27 09:21:18 crc kubenswrapper[4985]: I0127 09:21:18.472786 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="744384a9-ac5a-46ef-a549-37046198fecf" path="/var/lib/kubelet/pods/744384a9-ac5a-46ef-a549-37046198fecf/volumes" Jan 27 09:21:18 crc kubenswrapper[4985]: I0127 09:21:18.474331 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ff42621-0c5e-44bb-ba09-a536658065b8" path="/var/lib/kubelet/pods/7ff42621-0c5e-44bb-ba09-a536658065b8/volumes" Jan 27 09:21:18 crc kubenswrapper[4985]: I0127 09:21:18.474992 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca4dfbd8-ed4c-403a-a8dd-8cfc8ee54f0e" path="/var/lib/kubelet/pods/ca4dfbd8-ed4c-403a-a8dd-8cfc8ee54f0e/volumes" Jan 27 09:21:18 crc kubenswrapper[4985]: I0127 09:21:18.582343 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xswt5" Jan 27 09:21:18 crc kubenswrapper[4985]: I0127 09:21:18.756335 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ca890687-26ad-46f0-9ca5-8c245c6f4b22-ssh-key-openstack-edpm-ipam\") pod \"ca890687-26ad-46f0-9ca5-8c245c6f4b22\" (UID: \"ca890687-26ad-46f0-9ca5-8c245c6f4b22\") " Jan 27 09:21:18 crc kubenswrapper[4985]: I0127 09:21:18.756427 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ca890687-26ad-46f0-9ca5-8c245c6f4b22-inventory\") pod \"ca890687-26ad-46f0-9ca5-8c245c6f4b22\" (UID: \"ca890687-26ad-46f0-9ca5-8c245c6f4b22\") " Jan 27 09:21:18 crc kubenswrapper[4985]: I0127 09:21:18.756658 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca890687-26ad-46f0-9ca5-8c245c6f4b22-bootstrap-combined-ca-bundle\") pod \"ca890687-26ad-46f0-9ca5-8c245c6f4b22\" (UID: \"ca890687-26ad-46f0-9ca5-8c245c6f4b22\") " Jan 27 09:21:18 crc kubenswrapper[4985]: I0127 09:21:18.757024 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nv9c8\" (UniqueName: \"kubernetes.io/projected/ca890687-26ad-46f0-9ca5-8c245c6f4b22-kube-api-access-nv9c8\") pod \"ca890687-26ad-46f0-9ca5-8c245c6f4b22\" (UID: \"ca890687-26ad-46f0-9ca5-8c245c6f4b22\") " Jan 27 09:21:18 crc kubenswrapper[4985]: I0127 09:21:18.765593 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca890687-26ad-46f0-9ca5-8c245c6f4b22-kube-api-access-nv9c8" (OuterVolumeSpecName: "kube-api-access-nv9c8") pod "ca890687-26ad-46f0-9ca5-8c245c6f4b22" (UID: "ca890687-26ad-46f0-9ca5-8c245c6f4b22"). InnerVolumeSpecName "kube-api-access-nv9c8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:21:18 crc kubenswrapper[4985]: I0127 09:21:18.770554 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca890687-26ad-46f0-9ca5-8c245c6f4b22-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "ca890687-26ad-46f0-9ca5-8c245c6f4b22" (UID: "ca890687-26ad-46f0-9ca5-8c245c6f4b22"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:21:18 crc kubenswrapper[4985]: I0127 09:21:18.792918 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca890687-26ad-46f0-9ca5-8c245c6f4b22-inventory" (OuterVolumeSpecName: "inventory") pod "ca890687-26ad-46f0-9ca5-8c245c6f4b22" (UID: "ca890687-26ad-46f0-9ca5-8c245c6f4b22"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:21:18 crc kubenswrapper[4985]: I0127 09:21:18.804052 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca890687-26ad-46f0-9ca5-8c245c6f4b22-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ca890687-26ad-46f0-9ca5-8c245c6f4b22" (UID: "ca890687-26ad-46f0-9ca5-8c245c6f4b22"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:21:18 crc kubenswrapper[4985]: I0127 09:21:18.861187 4985 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ca890687-26ad-46f0-9ca5-8c245c6f4b22-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 09:21:18 crc kubenswrapper[4985]: I0127 09:21:18.861284 4985 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ca890687-26ad-46f0-9ca5-8c245c6f4b22-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 09:21:18 crc kubenswrapper[4985]: I0127 09:21:18.861298 4985 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca890687-26ad-46f0-9ca5-8c245c6f4b22-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:21:18 crc kubenswrapper[4985]: I0127 09:21:18.861310 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nv9c8\" (UniqueName: \"kubernetes.io/projected/ca890687-26ad-46f0-9ca5-8c245c6f4b22-kube-api-access-nv9c8\") on node \"crc\" DevicePath \"\"" Jan 27 09:21:19 crc kubenswrapper[4985]: I0127 09:21:19.130615 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xswt5" event={"ID":"ca890687-26ad-46f0-9ca5-8c245c6f4b22","Type":"ContainerDied","Data":"c615ac8ec308ea3536bbfc562d8388eca599a649f2f0bf78eb7776d4f25344c1"} Jan 27 09:21:19 crc kubenswrapper[4985]: I0127 09:21:19.130999 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c615ac8ec308ea3536bbfc562d8388eca599a649f2f0bf78eb7776d4f25344c1" Jan 27 09:21:19 crc kubenswrapper[4985]: I0127 09:21:19.130666 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xswt5" Jan 27 09:21:19 crc kubenswrapper[4985]: I0127 09:21:19.235364 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-w8bd6"] Jan 27 09:21:19 crc kubenswrapper[4985]: E0127 09:21:19.235863 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c5470ad-d59f-4498-9503-e597545201e1" containerName="extract-utilities" Jan 27 09:21:19 crc kubenswrapper[4985]: I0127 09:21:19.235887 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c5470ad-d59f-4498-9503-e597545201e1" containerName="extract-utilities" Jan 27 09:21:19 crc kubenswrapper[4985]: E0127 09:21:19.235916 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca890687-26ad-46f0-9ca5-8c245c6f4b22" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 27 09:21:19 crc kubenswrapper[4985]: I0127 09:21:19.235926 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca890687-26ad-46f0-9ca5-8c245c6f4b22" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 27 09:21:19 crc kubenswrapper[4985]: E0127 09:21:19.235950 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c5470ad-d59f-4498-9503-e597545201e1" containerName="registry-server" Jan 27 09:21:19 crc kubenswrapper[4985]: I0127 09:21:19.235956 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c5470ad-d59f-4498-9503-e597545201e1" containerName="registry-server" Jan 27 09:21:19 crc kubenswrapper[4985]: E0127 09:21:19.235969 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c5470ad-d59f-4498-9503-e597545201e1" containerName="extract-content" Jan 27 09:21:19 crc kubenswrapper[4985]: I0127 09:21:19.235976 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c5470ad-d59f-4498-9503-e597545201e1" containerName="extract-content" Jan 27 09:21:19 crc kubenswrapper[4985]: I0127 09:21:19.236187 
4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca890687-26ad-46f0-9ca5-8c245c6f4b22" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 27 09:21:19 crc kubenswrapper[4985]: I0127 09:21:19.236221 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c5470ad-d59f-4498-9503-e597545201e1" containerName="registry-server" Jan 27 09:21:19 crc kubenswrapper[4985]: I0127 09:21:19.236968 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-w8bd6" Jan 27 09:21:19 crc kubenswrapper[4985]: I0127 09:21:19.240137 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 09:21:19 crc kubenswrapper[4985]: I0127 09:21:19.240291 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 09:21:19 crc kubenswrapper[4985]: I0127 09:21:19.240359 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-s87fp" Jan 27 09:21:19 crc kubenswrapper[4985]: I0127 09:21:19.240731 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 09:21:19 crc kubenswrapper[4985]: I0127 09:21:19.309202 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-w8bd6"] Jan 27 09:21:19 crc kubenswrapper[4985]: I0127 09:21:19.374654 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a4781598-eb66-49c1-80c2-cf509881f0dd-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-w8bd6\" (UID: \"a4781598-eb66-49c1-80c2-cf509881f0dd\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-w8bd6" Jan 27 09:21:19 
crc kubenswrapper[4985]: I0127 09:21:19.374718 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7tv5\" (UniqueName: \"kubernetes.io/projected/a4781598-eb66-49c1-80c2-cf509881f0dd-kube-api-access-f7tv5\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-w8bd6\" (UID: \"a4781598-eb66-49c1-80c2-cf509881f0dd\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-w8bd6" Jan 27 09:21:19 crc kubenswrapper[4985]: I0127 09:21:19.374911 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a4781598-eb66-49c1-80c2-cf509881f0dd-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-w8bd6\" (UID: \"a4781598-eb66-49c1-80c2-cf509881f0dd\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-w8bd6" Jan 27 09:21:19 crc kubenswrapper[4985]: I0127 09:21:19.476964 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a4781598-eb66-49c1-80c2-cf509881f0dd-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-w8bd6\" (UID: \"a4781598-eb66-49c1-80c2-cf509881f0dd\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-w8bd6" Jan 27 09:21:19 crc kubenswrapper[4985]: I0127 09:21:19.477154 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a4781598-eb66-49c1-80c2-cf509881f0dd-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-w8bd6\" (UID: \"a4781598-eb66-49c1-80c2-cf509881f0dd\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-w8bd6" Jan 27 09:21:19 crc kubenswrapper[4985]: I0127 09:21:19.477199 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7tv5\" (UniqueName: 
\"kubernetes.io/projected/a4781598-eb66-49c1-80c2-cf509881f0dd-kube-api-access-f7tv5\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-w8bd6\" (UID: \"a4781598-eb66-49c1-80c2-cf509881f0dd\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-w8bd6" Jan 27 09:21:19 crc kubenswrapper[4985]: I0127 09:21:19.485643 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a4781598-eb66-49c1-80c2-cf509881f0dd-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-w8bd6\" (UID: \"a4781598-eb66-49c1-80c2-cf509881f0dd\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-w8bd6" Jan 27 09:21:19 crc kubenswrapper[4985]: I0127 09:21:19.486169 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a4781598-eb66-49c1-80c2-cf509881f0dd-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-w8bd6\" (UID: \"a4781598-eb66-49c1-80c2-cf509881f0dd\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-w8bd6" Jan 27 09:21:19 crc kubenswrapper[4985]: I0127 09:21:19.501425 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7tv5\" (UniqueName: \"kubernetes.io/projected/a4781598-eb66-49c1-80c2-cf509881f0dd-kube-api-access-f7tv5\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-w8bd6\" (UID: \"a4781598-eb66-49c1-80c2-cf509881f0dd\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-w8bd6" Jan 27 09:21:19 crc kubenswrapper[4985]: I0127 09:21:19.566956 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-w8bd6" Jan 27 09:21:20 crc kubenswrapper[4985]: I0127 09:21:20.241722 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-w8bd6"] Jan 27 09:21:20 crc kubenswrapper[4985]: W0127 09:21:20.244527 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda4781598_eb66_49c1_80c2_cf509881f0dd.slice/crio-905b2a5ac41c9a04d20f7099e4538c056995d2d15f1fd5083d0cb618b3274791 WatchSource:0}: Error finding container 905b2a5ac41c9a04d20f7099e4538c056995d2d15f1fd5083d0cb618b3274791: Status 404 returned error can't find the container with id 905b2a5ac41c9a04d20f7099e4538c056995d2d15f1fd5083d0cb618b3274791 Jan 27 09:21:20 crc kubenswrapper[4985]: I0127 09:21:20.248088 4985 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 09:21:21 crc kubenswrapper[4985]: I0127 09:21:21.193330 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-w8bd6" event={"ID":"a4781598-eb66-49c1-80c2-cf509881f0dd","Type":"ContainerStarted","Data":"905b2a5ac41c9a04d20f7099e4538c056995d2d15f1fd5083d0cb618b3274791"} Jan 27 09:21:22 crc kubenswrapper[4985]: I0127 09:21:22.209049 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-w8bd6" event={"ID":"a4781598-eb66-49c1-80c2-cf509881f0dd","Type":"ContainerStarted","Data":"011fa3904b2b291e8fa3be184f24283c260a8ef85be6cc4ed39381f031de9bdb"} Jan 27 09:21:40 crc kubenswrapper[4985]: I0127 09:21:40.046938 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-w8bd6" podStartSLOduration=20.361671754 podStartE2EDuration="21.04690952s" podCreationTimestamp="2026-01-27 09:21:19 
+0000 UTC" firstStartedPulling="2026-01-27 09:21:20.247762966 +0000 UTC m=+1664.538857807" lastFinishedPulling="2026-01-27 09:21:20.933000742 +0000 UTC m=+1665.224095573" observedRunningTime="2026-01-27 09:21:22.26205369 +0000 UTC m=+1666.553148531" watchObservedRunningTime="2026-01-27 09:21:40.04690952 +0000 UTC m=+1684.338004361" Jan 27 09:21:40 crc kubenswrapper[4985]: I0127 09:21:40.052643 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-ml6s7"] Jan 27 09:21:40 crc kubenswrapper[4985]: I0127 09:21:40.065264 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-ml6s7"] Jan 27 09:21:40 crc kubenswrapper[4985]: I0127 09:21:40.468845 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07f893c0-3ad5-46a5-b005-bcbdd13e7b09" path="/var/lib/kubelet/pods/07f893c0-3ad5-46a5-b005-bcbdd13e7b09/volumes" Jan 27 09:21:41 crc kubenswrapper[4985]: I0127 09:21:41.828249 4985 patch_prober.go:28] interesting pod/machine-config-daemon-lp9n5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 09:21:41 crc kubenswrapper[4985]: I0127 09:21:41.828301 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 09:21:41 crc kubenswrapper[4985]: I0127 09:21:41.828340 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" Jan 27 09:21:41 crc kubenswrapper[4985]: I0127 09:21:41.829085 4985 kuberuntime_manager.go:1027] "Message for Container of 
pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7ba28a0f021e3d5eea7e45a3d43bec2dd0d775a22e805e1915a3667c64bf2c54"} pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 09:21:41 crc kubenswrapper[4985]: I0127 09:21:41.829144 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" containerName="machine-config-daemon" containerID="cri-o://7ba28a0f021e3d5eea7e45a3d43bec2dd0d775a22e805e1915a3667c64bf2c54" gracePeriod=600 Jan 27 09:21:41 crc kubenswrapper[4985]: E0127 09:21:41.991349 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp9n5_openshift-machine-config-operator(c066dd2f-48d4-4f4f-935d-0e772678e610)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" Jan 27 09:21:42 crc kubenswrapper[4985]: I0127 09:21:42.054205 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-d487-account-create-update-hppt6"] Jan 27 09:21:42 crc kubenswrapper[4985]: I0127 09:21:42.067841 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-d487-account-create-update-hppt6"] Jan 27 09:21:42 crc kubenswrapper[4985]: I0127 09:21:42.419702 4985 generic.go:334] "Generic (PLEG): container finished" podID="c066dd2f-48d4-4f4f-935d-0e772678e610" containerID="7ba28a0f021e3d5eea7e45a3d43bec2dd0d775a22e805e1915a3667c64bf2c54" exitCode=0 Jan 27 09:21:42 crc kubenswrapper[4985]: I0127 09:21:42.419780 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" 
event={"ID":"c066dd2f-48d4-4f4f-935d-0e772678e610","Type":"ContainerDied","Data":"7ba28a0f021e3d5eea7e45a3d43bec2dd0d775a22e805e1915a3667c64bf2c54"} Jan 27 09:21:42 crc kubenswrapper[4985]: I0127 09:21:42.420096 4985 scope.go:117] "RemoveContainer" containerID="d055d7fe9763dfc4b99c0db32ce38e86fad249d2d222ca9eacd889ec0193a129" Jan 27 09:21:42 crc kubenswrapper[4985]: I0127 09:21:42.421082 4985 scope.go:117] "RemoveContainer" containerID="7ba28a0f021e3d5eea7e45a3d43bec2dd0d775a22e805e1915a3667c64bf2c54" Jan 27 09:21:42 crc kubenswrapper[4985]: E0127 09:21:42.422576 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp9n5_openshift-machine-config-operator(c066dd2f-48d4-4f4f-935d-0e772678e610)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" Jan 27 09:21:42 crc kubenswrapper[4985]: I0127 09:21:42.463574 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3166563-f008-4d79-911a-55c399e8d65d" path="/var/lib/kubelet/pods/e3166563-f008-4d79-911a-55c399e8d65d/volumes" Jan 27 09:21:43 crc kubenswrapper[4985]: I0127 09:21:43.042959 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-m6p82"] Jan 27 09:21:43 crc kubenswrapper[4985]: I0127 09:21:43.045919 4985 scope.go:117] "RemoveContainer" containerID="6c600e08e3a754feac595e009de9a80fa06eb08943cd147f17ba979e374cee78" Jan 27 09:21:43 crc kubenswrapper[4985]: I0127 09:21:43.053870 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-w6tww"] Jan 27 09:21:43 crc kubenswrapper[4985]: I0127 09:21:43.067653 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-1f54-account-create-update-pdcwv"] Jan 27 09:21:43 crc kubenswrapper[4985]: I0127 09:21:43.078169 4985 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-m6p82"] Jan 27 09:21:43 crc kubenswrapper[4985]: I0127 09:21:43.084232 4985 scope.go:117] "RemoveContainer" containerID="92250c43949659c35f14c82e2ce432bd37907b95b57df4adb4e9f523eb4434dc" Jan 27 09:21:43 crc kubenswrapper[4985]: I0127 09:21:43.088041 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-w6tww"] Jan 27 09:21:43 crc kubenswrapper[4985]: I0127 09:21:43.101093 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-rxmmb"] Jan 27 09:21:43 crc kubenswrapper[4985]: I0127 09:21:43.119632 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-1f54-account-create-update-pdcwv"] Jan 27 09:21:43 crc kubenswrapper[4985]: I0127 09:21:43.126015 4985 scope.go:117] "RemoveContainer" containerID="bb6eb1b31ffe8ac30fa75e4e6e13ad5b27db9a8b70fbaad36cdbfa5f355fa90a" Jan 27 09:21:43 crc kubenswrapper[4985]: I0127 09:21:43.129749 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-rxmmb"] Jan 27 09:21:43 crc kubenswrapper[4985]: I0127 09:21:43.170576 4985 scope.go:117] "RemoveContainer" containerID="54d6cb23c01b8d44de0f3b97d42f8b10f6d796a3c4be553ec1bb39ca8b3d6422" Jan 27 09:21:43 crc kubenswrapper[4985]: I0127 09:21:43.218256 4985 scope.go:117] "RemoveContainer" containerID="e5f2fa483b21fcf5e4d3910af59a3e8b975ce375b10a09febf7d1bf1aa3f9cf3" Jan 27 09:21:43 crc kubenswrapper[4985]: I0127 09:21:43.277728 4985 scope.go:117] "RemoveContainer" containerID="f91828d746db2c81d42da01dd12cd5522be6c09307536f3aac8e6a6611a8cf2a" Jan 27 09:21:43 crc kubenswrapper[4985]: I0127 09:21:43.337390 4985 scope.go:117] "RemoveContainer" containerID="5319b93bd008daec3fbffb1765044db6ef4d43645832ec02d8a40f0808e89dd3" Jan 27 09:21:43 crc kubenswrapper[4985]: I0127 09:21:43.356844 4985 scope.go:117] "RemoveContainer" containerID="d10a79d1d9448588e3b9cb2612006bb8f2575ab3b0feff0ae8b09006dd5e1695" Jan 
27 09:21:44 crc kubenswrapper[4985]: I0127 09:21:44.466280 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5246fdaf-1bb4-454a-bf60-b0372f3ae653" path="/var/lib/kubelet/pods/5246fdaf-1bb4-454a-bf60-b0372f3ae653/volumes" Jan 27 09:21:44 crc kubenswrapper[4985]: I0127 09:21:44.468223 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9c783dc-2abc-4a6a-99d0-d56e5826898f" path="/var/lib/kubelet/pods/b9c783dc-2abc-4a6a-99d0-d56e5826898f/volumes" Jan 27 09:21:44 crc kubenswrapper[4985]: I0127 09:21:44.468945 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c07ec733-232f-43bf-868d-d0a25592faec" path="/var/lib/kubelet/pods/c07ec733-232f-43bf-868d-d0a25592faec/volumes" Jan 27 09:21:44 crc kubenswrapper[4985]: I0127 09:21:44.469763 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dfa5488d-50d7-4f03-8187-83db05387838" path="/var/lib/kubelet/pods/dfa5488d-50d7-4f03-8187-83db05387838/volumes" Jan 27 09:21:47 crc kubenswrapper[4985]: I0127 09:21:47.027043 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-8473-account-create-update-p98cs"] Jan 27 09:21:47 crc kubenswrapper[4985]: I0127 09:21:47.037323 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-8473-account-create-update-p98cs"] Jan 27 09:21:48 crc kubenswrapper[4985]: I0127 09:21:48.033909 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-tsdq2"] Jan 27 09:21:48 crc kubenswrapper[4985]: I0127 09:21:48.042425 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-tsdq2"] Jan 27 09:21:48 crc kubenswrapper[4985]: I0127 09:21:48.461945 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0162801-70eb-4094-b33d-3063eb978eef" path="/var/lib/kubelet/pods/d0162801-70eb-4094-b33d-3063eb978eef/volumes" Jan 27 09:21:48 crc kubenswrapper[4985]: I0127 09:21:48.462819 4985 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb511c2d-40ad-47b6-a515-1101d1ff3b5f" path="/var/lib/kubelet/pods/eb511c2d-40ad-47b6-a515-1101d1ff3b5f/volumes" Jan 27 09:21:51 crc kubenswrapper[4985]: I0127 09:21:51.034590 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-jw5fd"] Jan 27 09:21:51 crc kubenswrapper[4985]: I0127 09:21:51.042092 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-jw5fd"] Jan 27 09:21:52 crc kubenswrapper[4985]: I0127 09:21:52.461690 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="933bd11f-35db-489e-aacf-b2ba95de3154" path="/var/lib/kubelet/pods/933bd11f-35db-489e-aacf-b2ba95de3154/volumes" Jan 27 09:21:56 crc kubenswrapper[4985]: I0127 09:21:56.459320 4985 scope.go:117] "RemoveContainer" containerID="7ba28a0f021e3d5eea7e45a3d43bec2dd0d775a22e805e1915a3667c64bf2c54" Jan 27 09:21:56 crc kubenswrapper[4985]: E0127 09:21:56.460492 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp9n5_openshift-machine-config-operator(c066dd2f-48d4-4f4f-935d-0e772678e610)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" Jan 27 09:22:07 crc kubenswrapper[4985]: I0127 09:22:07.452928 4985 scope.go:117] "RemoveContainer" containerID="7ba28a0f021e3d5eea7e45a3d43bec2dd0d775a22e805e1915a3667c64bf2c54" Jan 27 09:22:07 crc kubenswrapper[4985]: E0127 09:22:07.453777 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp9n5_openshift-machine-config-operator(c066dd2f-48d4-4f4f-935d-0e772678e610)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" Jan 27 09:22:22 crc kubenswrapper[4985]: I0127 09:22:22.452564 4985 scope.go:117] "RemoveContainer" containerID="7ba28a0f021e3d5eea7e45a3d43bec2dd0d775a22e805e1915a3667c64bf2c54" Jan 27 09:22:22 crc kubenswrapper[4985]: E0127 09:22:22.453260 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp9n5_openshift-machine-config-operator(c066dd2f-48d4-4f4f-935d-0e772678e610)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" Jan 27 09:22:25 crc kubenswrapper[4985]: I0127 09:22:25.045278 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-ftb9l"] Jan 27 09:22:25 crc kubenswrapper[4985]: I0127 09:22:25.052227 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-ftb9l"] Jan 27 09:22:26 crc kubenswrapper[4985]: I0127 09:22:26.465056 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82de7da8-bbcb-4fbd-b942-23cbe2e1bd8f" path="/var/lib/kubelet/pods/82de7da8-bbcb-4fbd-b942-23cbe2e1bd8f/volumes" Jan 27 09:22:36 crc kubenswrapper[4985]: I0127 09:22:36.458102 4985 scope.go:117] "RemoveContainer" containerID="7ba28a0f021e3d5eea7e45a3d43bec2dd0d775a22e805e1915a3667c64bf2c54" Jan 27 09:22:36 crc kubenswrapper[4985]: E0127 09:22:36.459027 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp9n5_openshift-machine-config-operator(c066dd2f-48d4-4f4f-935d-0e772678e610)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" 
podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" Jan 27 09:22:41 crc kubenswrapper[4985]: I0127 09:22:41.032304 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-5tmw8"] Jan 27 09:22:41 crc kubenswrapper[4985]: I0127 09:22:41.041487 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-pj7x6"] Jan 27 09:22:41 crc kubenswrapper[4985]: I0127 09:22:41.050109 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-5tmw8"] Jan 27 09:22:41 crc kubenswrapper[4985]: I0127 09:22:41.058200 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-pj7x6"] Jan 27 09:22:42 crc kubenswrapper[4985]: I0127 09:22:42.463268 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42a714c1-196c-4f83-b457-83847e9e97a6" path="/var/lib/kubelet/pods/42a714c1-196c-4f83-b457-83847e9e97a6/volumes" Jan 27 09:22:42 crc kubenswrapper[4985]: I0127 09:22:42.464108 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2de0653-57ca-4d6b-a8a7-10b39b9c4678" path="/var/lib/kubelet/pods/c2de0653-57ca-4d6b-a8a7-10b39b9c4678/volumes" Jan 27 09:22:43 crc kubenswrapper[4985]: I0127 09:22:43.532364 4985 scope.go:117] "RemoveContainer" containerID="7e492508355c2f0f68e2714d3c5cb6ffb387b30da8e92d1aeeae397c3c432322" Jan 27 09:22:43 crc kubenswrapper[4985]: I0127 09:22:43.554776 4985 scope.go:117] "RemoveContainer" containerID="5280aa4a3f1b2967de9440e3a75640468d2c10f53dbb874651e283cf10342c31" Jan 27 09:22:43 crc kubenswrapper[4985]: I0127 09:22:43.607180 4985 scope.go:117] "RemoveContainer" containerID="0885d30dadcc106099923fc0987e04782af9e54baa246194b717ec0e98ac9ba7" Jan 27 09:22:43 crc kubenswrapper[4985]: I0127 09:22:43.673371 4985 scope.go:117] "RemoveContainer" containerID="191a820ded102b1df0ce5041493282d9c44a28b8671275e904fb00a92138d524" Jan 27 09:22:43 crc kubenswrapper[4985]: I0127 09:22:43.746381 4985 scope.go:117] "RemoveContainer" 
containerID="f2497146faac80913a880fec95fff751d9cce625b3fa3b1128ca4cefe1e6d6a8" Jan 27 09:22:43 crc kubenswrapper[4985]: I0127 09:22:43.794757 4985 scope.go:117] "RemoveContainer" containerID="7e3000bd0d621e6f2521fa44a6c2e25b1bed217a4934b50a0ab211296390fece" Jan 27 09:22:43 crc kubenswrapper[4985]: I0127 09:22:43.835435 4985 scope.go:117] "RemoveContainer" containerID="c1b73e907b51abb61a6365bb340ea6d60923dbba468110ea419cce35020a2db2" Jan 27 09:22:43 crc kubenswrapper[4985]: I0127 09:22:43.867971 4985 scope.go:117] "RemoveContainer" containerID="c00d7d5aac014e820537aa9bc9904b206c40b5be7548a77332fef5cdaf859bf2" Jan 27 09:22:43 crc kubenswrapper[4985]: I0127 09:22:43.887389 4985 scope.go:117] "RemoveContainer" containerID="33adb145e1f1cd11f11fd29864020c69d36bc1c528c8d6d757292a858a0831c3" Jan 27 09:22:43 crc kubenswrapper[4985]: I0127 09:22:43.906955 4985 scope.go:117] "RemoveContainer" containerID="ca82b3a94a90753b3fd98871bab16a13a0b22f4bb5accb8ff68c96678e356ec4" Jan 27 09:22:49 crc kubenswrapper[4985]: I0127 09:22:49.452261 4985 scope.go:117] "RemoveContainer" containerID="7ba28a0f021e3d5eea7e45a3d43bec2dd0d775a22e805e1915a3667c64bf2c54" Jan 27 09:22:49 crc kubenswrapper[4985]: E0127 09:22:49.453143 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp9n5_openshift-machine-config-operator(c066dd2f-48d4-4f4f-935d-0e772678e610)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" Jan 27 09:22:50 crc kubenswrapper[4985]: I0127 09:22:50.044597 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-95tb6"] Jan 27 09:22:50 crc kubenswrapper[4985]: I0127 09:22:50.055598 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-pj5kt"] Jan 27 09:22:50 crc 
kubenswrapper[4985]: I0127 09:22:50.071586 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-95tb6"] Jan 27 09:22:50 crc kubenswrapper[4985]: I0127 09:22:50.085946 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-pj5kt"] Jan 27 09:22:50 crc kubenswrapper[4985]: I0127 09:22:50.464683 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31214ba8-5f89-4b54-9293-b6cd43c8cbe5" path="/var/lib/kubelet/pods/31214ba8-5f89-4b54-9293-b6cd43c8cbe5/volumes" Jan 27 09:22:50 crc kubenswrapper[4985]: I0127 09:22:50.465354 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9c4f8a3-0f30-4724-84bd-952a5d5170cb" path="/var/lib/kubelet/pods/a9c4f8a3-0f30-4724-84bd-952a5d5170cb/volumes" Jan 27 09:22:51 crc kubenswrapper[4985]: I0127 09:22:51.042028 4985 generic.go:334] "Generic (PLEG): container finished" podID="a4781598-eb66-49c1-80c2-cf509881f0dd" containerID="011fa3904b2b291e8fa3be184f24283c260a8ef85be6cc4ed39381f031de9bdb" exitCode=0 Jan 27 09:22:51 crc kubenswrapper[4985]: I0127 09:22:51.042118 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-w8bd6" event={"ID":"a4781598-eb66-49c1-80c2-cf509881f0dd","Type":"ContainerDied","Data":"011fa3904b2b291e8fa3be184f24283c260a8ef85be6cc4ed39381f031de9bdb"} Jan 27 09:22:52 crc kubenswrapper[4985]: I0127 09:22:52.449037 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-w8bd6" Jan 27 09:22:52 crc kubenswrapper[4985]: I0127 09:22:52.493325 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a4781598-eb66-49c1-80c2-cf509881f0dd-ssh-key-openstack-edpm-ipam\") pod \"a4781598-eb66-49c1-80c2-cf509881f0dd\" (UID: \"a4781598-eb66-49c1-80c2-cf509881f0dd\") " Jan 27 09:22:52 crc kubenswrapper[4985]: I0127 09:22:52.493909 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a4781598-eb66-49c1-80c2-cf509881f0dd-inventory\") pod \"a4781598-eb66-49c1-80c2-cf509881f0dd\" (UID: \"a4781598-eb66-49c1-80c2-cf509881f0dd\") " Jan 27 09:22:52 crc kubenswrapper[4985]: I0127 09:22:52.494425 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7tv5\" (UniqueName: \"kubernetes.io/projected/a4781598-eb66-49c1-80c2-cf509881f0dd-kube-api-access-f7tv5\") pod \"a4781598-eb66-49c1-80c2-cf509881f0dd\" (UID: \"a4781598-eb66-49c1-80c2-cf509881f0dd\") " Jan 27 09:22:52 crc kubenswrapper[4985]: I0127 09:22:52.503104 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4781598-eb66-49c1-80c2-cf509881f0dd-kube-api-access-f7tv5" (OuterVolumeSpecName: "kube-api-access-f7tv5") pod "a4781598-eb66-49c1-80c2-cf509881f0dd" (UID: "a4781598-eb66-49c1-80c2-cf509881f0dd"). InnerVolumeSpecName "kube-api-access-f7tv5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:22:52 crc kubenswrapper[4985]: I0127 09:22:52.525117 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4781598-eb66-49c1-80c2-cf509881f0dd-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a4781598-eb66-49c1-80c2-cf509881f0dd" (UID: "a4781598-eb66-49c1-80c2-cf509881f0dd"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:22:52 crc kubenswrapper[4985]: I0127 09:22:52.531870 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4781598-eb66-49c1-80c2-cf509881f0dd-inventory" (OuterVolumeSpecName: "inventory") pod "a4781598-eb66-49c1-80c2-cf509881f0dd" (UID: "a4781598-eb66-49c1-80c2-cf509881f0dd"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:22:52 crc kubenswrapper[4985]: I0127 09:22:52.596246 4985 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a4781598-eb66-49c1-80c2-cf509881f0dd-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 09:22:52 crc kubenswrapper[4985]: I0127 09:22:52.596296 4985 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a4781598-eb66-49c1-80c2-cf509881f0dd-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 09:22:52 crc kubenswrapper[4985]: I0127 09:22:52.596309 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f7tv5\" (UniqueName: \"kubernetes.io/projected/a4781598-eb66-49c1-80c2-cf509881f0dd-kube-api-access-f7tv5\") on node \"crc\" DevicePath \"\"" Jan 27 09:22:53 crc kubenswrapper[4985]: I0127 09:22:53.064258 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-w8bd6" 
event={"ID":"a4781598-eb66-49c1-80c2-cf509881f0dd","Type":"ContainerDied","Data":"905b2a5ac41c9a04d20f7099e4538c056995d2d15f1fd5083d0cb618b3274791"} Jan 27 09:22:53 crc kubenswrapper[4985]: I0127 09:22:53.064318 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-w8bd6" Jan 27 09:22:53 crc kubenswrapper[4985]: I0127 09:22:53.064338 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="905b2a5ac41c9a04d20f7099e4538c056995d2d15f1fd5083d0cb618b3274791" Jan 27 09:22:53 crc kubenswrapper[4985]: I0127 09:22:53.151302 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pcjj9"] Jan 27 09:22:53 crc kubenswrapper[4985]: E0127 09:22:53.151744 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4781598-eb66-49c1-80c2-cf509881f0dd" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 27 09:22:53 crc kubenswrapper[4985]: I0127 09:22:53.151771 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4781598-eb66-49c1-80c2-cf509881f0dd" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 27 09:22:53 crc kubenswrapper[4985]: I0127 09:22:53.152002 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4781598-eb66-49c1-80c2-cf509881f0dd" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 27 09:22:53 crc kubenswrapper[4985]: I0127 09:22:53.152731 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pcjj9" Jan 27 09:22:53 crc kubenswrapper[4985]: I0127 09:22:53.155159 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 09:22:53 crc kubenswrapper[4985]: I0127 09:22:53.155933 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 09:22:53 crc kubenswrapper[4985]: I0127 09:22:53.156116 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 09:22:53 crc kubenswrapper[4985]: I0127 09:22:53.156286 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-s87fp" Jan 27 09:22:53 crc kubenswrapper[4985]: I0127 09:22:53.178846 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pcjj9"] Jan 27 09:22:53 crc kubenswrapper[4985]: I0127 09:22:53.208564 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/47b0f7e9-81ac-4e4d-bfe5-d0b663cbf763-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-pcjj9\" (UID: \"47b0f7e9-81ac-4e4d-bfe5-d0b663cbf763\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pcjj9" Jan 27 09:22:53 crc kubenswrapper[4985]: I0127 09:22:53.208648 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksgfz\" (UniqueName: \"kubernetes.io/projected/47b0f7e9-81ac-4e4d-bfe5-d0b663cbf763-kube-api-access-ksgfz\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-pcjj9\" (UID: \"47b0f7e9-81ac-4e4d-bfe5-d0b663cbf763\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pcjj9" Jan 27 
09:22:53 crc kubenswrapper[4985]: I0127 09:22:53.208730 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/47b0f7e9-81ac-4e4d-bfe5-d0b663cbf763-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-pcjj9\" (UID: \"47b0f7e9-81ac-4e4d-bfe5-d0b663cbf763\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pcjj9" Jan 27 09:22:53 crc kubenswrapper[4985]: I0127 09:22:53.311531 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/47b0f7e9-81ac-4e4d-bfe5-d0b663cbf763-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-pcjj9\" (UID: \"47b0f7e9-81ac-4e4d-bfe5-d0b663cbf763\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pcjj9" Jan 27 09:22:53 crc kubenswrapper[4985]: I0127 09:22:53.311610 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksgfz\" (UniqueName: \"kubernetes.io/projected/47b0f7e9-81ac-4e4d-bfe5-d0b663cbf763-kube-api-access-ksgfz\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-pcjj9\" (UID: \"47b0f7e9-81ac-4e4d-bfe5-d0b663cbf763\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pcjj9" Jan 27 09:22:53 crc kubenswrapper[4985]: I0127 09:22:53.311675 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/47b0f7e9-81ac-4e4d-bfe5-d0b663cbf763-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-pcjj9\" (UID: \"47b0f7e9-81ac-4e4d-bfe5-d0b663cbf763\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pcjj9" Jan 27 09:22:53 crc kubenswrapper[4985]: I0127 09:22:53.317807 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" 
(UniqueName: \"kubernetes.io/secret/47b0f7e9-81ac-4e4d-bfe5-d0b663cbf763-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-pcjj9\" (UID: \"47b0f7e9-81ac-4e4d-bfe5-d0b663cbf763\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pcjj9" Jan 27 09:22:53 crc kubenswrapper[4985]: I0127 09:22:53.321262 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/47b0f7e9-81ac-4e4d-bfe5-d0b663cbf763-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-pcjj9\" (UID: \"47b0f7e9-81ac-4e4d-bfe5-d0b663cbf763\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pcjj9" Jan 27 09:22:53 crc kubenswrapper[4985]: I0127 09:22:53.331133 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksgfz\" (UniqueName: \"kubernetes.io/projected/47b0f7e9-81ac-4e4d-bfe5-d0b663cbf763-kube-api-access-ksgfz\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-pcjj9\" (UID: \"47b0f7e9-81ac-4e4d-bfe5-d0b663cbf763\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pcjj9" Jan 27 09:22:53 crc kubenswrapper[4985]: I0127 09:22:53.477202 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pcjj9" Jan 27 09:22:53 crc kubenswrapper[4985]: I0127 09:22:53.998419 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pcjj9"] Jan 27 09:22:54 crc kubenswrapper[4985]: I0127 09:22:54.073482 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pcjj9" event={"ID":"47b0f7e9-81ac-4e4d-bfe5-d0b663cbf763","Type":"ContainerStarted","Data":"74ff845407cafb373b64ef9b6a473fc60e0f76f8e1f32294db83cae2948ffaba"} Jan 27 09:22:56 crc kubenswrapper[4985]: I0127 09:22:56.092724 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pcjj9" event={"ID":"47b0f7e9-81ac-4e4d-bfe5-d0b663cbf763","Type":"ContainerStarted","Data":"e11c682ee4eb0e5d47aaf89519f3588ed159a32002ba7056aa8b4ed46174e664"} Jan 27 09:22:56 crc kubenswrapper[4985]: I0127 09:22:56.116980 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pcjj9" podStartSLOduration=1.7914775550000002 podStartE2EDuration="3.11695712s" podCreationTimestamp="2026-01-27 09:22:53 +0000 UTC" firstStartedPulling="2026-01-27 09:22:54.01064915 +0000 UTC m=+1758.301743991" lastFinishedPulling="2026-01-27 09:22:55.336128725 +0000 UTC m=+1759.627223556" observedRunningTime="2026-01-27 09:22:56.115944983 +0000 UTC m=+1760.407039824" watchObservedRunningTime="2026-01-27 09:22:56.11695712 +0000 UTC m=+1760.408051961" Jan 27 09:23:01 crc kubenswrapper[4985]: I0127 09:23:01.452600 4985 scope.go:117] "RemoveContainer" containerID="7ba28a0f021e3d5eea7e45a3d43bec2dd0d775a22e805e1915a3667c64bf2c54" Jan 27 09:23:01 crc kubenswrapper[4985]: E0127 09:23:01.453333 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp9n5_openshift-machine-config-operator(c066dd2f-48d4-4f4f-935d-0e772678e610)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" Jan 27 09:23:16 crc kubenswrapper[4985]: I0127 09:23:16.458046 4985 scope.go:117] "RemoveContainer" containerID="7ba28a0f021e3d5eea7e45a3d43bec2dd0d775a22e805e1915a3667c64bf2c54" Jan 27 09:23:16 crc kubenswrapper[4985]: E0127 09:23:16.459183 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp9n5_openshift-machine-config-operator(c066dd2f-48d4-4f4f-935d-0e772678e610)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" Jan 27 09:23:27 crc kubenswrapper[4985]: I0127 09:23:27.491204 4985 scope.go:117] "RemoveContainer" containerID="7ba28a0f021e3d5eea7e45a3d43bec2dd0d775a22e805e1915a3667c64bf2c54" Jan 27 09:23:27 crc kubenswrapper[4985]: E0127 09:23:27.513423 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp9n5_openshift-machine-config-operator(c066dd2f-48d4-4f4f-935d-0e772678e610)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" Jan 27 09:23:34 crc kubenswrapper[4985]: I0127 09:23:34.054320 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-7cxvr"] Jan 27 09:23:34 crc kubenswrapper[4985]: I0127 09:23:34.065097 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-api-9c44-account-create-update-hwhwd"] Jan 27 09:23:34 crc kubenswrapper[4985]: I0127 09:23:34.076838 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-l8v4l"] Jan 27 09:23:34 crc kubenswrapper[4985]: I0127 09:23:34.087339 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-7cxvr"] Jan 27 09:23:34 crc kubenswrapper[4985]: I0127 09:23:34.097607 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-l8v4l"] Jan 27 09:23:34 crc kubenswrapper[4985]: I0127 09:23:34.106947 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-xqrpv"] Jan 27 09:23:34 crc kubenswrapper[4985]: I0127 09:23:34.116875 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-9c44-account-create-update-hwhwd"] Jan 27 09:23:34 crc kubenswrapper[4985]: I0127 09:23:34.126933 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-xqrpv"] Jan 27 09:23:34 crc kubenswrapper[4985]: I0127 09:23:34.136394 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-1835-account-create-update-m5p5l"] Jan 27 09:23:34 crc kubenswrapper[4985]: I0127 09:23:34.143479 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-1835-account-create-update-m5p5l"] Jan 27 09:23:34 crc kubenswrapper[4985]: I0127 09:23:34.463391 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c906eea-e955-4881-8244-4f4cd7b84bf0" path="/var/lib/kubelet/pods/1c906eea-e955-4881-8244-4f4cd7b84bf0/volumes" Jan 27 09:23:34 crc kubenswrapper[4985]: I0127 09:23:34.464400 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1de7d9fd-8ea7-4a62-8325-627343d4c2b3" path="/var/lib/kubelet/pods/1de7d9fd-8ea7-4a62-8325-627343d4c2b3/volumes" Jan 27 09:23:34 crc kubenswrapper[4985]: I0127 09:23:34.465020 4985 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="4581f336-5495-48a5-b6a0-d35ea0818a50" path="/var/lib/kubelet/pods/4581f336-5495-48a5-b6a0-d35ea0818a50/volumes" Jan 27 09:23:34 crc kubenswrapper[4985]: I0127 09:23:34.465655 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49e0a278-f0bb-4b42-8c22-39b1b25c85af" path="/var/lib/kubelet/pods/49e0a278-f0bb-4b42-8c22-39b1b25c85af/volumes" Jan 27 09:23:34 crc kubenswrapper[4985]: I0127 09:23:34.466769 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdc13ae9-0535-44cc-832d-4c22f662cfc7" path="/var/lib/kubelet/pods/fdc13ae9-0535-44cc-832d-4c22f662cfc7/volumes" Jan 27 09:23:35 crc kubenswrapper[4985]: I0127 09:23:35.032183 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-ce6a-account-create-update-qc72j"] Jan 27 09:23:35 crc kubenswrapper[4985]: I0127 09:23:35.045826 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-ce6a-account-create-update-qc72j"] Jan 27 09:23:36 crc kubenswrapper[4985]: I0127 09:23:36.463123 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a211599-7c44-4939-8141-69dda5389ca7" path="/var/lib/kubelet/pods/4a211599-7c44-4939-8141-69dda5389ca7/volumes" Jan 27 09:23:41 crc kubenswrapper[4985]: I0127 09:23:41.452764 4985 scope.go:117] "RemoveContainer" containerID="7ba28a0f021e3d5eea7e45a3d43bec2dd0d775a22e805e1915a3667c64bf2c54" Jan 27 09:23:41 crc kubenswrapper[4985]: E0127 09:23:41.453592 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp9n5_openshift-machine-config-operator(c066dd2f-48d4-4f4f-935d-0e772678e610)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" Jan 27 09:23:44 crc kubenswrapper[4985]: I0127 
09:23:44.139669 4985 scope.go:117] "RemoveContainer" containerID="0d04c042e8fa4aae2d76b5c1dd9b9d55ee7af630555b0b6493eae9be6ed9e06c" Jan 27 09:23:44 crc kubenswrapper[4985]: I0127 09:23:44.166992 4985 scope.go:117] "RemoveContainer" containerID="e18ff5d92f1832b65df632606823de68214ed87d027d18c73966c96b2003c09d" Jan 27 09:23:44 crc kubenswrapper[4985]: I0127 09:23:44.218622 4985 scope.go:117] "RemoveContainer" containerID="2a3a7f684a4115fb53cddc0b056e8f2eec55b8fbe1c69e63c37205eccf7dab22" Jan 27 09:23:44 crc kubenswrapper[4985]: I0127 09:23:44.266078 4985 scope.go:117] "RemoveContainer" containerID="4cbb89a38bd20366f4e981e913b464ff23b597b1016961d22baad2c4ccafe017" Jan 27 09:23:44 crc kubenswrapper[4985]: I0127 09:23:44.309748 4985 scope.go:117] "RemoveContainer" containerID="784f17d774a1d9932ee223f3275e0b98c53f858c35331b17072ca0bb01b4e3b4" Jan 27 09:23:44 crc kubenswrapper[4985]: I0127 09:23:44.345246 4985 scope.go:117] "RemoveContainer" containerID="a5b8da9105508d5deab6637ff6ee8adae95123f02f44ae17c646964f2c0e1f56" Jan 27 09:23:44 crc kubenswrapper[4985]: I0127 09:23:44.410587 4985 scope.go:117] "RemoveContainer" containerID="6bf0794b2ef66d7a7d7f1e712c2278965c5f1b318acdc9690e17b20ede9203ae" Jan 27 09:23:44 crc kubenswrapper[4985]: I0127 09:23:44.435453 4985 scope.go:117] "RemoveContainer" containerID="18a9911333a575795b189bfb6d05e833f4a3e28314c0f95d0bd01906bc1e8887" Jan 27 09:23:55 crc kubenswrapper[4985]: I0127 09:23:55.453269 4985 scope.go:117] "RemoveContainer" containerID="7ba28a0f021e3d5eea7e45a3d43bec2dd0d775a22e805e1915a3667c64bf2c54" Jan 27 09:23:55 crc kubenswrapper[4985]: E0127 09:23:55.454133 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp9n5_openshift-machine-config-operator(c066dd2f-48d4-4f4f-935d-0e772678e610)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" Jan 27 09:24:04 crc kubenswrapper[4985]: I0127 09:24:04.686610 4985 generic.go:334] "Generic (PLEG): container finished" podID="47b0f7e9-81ac-4e4d-bfe5-d0b663cbf763" containerID="e11c682ee4eb0e5d47aaf89519f3588ed159a32002ba7056aa8b4ed46174e664" exitCode=0 Jan 27 09:24:04 crc kubenswrapper[4985]: I0127 09:24:04.686695 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pcjj9" event={"ID":"47b0f7e9-81ac-4e4d-bfe5-d0b663cbf763","Type":"ContainerDied","Data":"e11c682ee4eb0e5d47aaf89519f3588ed159a32002ba7056aa8b4ed46174e664"} Jan 27 09:24:06 crc kubenswrapper[4985]: I0127 09:24:06.162847 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pcjj9" Jan 27 09:24:06 crc kubenswrapper[4985]: I0127 09:24:06.229169 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ksgfz\" (UniqueName: \"kubernetes.io/projected/47b0f7e9-81ac-4e4d-bfe5-d0b663cbf763-kube-api-access-ksgfz\") pod \"47b0f7e9-81ac-4e4d-bfe5-d0b663cbf763\" (UID: \"47b0f7e9-81ac-4e4d-bfe5-d0b663cbf763\") " Jan 27 09:24:06 crc kubenswrapper[4985]: I0127 09:24:06.229266 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/47b0f7e9-81ac-4e4d-bfe5-d0b663cbf763-ssh-key-openstack-edpm-ipam\") pod \"47b0f7e9-81ac-4e4d-bfe5-d0b663cbf763\" (UID: \"47b0f7e9-81ac-4e4d-bfe5-d0b663cbf763\") " Jan 27 09:24:06 crc kubenswrapper[4985]: I0127 09:24:06.229637 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/47b0f7e9-81ac-4e4d-bfe5-d0b663cbf763-inventory\") pod \"47b0f7e9-81ac-4e4d-bfe5-d0b663cbf763\" (UID: 
\"47b0f7e9-81ac-4e4d-bfe5-d0b663cbf763\") " Jan 27 09:24:06 crc kubenswrapper[4985]: I0127 09:24:06.237979 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47b0f7e9-81ac-4e4d-bfe5-d0b663cbf763-kube-api-access-ksgfz" (OuterVolumeSpecName: "kube-api-access-ksgfz") pod "47b0f7e9-81ac-4e4d-bfe5-d0b663cbf763" (UID: "47b0f7e9-81ac-4e4d-bfe5-d0b663cbf763"). InnerVolumeSpecName "kube-api-access-ksgfz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:24:06 crc kubenswrapper[4985]: I0127 09:24:06.276283 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47b0f7e9-81ac-4e4d-bfe5-d0b663cbf763-inventory" (OuterVolumeSpecName: "inventory") pod "47b0f7e9-81ac-4e4d-bfe5-d0b663cbf763" (UID: "47b0f7e9-81ac-4e4d-bfe5-d0b663cbf763"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:24:06 crc kubenswrapper[4985]: I0127 09:24:06.278690 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47b0f7e9-81ac-4e4d-bfe5-d0b663cbf763-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "47b0f7e9-81ac-4e4d-bfe5-d0b663cbf763" (UID: "47b0f7e9-81ac-4e4d-bfe5-d0b663cbf763"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:24:06 crc kubenswrapper[4985]: I0127 09:24:06.333291 4985 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/47b0f7e9-81ac-4e4d-bfe5-d0b663cbf763-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 09:24:06 crc kubenswrapper[4985]: I0127 09:24:06.333362 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ksgfz\" (UniqueName: \"kubernetes.io/projected/47b0f7e9-81ac-4e4d-bfe5-d0b663cbf763-kube-api-access-ksgfz\") on node \"crc\" DevicePath \"\"" Jan 27 09:24:06 crc kubenswrapper[4985]: I0127 09:24:06.333382 4985 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/47b0f7e9-81ac-4e4d-bfe5-d0b663cbf763-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 09:24:06 crc kubenswrapper[4985]: I0127 09:24:06.713328 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pcjj9" event={"ID":"47b0f7e9-81ac-4e4d-bfe5-d0b663cbf763","Type":"ContainerDied","Data":"74ff845407cafb373b64ef9b6a473fc60e0f76f8e1f32294db83cae2948ffaba"} Jan 27 09:24:06 crc kubenswrapper[4985]: I0127 09:24:06.713841 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74ff845407cafb373b64ef9b6a473fc60e0f76f8e1f32294db83cae2948ffaba" Jan 27 09:24:06 crc kubenswrapper[4985]: I0127 09:24:06.713893 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pcjj9" Jan 27 09:24:06 crc kubenswrapper[4985]: I0127 09:24:06.821119 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dc4cm"] Jan 27 09:24:06 crc kubenswrapper[4985]: E0127 09:24:06.821850 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47b0f7e9-81ac-4e4d-bfe5-d0b663cbf763" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 27 09:24:06 crc kubenswrapper[4985]: I0127 09:24:06.821867 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="47b0f7e9-81ac-4e4d-bfe5-d0b663cbf763" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 27 09:24:06 crc kubenswrapper[4985]: I0127 09:24:06.822041 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="47b0f7e9-81ac-4e4d-bfe5-d0b663cbf763" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 27 09:24:06 crc kubenswrapper[4985]: I0127 09:24:06.822744 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dc4cm" Jan 27 09:24:06 crc kubenswrapper[4985]: I0127 09:24:06.825925 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 09:24:06 crc kubenswrapper[4985]: I0127 09:24:06.826129 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 09:24:06 crc kubenswrapper[4985]: I0127 09:24:06.826172 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 09:24:06 crc kubenswrapper[4985]: I0127 09:24:06.826231 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-s87fp" Jan 27 09:24:06 crc kubenswrapper[4985]: I0127 09:24:06.848339 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dc4cm"] Jan 27 09:24:06 crc kubenswrapper[4985]: I0127 09:24:06.948806 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aec3b5ab-cd2a-4a78-a971-3b7624c42450-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-dc4cm\" (UID: \"aec3b5ab-cd2a-4a78-a971-3b7624c42450\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dc4cm" Jan 27 09:24:06 crc kubenswrapper[4985]: I0127 09:24:06.948974 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/aec3b5ab-cd2a-4a78-a971-3b7624c42450-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-dc4cm\" (UID: \"aec3b5ab-cd2a-4a78-a971-3b7624c42450\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dc4cm" Jan 27 09:24:06 crc kubenswrapper[4985]: 
I0127 09:24:06.949020 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnsdg\" (UniqueName: \"kubernetes.io/projected/aec3b5ab-cd2a-4a78-a971-3b7624c42450-kube-api-access-hnsdg\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-dc4cm\" (UID: \"aec3b5ab-cd2a-4a78-a971-3b7624c42450\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dc4cm" Jan 27 09:24:07 crc kubenswrapper[4985]: I0127 09:24:07.050439 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aec3b5ab-cd2a-4a78-a971-3b7624c42450-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-dc4cm\" (UID: \"aec3b5ab-cd2a-4a78-a971-3b7624c42450\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dc4cm" Jan 27 09:24:07 crc kubenswrapper[4985]: I0127 09:24:07.050548 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/aec3b5ab-cd2a-4a78-a971-3b7624c42450-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-dc4cm\" (UID: \"aec3b5ab-cd2a-4a78-a971-3b7624c42450\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dc4cm" Jan 27 09:24:07 crc kubenswrapper[4985]: I0127 09:24:07.050591 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnsdg\" (UniqueName: \"kubernetes.io/projected/aec3b5ab-cd2a-4a78-a971-3b7624c42450-kube-api-access-hnsdg\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-dc4cm\" (UID: \"aec3b5ab-cd2a-4a78-a971-3b7624c42450\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dc4cm" Jan 27 09:24:07 crc kubenswrapper[4985]: I0127 09:24:07.057287 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/aec3b5ab-cd2a-4a78-a971-3b7624c42450-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-dc4cm\" (UID: \"aec3b5ab-cd2a-4a78-a971-3b7624c42450\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dc4cm" Jan 27 09:24:07 crc kubenswrapper[4985]: I0127 09:24:07.059350 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aec3b5ab-cd2a-4a78-a971-3b7624c42450-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-dc4cm\" (UID: \"aec3b5ab-cd2a-4a78-a971-3b7624c42450\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dc4cm" Jan 27 09:24:07 crc kubenswrapper[4985]: I0127 09:24:07.072121 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnsdg\" (UniqueName: \"kubernetes.io/projected/aec3b5ab-cd2a-4a78-a971-3b7624c42450-kube-api-access-hnsdg\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-dc4cm\" (UID: \"aec3b5ab-cd2a-4a78-a971-3b7624c42450\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dc4cm" Jan 27 09:24:07 crc kubenswrapper[4985]: I0127 09:24:07.144381 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dc4cm" Jan 27 09:24:07 crc kubenswrapper[4985]: I0127 09:24:07.500328 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dc4cm"] Jan 27 09:24:07 crc kubenswrapper[4985]: I0127 09:24:07.725703 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dc4cm" event={"ID":"aec3b5ab-cd2a-4a78-a971-3b7624c42450","Type":"ContainerStarted","Data":"25219bb03f01c70c2e78d01d35fbeb6ecc984b8c2a23ef3344a927f022a2db50"} Jan 27 09:24:08 crc kubenswrapper[4985]: I0127 09:24:08.452492 4985 scope.go:117] "RemoveContainer" containerID="7ba28a0f021e3d5eea7e45a3d43bec2dd0d775a22e805e1915a3667c64bf2c54" Jan 27 09:24:08 crc kubenswrapper[4985]: E0127 09:24:08.453590 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp9n5_openshift-machine-config-operator(c066dd2f-48d4-4f4f-935d-0e772678e610)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" Jan 27 09:24:08 crc kubenswrapper[4985]: I0127 09:24:08.741220 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dc4cm" event={"ID":"aec3b5ab-cd2a-4a78-a971-3b7624c42450","Type":"ContainerStarted","Data":"cb8483c6b44299d070992b35c186a94fb1286f357da21eb5ba8611d5e9afeeba"} Jan 27 09:24:08 crc kubenswrapper[4985]: I0127 09:24:08.772827 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dc4cm" podStartSLOduration=2.291145457 podStartE2EDuration="2.772809054s" podCreationTimestamp="2026-01-27 09:24:06 +0000 UTC" 
firstStartedPulling="2026-01-27 09:24:07.519828882 +0000 UTC m=+1831.810923723" lastFinishedPulling="2026-01-27 09:24:08.001492459 +0000 UTC m=+1832.292587320" observedRunningTime="2026-01-27 09:24:08.770323565 +0000 UTC m=+1833.061418426" watchObservedRunningTime="2026-01-27 09:24:08.772809054 +0000 UTC m=+1833.063903895" Jan 27 09:24:11 crc kubenswrapper[4985]: I0127 09:24:11.050567 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-dbnnn"] Jan 27 09:24:11 crc kubenswrapper[4985]: I0127 09:24:11.059500 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-dbnnn"] Jan 27 09:24:12 crc kubenswrapper[4985]: I0127 09:24:12.463440 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="488cf0d5-caf5-4a7c-966c-233b758c0dcd" path="/var/lib/kubelet/pods/488cf0d5-caf5-4a7c-966c-233b758c0dcd/volumes" Jan 27 09:24:13 crc kubenswrapper[4985]: I0127 09:24:13.787640 4985 generic.go:334] "Generic (PLEG): container finished" podID="aec3b5ab-cd2a-4a78-a971-3b7624c42450" containerID="cb8483c6b44299d070992b35c186a94fb1286f357da21eb5ba8611d5e9afeeba" exitCode=0 Jan 27 09:24:13 crc kubenswrapper[4985]: I0127 09:24:13.787724 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dc4cm" event={"ID":"aec3b5ab-cd2a-4a78-a971-3b7624c42450","Type":"ContainerDied","Data":"cb8483c6b44299d070992b35c186a94fb1286f357da21eb5ba8611d5e9afeeba"} Jan 27 09:24:15 crc kubenswrapper[4985]: I0127 09:24:15.244221 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dc4cm" Jan 27 09:24:15 crc kubenswrapper[4985]: I0127 09:24:15.326090 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hnsdg\" (UniqueName: \"kubernetes.io/projected/aec3b5ab-cd2a-4a78-a971-3b7624c42450-kube-api-access-hnsdg\") pod \"aec3b5ab-cd2a-4a78-a971-3b7624c42450\" (UID: \"aec3b5ab-cd2a-4a78-a971-3b7624c42450\") " Jan 27 09:24:15 crc kubenswrapper[4985]: I0127 09:24:15.326157 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/aec3b5ab-cd2a-4a78-a971-3b7624c42450-ssh-key-openstack-edpm-ipam\") pod \"aec3b5ab-cd2a-4a78-a971-3b7624c42450\" (UID: \"aec3b5ab-cd2a-4a78-a971-3b7624c42450\") " Jan 27 09:24:15 crc kubenswrapper[4985]: I0127 09:24:15.326337 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aec3b5ab-cd2a-4a78-a971-3b7624c42450-inventory\") pod \"aec3b5ab-cd2a-4a78-a971-3b7624c42450\" (UID: \"aec3b5ab-cd2a-4a78-a971-3b7624c42450\") " Jan 27 09:24:15 crc kubenswrapper[4985]: I0127 09:24:15.333833 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aec3b5ab-cd2a-4a78-a971-3b7624c42450-kube-api-access-hnsdg" (OuterVolumeSpecName: "kube-api-access-hnsdg") pod "aec3b5ab-cd2a-4a78-a971-3b7624c42450" (UID: "aec3b5ab-cd2a-4a78-a971-3b7624c42450"). InnerVolumeSpecName "kube-api-access-hnsdg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:24:15 crc kubenswrapper[4985]: I0127 09:24:15.354920 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aec3b5ab-cd2a-4a78-a971-3b7624c42450-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "aec3b5ab-cd2a-4a78-a971-3b7624c42450" (UID: "aec3b5ab-cd2a-4a78-a971-3b7624c42450"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:24:15 crc kubenswrapper[4985]: I0127 09:24:15.356943 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aec3b5ab-cd2a-4a78-a971-3b7624c42450-inventory" (OuterVolumeSpecName: "inventory") pod "aec3b5ab-cd2a-4a78-a971-3b7624c42450" (UID: "aec3b5ab-cd2a-4a78-a971-3b7624c42450"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:24:15 crc kubenswrapper[4985]: I0127 09:24:15.430094 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hnsdg\" (UniqueName: \"kubernetes.io/projected/aec3b5ab-cd2a-4a78-a971-3b7624c42450-kube-api-access-hnsdg\") on node \"crc\" DevicePath \"\"" Jan 27 09:24:15 crc kubenswrapper[4985]: I0127 09:24:15.430178 4985 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/aec3b5ab-cd2a-4a78-a971-3b7624c42450-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 09:24:15 crc kubenswrapper[4985]: I0127 09:24:15.430198 4985 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aec3b5ab-cd2a-4a78-a971-3b7624c42450-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 09:24:15 crc kubenswrapper[4985]: I0127 09:24:15.814172 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dc4cm" 
event={"ID":"aec3b5ab-cd2a-4a78-a971-3b7624c42450","Type":"ContainerDied","Data":"25219bb03f01c70c2e78d01d35fbeb6ecc984b8c2a23ef3344a927f022a2db50"} Jan 27 09:24:15 crc kubenswrapper[4985]: I0127 09:24:15.814215 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25219bb03f01c70c2e78d01d35fbeb6ecc984b8c2a23ef3344a927f022a2db50" Jan 27 09:24:15 crc kubenswrapper[4985]: I0127 09:24:15.814260 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dc4cm" Jan 27 09:24:15 crc kubenswrapper[4985]: I0127 09:24:15.882350 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-s824q"] Jan 27 09:24:15 crc kubenswrapper[4985]: E0127 09:24:15.883752 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aec3b5ab-cd2a-4a78-a971-3b7624c42450" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 27 09:24:15 crc kubenswrapper[4985]: I0127 09:24:15.883774 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="aec3b5ab-cd2a-4a78-a971-3b7624c42450" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 27 09:24:15 crc kubenswrapper[4985]: I0127 09:24:15.884130 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="aec3b5ab-cd2a-4a78-a971-3b7624c42450" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 27 09:24:15 crc kubenswrapper[4985]: I0127 09:24:15.885256 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s824q" Jan 27 09:24:15 crc kubenswrapper[4985]: I0127 09:24:15.890322 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 09:24:15 crc kubenswrapper[4985]: I0127 09:24:15.890443 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 09:24:15 crc kubenswrapper[4985]: I0127 09:24:15.890641 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 09:24:15 crc kubenswrapper[4985]: I0127 09:24:15.890766 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-s87fp" Jan 27 09:24:15 crc kubenswrapper[4985]: I0127 09:24:15.898637 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-s824q"] Jan 27 09:24:16 crc kubenswrapper[4985]: I0127 09:24:16.042947 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/184363e4-db1e-463c-bb4e-aea7cd0c849d-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-s824q\" (UID: \"184363e4-db1e-463c-bb4e-aea7cd0c849d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s824q" Jan 27 09:24:16 crc kubenswrapper[4985]: I0127 09:24:16.043086 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/184363e4-db1e-463c-bb4e-aea7cd0c849d-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-s824q\" (UID: \"184363e4-db1e-463c-bb4e-aea7cd0c849d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s824q" Jan 27 09:24:16 crc kubenswrapper[4985]: I0127 09:24:16.043119 4985 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnmdc\" (UniqueName: \"kubernetes.io/projected/184363e4-db1e-463c-bb4e-aea7cd0c849d-kube-api-access-mnmdc\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-s824q\" (UID: \"184363e4-db1e-463c-bb4e-aea7cd0c849d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s824q" Jan 27 09:24:16 crc kubenswrapper[4985]: I0127 09:24:16.145829 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/184363e4-db1e-463c-bb4e-aea7cd0c849d-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-s824q\" (UID: \"184363e4-db1e-463c-bb4e-aea7cd0c849d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s824q" Jan 27 09:24:16 crc kubenswrapper[4985]: I0127 09:24:16.146028 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/184363e4-db1e-463c-bb4e-aea7cd0c849d-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-s824q\" (UID: \"184363e4-db1e-463c-bb4e-aea7cd0c849d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s824q" Jan 27 09:24:16 crc kubenswrapper[4985]: I0127 09:24:16.146066 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnmdc\" (UniqueName: \"kubernetes.io/projected/184363e4-db1e-463c-bb4e-aea7cd0c849d-kube-api-access-mnmdc\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-s824q\" (UID: \"184363e4-db1e-463c-bb4e-aea7cd0c849d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s824q" Jan 27 09:24:16 crc kubenswrapper[4985]: I0127 09:24:16.152248 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/184363e4-db1e-463c-bb4e-aea7cd0c849d-inventory\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-s824q\" (UID: \"184363e4-db1e-463c-bb4e-aea7cd0c849d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s824q" Jan 27 09:24:16 crc kubenswrapper[4985]: I0127 09:24:16.156269 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/184363e4-db1e-463c-bb4e-aea7cd0c849d-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-s824q\" (UID: \"184363e4-db1e-463c-bb4e-aea7cd0c849d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s824q" Jan 27 09:24:16 crc kubenswrapper[4985]: I0127 09:24:16.162794 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnmdc\" (UniqueName: \"kubernetes.io/projected/184363e4-db1e-463c-bb4e-aea7cd0c849d-kube-api-access-mnmdc\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-s824q\" (UID: \"184363e4-db1e-463c-bb4e-aea7cd0c849d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s824q" Jan 27 09:24:16 crc kubenswrapper[4985]: I0127 09:24:16.204797 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s824q" Jan 27 09:24:16 crc kubenswrapper[4985]: I0127 09:24:16.727398 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-s824q"] Jan 27 09:24:16 crc kubenswrapper[4985]: I0127 09:24:16.823153 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s824q" event={"ID":"184363e4-db1e-463c-bb4e-aea7cd0c849d","Type":"ContainerStarted","Data":"18cb500551059f8ce36aac761e7c695eeb4f9a2254f8094da6c45a7553a4178c"} Jan 27 09:24:17 crc kubenswrapper[4985]: I0127 09:24:17.831702 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s824q" event={"ID":"184363e4-db1e-463c-bb4e-aea7cd0c849d","Type":"ContainerStarted","Data":"00f3f02d34718b653c68c95c096f63fb0060e824dcf002f8005f601770c992c3"} Jan 27 09:24:17 crc kubenswrapper[4985]: I0127 09:24:17.855259 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s824q" podStartSLOduration=2.436003441 podStartE2EDuration="2.855234601s" podCreationTimestamp="2026-01-27 09:24:15 +0000 UTC" firstStartedPulling="2026-01-27 09:24:16.737212761 +0000 UTC m=+1841.028307602" lastFinishedPulling="2026-01-27 09:24:17.156443921 +0000 UTC m=+1841.447538762" observedRunningTime="2026-01-27 09:24:17.846326176 +0000 UTC m=+1842.137421017" watchObservedRunningTime="2026-01-27 09:24:17.855234601 +0000 UTC m=+1842.146329442" Jan 27 09:24:21 crc kubenswrapper[4985]: I0127 09:24:21.451973 4985 scope.go:117] "RemoveContainer" containerID="7ba28a0f021e3d5eea7e45a3d43bec2dd0d775a22e805e1915a3667c64bf2c54" Jan 27 09:24:21 crc kubenswrapper[4985]: E0127 09:24:21.453128 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp9n5_openshift-machine-config-operator(c066dd2f-48d4-4f4f-935d-0e772678e610)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" Jan 27 09:24:33 crc kubenswrapper[4985]: I0127 09:24:33.049102 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-fhg5m"] Jan 27 09:24:33 crc kubenswrapper[4985]: I0127 09:24:33.057875 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-fhg5m"] Jan 27 09:24:34 crc kubenswrapper[4985]: I0127 09:24:34.035939 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-h542g"] Jan 27 09:24:34 crc kubenswrapper[4985]: I0127 09:24:34.043418 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-h542g"] Jan 27 09:24:34 crc kubenswrapper[4985]: I0127 09:24:34.462699 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87b772fa-eb86-4e1a-8f59-bc3c1748ec07" path="/var/lib/kubelet/pods/87b772fa-eb86-4e1a-8f59-bc3c1748ec07/volumes" Jan 27 09:24:34 crc kubenswrapper[4985]: I0127 09:24:34.463573 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f690b134-393f-40a7-b254-7b95dc81afcf" path="/var/lib/kubelet/pods/f690b134-393f-40a7-b254-7b95dc81afcf/volumes" Jan 27 09:24:36 crc kubenswrapper[4985]: I0127 09:24:36.459550 4985 scope.go:117] "RemoveContainer" containerID="7ba28a0f021e3d5eea7e45a3d43bec2dd0d775a22e805e1915a3667c64bf2c54" Jan 27 09:24:36 crc kubenswrapper[4985]: E0127 09:24:36.459854 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-lp9n5_openshift-machine-config-operator(c066dd2f-48d4-4f4f-935d-0e772678e610)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" Jan 27 09:24:44 crc kubenswrapper[4985]: I0127 09:24:44.597732 4985 scope.go:117] "RemoveContainer" containerID="99ea2c355c891717787ae12c5ef6ebffb92611dc1d13cfef5670b261a564babd" Jan 27 09:24:44 crc kubenswrapper[4985]: I0127 09:24:44.649375 4985 scope.go:117] "RemoveContainer" containerID="799387dc31912b7f540637f30187a0048a0b6cc03a987f98cded780ca2ce066c" Jan 27 09:24:44 crc kubenswrapper[4985]: I0127 09:24:44.720095 4985 scope.go:117] "RemoveContainer" containerID="6a7573126217ca48110120fde894f2b6ad2ea692912d430e18f51c77e5a99b04" Jan 27 09:24:51 crc kubenswrapper[4985]: I0127 09:24:51.453094 4985 scope.go:117] "RemoveContainer" containerID="7ba28a0f021e3d5eea7e45a3d43bec2dd0d775a22e805e1915a3667c64bf2c54" Jan 27 09:24:51 crc kubenswrapper[4985]: E0127 09:24:51.454437 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp9n5_openshift-machine-config-operator(c066dd2f-48d4-4f4f-935d-0e772678e610)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" Jan 27 09:24:55 crc kubenswrapper[4985]: I0127 09:24:55.183414 4985 generic.go:334] "Generic (PLEG): container finished" podID="184363e4-db1e-463c-bb4e-aea7cd0c849d" containerID="00f3f02d34718b653c68c95c096f63fb0060e824dcf002f8005f601770c992c3" exitCode=0 Jan 27 09:24:55 crc kubenswrapper[4985]: I0127 09:24:55.183492 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s824q" 
event={"ID":"184363e4-db1e-463c-bb4e-aea7cd0c849d","Type":"ContainerDied","Data":"00f3f02d34718b653c68c95c096f63fb0060e824dcf002f8005f601770c992c3"} Jan 27 09:24:56 crc kubenswrapper[4985]: I0127 09:24:56.679580 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s824q" Jan 27 09:24:56 crc kubenswrapper[4985]: I0127 09:24:56.870620 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/184363e4-db1e-463c-bb4e-aea7cd0c849d-ssh-key-openstack-edpm-ipam\") pod \"184363e4-db1e-463c-bb4e-aea7cd0c849d\" (UID: \"184363e4-db1e-463c-bb4e-aea7cd0c849d\") " Jan 27 09:24:56 crc kubenswrapper[4985]: I0127 09:24:56.870782 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/184363e4-db1e-463c-bb4e-aea7cd0c849d-inventory\") pod \"184363e4-db1e-463c-bb4e-aea7cd0c849d\" (UID: \"184363e4-db1e-463c-bb4e-aea7cd0c849d\") " Jan 27 09:24:56 crc kubenswrapper[4985]: I0127 09:24:56.871262 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnmdc\" (UniqueName: \"kubernetes.io/projected/184363e4-db1e-463c-bb4e-aea7cd0c849d-kube-api-access-mnmdc\") pod \"184363e4-db1e-463c-bb4e-aea7cd0c849d\" (UID: \"184363e4-db1e-463c-bb4e-aea7cd0c849d\") " Jan 27 09:24:56 crc kubenswrapper[4985]: I0127 09:24:56.892019 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/184363e4-db1e-463c-bb4e-aea7cd0c849d-kube-api-access-mnmdc" (OuterVolumeSpecName: "kube-api-access-mnmdc") pod "184363e4-db1e-463c-bb4e-aea7cd0c849d" (UID: "184363e4-db1e-463c-bb4e-aea7cd0c849d"). InnerVolumeSpecName "kube-api-access-mnmdc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:24:56 crc kubenswrapper[4985]: I0127 09:24:56.912334 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/184363e4-db1e-463c-bb4e-aea7cd0c849d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "184363e4-db1e-463c-bb4e-aea7cd0c849d" (UID: "184363e4-db1e-463c-bb4e-aea7cd0c849d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:24:56 crc kubenswrapper[4985]: I0127 09:24:56.912995 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/184363e4-db1e-463c-bb4e-aea7cd0c849d-inventory" (OuterVolumeSpecName: "inventory") pod "184363e4-db1e-463c-bb4e-aea7cd0c849d" (UID: "184363e4-db1e-463c-bb4e-aea7cd0c849d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:24:56 crc kubenswrapper[4985]: I0127 09:24:56.973301 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnmdc\" (UniqueName: \"kubernetes.io/projected/184363e4-db1e-463c-bb4e-aea7cd0c849d-kube-api-access-mnmdc\") on node \"crc\" DevicePath \"\"" Jan 27 09:24:56 crc kubenswrapper[4985]: I0127 09:24:56.973332 4985 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/184363e4-db1e-463c-bb4e-aea7cd0c849d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 09:24:56 crc kubenswrapper[4985]: I0127 09:24:56.973345 4985 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/184363e4-db1e-463c-bb4e-aea7cd0c849d-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 09:24:57 crc kubenswrapper[4985]: I0127 09:24:57.203808 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s824q" 
event={"ID":"184363e4-db1e-463c-bb4e-aea7cd0c849d","Type":"ContainerDied","Data":"18cb500551059f8ce36aac761e7c695eeb4f9a2254f8094da6c45a7553a4178c"} Jan 27 09:24:57 crc kubenswrapper[4985]: I0127 09:24:57.204161 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18cb500551059f8ce36aac761e7c695eeb4f9a2254f8094da6c45a7553a4178c" Jan 27 09:24:57 crc kubenswrapper[4985]: I0127 09:24:57.203884 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s824q" Jan 27 09:24:57 crc kubenswrapper[4985]: I0127 09:24:57.296888 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-j5xq8"] Jan 27 09:24:57 crc kubenswrapper[4985]: E0127 09:24:57.297417 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="184363e4-db1e-463c-bb4e-aea7cd0c849d" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 27 09:24:57 crc kubenswrapper[4985]: I0127 09:24:57.297438 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="184363e4-db1e-463c-bb4e-aea7cd0c849d" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 27 09:24:57 crc kubenswrapper[4985]: I0127 09:24:57.297642 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="184363e4-db1e-463c-bb4e-aea7cd0c849d" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 27 09:24:57 crc kubenswrapper[4985]: I0127 09:24:57.298306 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-j5xq8" Jan 27 09:24:57 crc kubenswrapper[4985]: I0127 09:24:57.300728 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 09:24:57 crc kubenswrapper[4985]: I0127 09:24:57.300902 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 09:24:57 crc kubenswrapper[4985]: I0127 09:24:57.301623 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 09:24:57 crc kubenswrapper[4985]: I0127 09:24:57.302230 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-s87fp" Jan 27 09:24:57 crc kubenswrapper[4985]: I0127 09:24:57.308773 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-j5xq8"] Jan 27 09:24:57 crc kubenswrapper[4985]: I0127 09:24:57.484524 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d810fad1-a264-46e4-9094-a86b77cec3c3-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-j5xq8\" (UID: \"d810fad1-a264-46e4-9094-a86b77cec3c3\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-j5xq8" Jan 27 09:24:57 crc kubenswrapper[4985]: I0127 09:24:57.484649 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d810fad1-a264-46e4-9094-a86b77cec3c3-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-j5xq8\" (UID: \"d810fad1-a264-46e4-9094-a86b77cec3c3\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-j5xq8" Jan 27 09:24:57 crc kubenswrapper[4985]: I0127 09:24:57.484795 
4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6z46\" (UniqueName: \"kubernetes.io/projected/d810fad1-a264-46e4-9094-a86b77cec3c3-kube-api-access-n6z46\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-j5xq8\" (UID: \"d810fad1-a264-46e4-9094-a86b77cec3c3\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-j5xq8" Jan 27 09:24:57 crc kubenswrapper[4985]: I0127 09:24:57.589436 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d810fad1-a264-46e4-9094-a86b77cec3c3-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-j5xq8\" (UID: \"d810fad1-a264-46e4-9094-a86b77cec3c3\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-j5xq8" Jan 27 09:24:57 crc kubenswrapper[4985]: I0127 09:24:57.589566 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d810fad1-a264-46e4-9094-a86b77cec3c3-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-j5xq8\" (UID: \"d810fad1-a264-46e4-9094-a86b77cec3c3\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-j5xq8" Jan 27 09:24:57 crc kubenswrapper[4985]: I0127 09:24:57.589601 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6z46\" (UniqueName: \"kubernetes.io/projected/d810fad1-a264-46e4-9094-a86b77cec3c3-kube-api-access-n6z46\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-j5xq8\" (UID: \"d810fad1-a264-46e4-9094-a86b77cec3c3\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-j5xq8" Jan 27 09:24:57 crc kubenswrapper[4985]: I0127 09:24:57.594401 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/d810fad1-a264-46e4-9094-a86b77cec3c3-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-j5xq8\" (UID: \"d810fad1-a264-46e4-9094-a86b77cec3c3\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-j5xq8" Jan 27 09:24:57 crc kubenswrapper[4985]: I0127 09:24:57.606166 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d810fad1-a264-46e4-9094-a86b77cec3c3-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-j5xq8\" (UID: \"d810fad1-a264-46e4-9094-a86b77cec3c3\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-j5xq8" Jan 27 09:24:57 crc kubenswrapper[4985]: I0127 09:24:57.607547 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6z46\" (UniqueName: \"kubernetes.io/projected/d810fad1-a264-46e4-9094-a86b77cec3c3-kube-api-access-n6z46\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-j5xq8\" (UID: \"d810fad1-a264-46e4-9094-a86b77cec3c3\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-j5xq8" Jan 27 09:24:57 crc kubenswrapper[4985]: I0127 09:24:57.633641 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-j5xq8" Jan 27 09:24:58 crc kubenswrapper[4985]: I0127 09:24:58.224253 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-j5xq8"] Jan 27 09:24:59 crc kubenswrapper[4985]: I0127 09:24:59.225485 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-j5xq8" event={"ID":"d810fad1-a264-46e4-9094-a86b77cec3c3","Type":"ContainerStarted","Data":"bd517b629428142a76f739e368194b101e73d8acb83d533202b725159e823956"} Jan 27 09:25:00 crc kubenswrapper[4985]: I0127 09:25:00.233770 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-j5xq8" event={"ID":"d810fad1-a264-46e4-9094-a86b77cec3c3","Type":"ContainerStarted","Data":"e18251f25fbaf3c372af93e7986a9c02f0e5446120c3a91857f3ccbd0a7ac4ad"} Jan 27 09:25:00 crc kubenswrapper[4985]: I0127 09:25:00.248347 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-j5xq8" podStartSLOduration=2.464807956 podStartE2EDuration="3.248326955s" podCreationTimestamp="2026-01-27 09:24:57 +0000 UTC" firstStartedPulling="2026-01-27 09:24:58.228299987 +0000 UTC m=+1882.519394828" lastFinishedPulling="2026-01-27 09:24:59.011818986 +0000 UTC m=+1883.302913827" observedRunningTime="2026-01-27 09:25:00.247376198 +0000 UTC m=+1884.538471039" watchObservedRunningTime="2026-01-27 09:25:00.248326955 +0000 UTC m=+1884.539421816" Jan 27 09:25:04 crc kubenswrapper[4985]: I0127 09:25:04.452024 4985 scope.go:117] "RemoveContainer" containerID="7ba28a0f021e3d5eea7e45a3d43bec2dd0d775a22e805e1915a3667c64bf2c54" Jan 27 09:25:04 crc kubenswrapper[4985]: E0127 09:25:04.452765 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp9n5_openshift-machine-config-operator(c066dd2f-48d4-4f4f-935d-0e772678e610)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" Jan 27 09:25:18 crc kubenswrapper[4985]: I0127 09:25:18.050752 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-crzqt"] Jan 27 09:25:18 crc kubenswrapper[4985]: I0127 09:25:18.060343 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-crzqt"] Jan 27 09:25:18 crc kubenswrapper[4985]: I0127 09:25:18.463027 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f83a7b31-1947-4c74-8770-86b7a6906c1b" path="/var/lib/kubelet/pods/f83a7b31-1947-4c74-8770-86b7a6906c1b/volumes" Jan 27 09:25:19 crc kubenswrapper[4985]: I0127 09:25:19.452311 4985 scope.go:117] "RemoveContainer" containerID="7ba28a0f021e3d5eea7e45a3d43bec2dd0d775a22e805e1915a3667c64bf2c54" Jan 27 09:25:19 crc kubenswrapper[4985]: E0127 09:25:19.452690 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp9n5_openshift-machine-config-operator(c066dd2f-48d4-4f4f-935d-0e772678e610)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" Jan 27 09:25:30 crc kubenswrapper[4985]: I0127 09:25:30.453249 4985 scope.go:117] "RemoveContainer" containerID="7ba28a0f021e3d5eea7e45a3d43bec2dd0d775a22e805e1915a3667c64bf2c54" Jan 27 09:25:30 crc kubenswrapper[4985]: E0127 09:25:30.454384 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-lp9n5_openshift-machine-config-operator(c066dd2f-48d4-4f4f-935d-0e772678e610)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" Jan 27 09:25:44 crc kubenswrapper[4985]: I0127 09:25:44.817407 4985 scope.go:117] "RemoveContainer" containerID="2586e2ec8d0c89c8c74495be9ac91f7f21d31e3e75107fcecb56959458091589" Jan 27 09:25:45 crc kubenswrapper[4985]: I0127 09:25:45.452560 4985 scope.go:117] "RemoveContainer" containerID="7ba28a0f021e3d5eea7e45a3d43bec2dd0d775a22e805e1915a3667c64bf2c54" Jan 27 09:25:45 crc kubenswrapper[4985]: E0127 09:25:45.452973 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp9n5_openshift-machine-config-operator(c066dd2f-48d4-4f4f-935d-0e772678e610)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" Jan 27 09:25:54 crc kubenswrapper[4985]: I0127 09:25:54.754228 4985 generic.go:334] "Generic (PLEG): container finished" podID="d810fad1-a264-46e4-9094-a86b77cec3c3" containerID="e18251f25fbaf3c372af93e7986a9c02f0e5446120c3a91857f3ccbd0a7ac4ad" exitCode=0 Jan 27 09:25:54 crc kubenswrapper[4985]: I0127 09:25:54.754324 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-j5xq8" event={"ID":"d810fad1-a264-46e4-9094-a86b77cec3c3","Type":"ContainerDied","Data":"e18251f25fbaf3c372af93e7986a9c02f0e5446120c3a91857f3ccbd0a7ac4ad"} Jan 27 09:25:56 crc kubenswrapper[4985]: I0127 09:25:56.242751 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-j5xq8" Jan 27 09:25:56 crc kubenswrapper[4985]: I0127 09:25:56.405248 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6z46\" (UniqueName: \"kubernetes.io/projected/d810fad1-a264-46e4-9094-a86b77cec3c3-kube-api-access-n6z46\") pod \"d810fad1-a264-46e4-9094-a86b77cec3c3\" (UID: \"d810fad1-a264-46e4-9094-a86b77cec3c3\") " Jan 27 09:25:56 crc kubenswrapper[4985]: I0127 09:25:56.405404 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d810fad1-a264-46e4-9094-a86b77cec3c3-inventory\") pod \"d810fad1-a264-46e4-9094-a86b77cec3c3\" (UID: \"d810fad1-a264-46e4-9094-a86b77cec3c3\") " Jan 27 09:25:56 crc kubenswrapper[4985]: I0127 09:25:56.405450 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d810fad1-a264-46e4-9094-a86b77cec3c3-ssh-key-openstack-edpm-ipam\") pod \"d810fad1-a264-46e4-9094-a86b77cec3c3\" (UID: \"d810fad1-a264-46e4-9094-a86b77cec3c3\") " Jan 27 09:25:56 crc kubenswrapper[4985]: I0127 09:25:56.419063 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d810fad1-a264-46e4-9094-a86b77cec3c3-kube-api-access-n6z46" (OuterVolumeSpecName: "kube-api-access-n6z46") pod "d810fad1-a264-46e4-9094-a86b77cec3c3" (UID: "d810fad1-a264-46e4-9094-a86b77cec3c3"). InnerVolumeSpecName "kube-api-access-n6z46". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:25:56 crc kubenswrapper[4985]: I0127 09:25:56.431875 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d810fad1-a264-46e4-9094-a86b77cec3c3-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d810fad1-a264-46e4-9094-a86b77cec3c3" (UID: "d810fad1-a264-46e4-9094-a86b77cec3c3"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:25:56 crc kubenswrapper[4985]: I0127 09:25:56.448191 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d810fad1-a264-46e4-9094-a86b77cec3c3-inventory" (OuterVolumeSpecName: "inventory") pod "d810fad1-a264-46e4-9094-a86b77cec3c3" (UID: "d810fad1-a264-46e4-9094-a86b77cec3c3"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:25:56 crc kubenswrapper[4985]: I0127 09:25:56.508261 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n6z46\" (UniqueName: \"kubernetes.io/projected/d810fad1-a264-46e4-9094-a86b77cec3c3-kube-api-access-n6z46\") on node \"crc\" DevicePath \"\"" Jan 27 09:25:56 crc kubenswrapper[4985]: I0127 09:25:56.508302 4985 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d810fad1-a264-46e4-9094-a86b77cec3c3-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 09:25:56 crc kubenswrapper[4985]: I0127 09:25:56.508313 4985 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d810fad1-a264-46e4-9094-a86b77cec3c3-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 09:25:56 crc kubenswrapper[4985]: I0127 09:25:56.773749 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-j5xq8" 
event={"ID":"d810fad1-a264-46e4-9094-a86b77cec3c3","Type":"ContainerDied","Data":"bd517b629428142a76f739e368194b101e73d8acb83d533202b725159e823956"} Jan 27 09:25:56 crc kubenswrapper[4985]: I0127 09:25:56.773788 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd517b629428142a76f739e368194b101e73d8acb83d533202b725159e823956" Jan 27 09:25:56 crc kubenswrapper[4985]: I0127 09:25:56.773880 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-j5xq8" Jan 27 09:25:56 crc kubenswrapper[4985]: I0127 09:25:56.883184 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-hp4rl"] Jan 27 09:25:56 crc kubenswrapper[4985]: E0127 09:25:56.884344 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d810fad1-a264-46e4-9094-a86b77cec3c3" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 27 09:25:56 crc kubenswrapper[4985]: I0127 09:25:56.884378 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="d810fad1-a264-46e4-9094-a86b77cec3c3" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 27 09:25:56 crc kubenswrapper[4985]: I0127 09:25:56.885771 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="d810fad1-a264-46e4-9094-a86b77cec3c3" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 27 09:25:56 crc kubenswrapper[4985]: I0127 09:25:56.887416 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-hp4rl" Jan 27 09:25:56 crc kubenswrapper[4985]: I0127 09:25:56.892228 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-s87fp" Jan 27 09:25:56 crc kubenswrapper[4985]: I0127 09:25:56.892745 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 09:25:56 crc kubenswrapper[4985]: I0127 09:25:56.893048 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 09:25:56 crc kubenswrapper[4985]: I0127 09:25:56.893416 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 09:25:56 crc kubenswrapper[4985]: I0127 09:25:56.941983 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-hp4rl"] Jan 27 09:25:57 crc kubenswrapper[4985]: I0127 09:25:57.018048 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c987829b-664b-4613-a0d2-04bc23b2c0bb-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-hp4rl\" (UID: \"c987829b-664b-4613-a0d2-04bc23b2c0bb\") " pod="openstack/ssh-known-hosts-edpm-deployment-hp4rl" Jan 27 09:25:57 crc kubenswrapper[4985]: I0127 09:25:57.018394 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c987829b-664b-4613-a0d2-04bc23b2c0bb-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-hp4rl\" (UID: \"c987829b-664b-4613-a0d2-04bc23b2c0bb\") " pod="openstack/ssh-known-hosts-edpm-deployment-hp4rl" Jan 27 09:25:57 crc kubenswrapper[4985]: I0127 09:25:57.018506 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-s6s7q\" (UniqueName: \"kubernetes.io/projected/c987829b-664b-4613-a0d2-04bc23b2c0bb-kube-api-access-s6s7q\") pod \"ssh-known-hosts-edpm-deployment-hp4rl\" (UID: \"c987829b-664b-4613-a0d2-04bc23b2c0bb\") " pod="openstack/ssh-known-hosts-edpm-deployment-hp4rl" Jan 27 09:25:57 crc kubenswrapper[4985]: I0127 09:25:57.120121 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c987829b-664b-4613-a0d2-04bc23b2c0bb-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-hp4rl\" (UID: \"c987829b-664b-4613-a0d2-04bc23b2c0bb\") " pod="openstack/ssh-known-hosts-edpm-deployment-hp4rl" Jan 27 09:25:57 crc kubenswrapper[4985]: I0127 09:25:57.120277 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c987829b-664b-4613-a0d2-04bc23b2c0bb-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-hp4rl\" (UID: \"c987829b-664b-4613-a0d2-04bc23b2c0bb\") " pod="openstack/ssh-known-hosts-edpm-deployment-hp4rl" Jan 27 09:25:57 crc kubenswrapper[4985]: I0127 09:25:57.120313 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6s7q\" (UniqueName: \"kubernetes.io/projected/c987829b-664b-4613-a0d2-04bc23b2c0bb-kube-api-access-s6s7q\") pod \"ssh-known-hosts-edpm-deployment-hp4rl\" (UID: \"c987829b-664b-4613-a0d2-04bc23b2c0bb\") " pod="openstack/ssh-known-hosts-edpm-deployment-hp4rl" Jan 27 09:25:57 crc kubenswrapper[4985]: I0127 09:25:57.124991 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c987829b-664b-4613-a0d2-04bc23b2c0bb-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-hp4rl\" (UID: \"c987829b-664b-4613-a0d2-04bc23b2c0bb\") " pod="openstack/ssh-known-hosts-edpm-deployment-hp4rl" Jan 27 09:25:57 crc kubenswrapper[4985]: I0127 09:25:57.125497 4985 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c987829b-664b-4613-a0d2-04bc23b2c0bb-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-hp4rl\" (UID: \"c987829b-664b-4613-a0d2-04bc23b2c0bb\") " pod="openstack/ssh-known-hosts-edpm-deployment-hp4rl" Jan 27 09:25:57 crc kubenswrapper[4985]: I0127 09:25:57.137875 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6s7q\" (UniqueName: \"kubernetes.io/projected/c987829b-664b-4613-a0d2-04bc23b2c0bb-kube-api-access-s6s7q\") pod \"ssh-known-hosts-edpm-deployment-hp4rl\" (UID: \"c987829b-664b-4613-a0d2-04bc23b2c0bb\") " pod="openstack/ssh-known-hosts-edpm-deployment-hp4rl" Jan 27 09:25:57 crc kubenswrapper[4985]: I0127 09:25:57.248996 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-hp4rl" Jan 27 09:25:57 crc kubenswrapper[4985]: I0127 09:25:57.920362 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-hp4rl"] Jan 27 09:25:58 crc kubenswrapper[4985]: I0127 09:25:58.792234 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-hp4rl" event={"ID":"c987829b-664b-4613-a0d2-04bc23b2c0bb","Type":"ContainerStarted","Data":"06b687ba67ad78d7fc824a95c4b57c1cc3be151c33425b0a1ffc3f41a564f2ea"} Jan 27 09:25:59 crc kubenswrapper[4985]: I0127 09:25:59.124359 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tfvx4"] Jan 27 09:25:59 crc kubenswrapper[4985]: I0127 09:25:59.127292 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tfvx4" Jan 27 09:25:59 crc kubenswrapper[4985]: I0127 09:25:59.139012 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tfvx4"] Jan 27 09:25:59 crc kubenswrapper[4985]: I0127 09:25:59.263643 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03798f85-696d-434c-ac93-e4277e55687f-utilities\") pod \"redhat-operators-tfvx4\" (UID: \"03798f85-696d-434c-ac93-e4277e55687f\") " pod="openshift-marketplace/redhat-operators-tfvx4" Jan 27 09:25:59 crc kubenswrapper[4985]: I0127 09:25:59.263750 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03798f85-696d-434c-ac93-e4277e55687f-catalog-content\") pod \"redhat-operators-tfvx4\" (UID: \"03798f85-696d-434c-ac93-e4277e55687f\") " pod="openshift-marketplace/redhat-operators-tfvx4" Jan 27 09:25:59 crc kubenswrapper[4985]: I0127 09:25:59.263792 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvx6d\" (UniqueName: \"kubernetes.io/projected/03798f85-696d-434c-ac93-e4277e55687f-kube-api-access-cvx6d\") pod \"redhat-operators-tfvx4\" (UID: \"03798f85-696d-434c-ac93-e4277e55687f\") " pod="openshift-marketplace/redhat-operators-tfvx4" Jan 27 09:25:59 crc kubenswrapper[4985]: I0127 09:25:59.365966 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03798f85-696d-434c-ac93-e4277e55687f-catalog-content\") pod \"redhat-operators-tfvx4\" (UID: \"03798f85-696d-434c-ac93-e4277e55687f\") " pod="openshift-marketplace/redhat-operators-tfvx4" Jan 27 09:25:59 crc kubenswrapper[4985]: I0127 09:25:59.366047 4985 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-cvx6d\" (UniqueName: \"kubernetes.io/projected/03798f85-696d-434c-ac93-e4277e55687f-kube-api-access-cvx6d\") pod \"redhat-operators-tfvx4\" (UID: \"03798f85-696d-434c-ac93-e4277e55687f\") " pod="openshift-marketplace/redhat-operators-tfvx4" Jan 27 09:25:59 crc kubenswrapper[4985]: I0127 09:25:59.366186 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03798f85-696d-434c-ac93-e4277e55687f-utilities\") pod \"redhat-operators-tfvx4\" (UID: \"03798f85-696d-434c-ac93-e4277e55687f\") " pod="openshift-marketplace/redhat-operators-tfvx4" Jan 27 09:25:59 crc kubenswrapper[4985]: I0127 09:25:59.366743 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03798f85-696d-434c-ac93-e4277e55687f-utilities\") pod \"redhat-operators-tfvx4\" (UID: \"03798f85-696d-434c-ac93-e4277e55687f\") " pod="openshift-marketplace/redhat-operators-tfvx4" Jan 27 09:25:59 crc kubenswrapper[4985]: I0127 09:25:59.366741 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03798f85-696d-434c-ac93-e4277e55687f-catalog-content\") pod \"redhat-operators-tfvx4\" (UID: \"03798f85-696d-434c-ac93-e4277e55687f\") " pod="openshift-marketplace/redhat-operators-tfvx4" Jan 27 09:25:59 crc kubenswrapper[4985]: I0127 09:25:59.396973 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvx6d\" (UniqueName: \"kubernetes.io/projected/03798f85-696d-434c-ac93-e4277e55687f-kube-api-access-cvx6d\") pod \"redhat-operators-tfvx4\" (UID: \"03798f85-696d-434c-ac93-e4277e55687f\") " pod="openshift-marketplace/redhat-operators-tfvx4" Jan 27 09:25:59 crc kubenswrapper[4985]: I0127 09:25:59.448724 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tfvx4" Jan 27 09:25:59 crc kubenswrapper[4985]: I0127 09:25:59.452369 4985 scope.go:117] "RemoveContainer" containerID="7ba28a0f021e3d5eea7e45a3d43bec2dd0d775a22e805e1915a3667c64bf2c54" Jan 27 09:25:59 crc kubenswrapper[4985]: E0127 09:25:59.452641 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp9n5_openshift-machine-config-operator(c066dd2f-48d4-4f4f-935d-0e772678e610)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" Jan 27 09:25:59 crc kubenswrapper[4985]: I0127 09:25:59.801375 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-hp4rl" event={"ID":"c987829b-664b-4613-a0d2-04bc23b2c0bb","Type":"ContainerStarted","Data":"f843b8755625b8fdb2888e0fcc4310f63082eff265d72658c44b4df531cc9a8e"} Jan 27 09:25:59 crc kubenswrapper[4985]: I0127 09:25:59.820193 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-hp4rl" podStartSLOduration=3.155065845 podStartE2EDuration="3.820176068s" podCreationTimestamp="2026-01-27 09:25:56 +0000 UTC" firstStartedPulling="2026-01-27 09:25:57.933995021 +0000 UTC m=+1942.225089862" lastFinishedPulling="2026-01-27 09:25:58.599105244 +0000 UTC m=+1942.890200085" observedRunningTime="2026-01-27 09:25:59.81697146 +0000 UTC m=+1944.108066331" watchObservedRunningTime="2026-01-27 09:25:59.820176068 +0000 UTC m=+1944.111270899" Jan 27 09:25:59 crc kubenswrapper[4985]: I0127 09:25:59.886815 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tfvx4"] Jan 27 09:25:59 crc kubenswrapper[4985]: W0127 09:25:59.887469 4985 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod03798f85_696d_434c_ac93_e4277e55687f.slice/crio-e55b00e41f7f2f185f81089a02fccc01629b54b3893cc5243ac67c255ffc95b7 WatchSource:0}: Error finding container e55b00e41f7f2f185f81089a02fccc01629b54b3893cc5243ac67c255ffc95b7: Status 404 returned error can't find the container with id e55b00e41f7f2f185f81089a02fccc01629b54b3893cc5243ac67c255ffc95b7 Jan 27 09:26:00 crc kubenswrapper[4985]: I0127 09:26:00.814191 4985 generic.go:334] "Generic (PLEG): container finished" podID="03798f85-696d-434c-ac93-e4277e55687f" containerID="726b55001406ab6fe0a97e344eac7f7e7c0a58874e48ed8c87d400ee60cab1aa" exitCode=0 Jan 27 09:26:00 crc kubenswrapper[4985]: I0127 09:26:00.814267 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tfvx4" event={"ID":"03798f85-696d-434c-ac93-e4277e55687f","Type":"ContainerDied","Data":"726b55001406ab6fe0a97e344eac7f7e7c0a58874e48ed8c87d400ee60cab1aa"} Jan 27 09:26:00 crc kubenswrapper[4985]: I0127 09:26:00.814580 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tfvx4" event={"ID":"03798f85-696d-434c-ac93-e4277e55687f","Type":"ContainerStarted","Data":"e55b00e41f7f2f185f81089a02fccc01629b54b3893cc5243ac67c255ffc95b7"} Jan 27 09:26:01 crc kubenswrapper[4985]: I0127 09:26:01.329075 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2zprb"] Jan 27 09:26:01 crc kubenswrapper[4985]: I0127 09:26:01.332449 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2zprb" Jan 27 09:26:01 crc kubenswrapper[4985]: I0127 09:26:01.353949 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2zprb"] Jan 27 09:26:01 crc kubenswrapper[4985]: I0127 09:26:01.408645 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65z6x\" (UniqueName: \"kubernetes.io/projected/1446c56b-735b-42bc-9c5f-04c55b11bf4c-kube-api-access-65z6x\") pod \"certified-operators-2zprb\" (UID: \"1446c56b-735b-42bc-9c5f-04c55b11bf4c\") " pod="openshift-marketplace/certified-operators-2zprb" Jan 27 09:26:01 crc kubenswrapper[4985]: I0127 09:26:01.408901 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1446c56b-735b-42bc-9c5f-04c55b11bf4c-utilities\") pod \"certified-operators-2zprb\" (UID: \"1446c56b-735b-42bc-9c5f-04c55b11bf4c\") " pod="openshift-marketplace/certified-operators-2zprb" Jan 27 09:26:01 crc kubenswrapper[4985]: I0127 09:26:01.409002 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1446c56b-735b-42bc-9c5f-04c55b11bf4c-catalog-content\") pod \"certified-operators-2zprb\" (UID: \"1446c56b-735b-42bc-9c5f-04c55b11bf4c\") " pod="openshift-marketplace/certified-operators-2zprb" Jan 27 09:26:01 crc kubenswrapper[4985]: I0127 09:26:01.511346 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65z6x\" (UniqueName: \"kubernetes.io/projected/1446c56b-735b-42bc-9c5f-04c55b11bf4c-kube-api-access-65z6x\") pod \"certified-operators-2zprb\" (UID: \"1446c56b-735b-42bc-9c5f-04c55b11bf4c\") " pod="openshift-marketplace/certified-operators-2zprb" Jan 27 09:26:01 crc kubenswrapper[4985]: I0127 09:26:01.511461 4985 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1446c56b-735b-42bc-9c5f-04c55b11bf4c-utilities\") pod \"certified-operators-2zprb\" (UID: \"1446c56b-735b-42bc-9c5f-04c55b11bf4c\") " pod="openshift-marketplace/certified-operators-2zprb" Jan 27 09:26:01 crc kubenswrapper[4985]: I0127 09:26:01.511497 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1446c56b-735b-42bc-9c5f-04c55b11bf4c-catalog-content\") pod \"certified-operators-2zprb\" (UID: \"1446c56b-735b-42bc-9c5f-04c55b11bf4c\") " pod="openshift-marketplace/certified-operators-2zprb" Jan 27 09:26:01 crc kubenswrapper[4985]: I0127 09:26:01.511942 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1446c56b-735b-42bc-9c5f-04c55b11bf4c-catalog-content\") pod \"certified-operators-2zprb\" (UID: \"1446c56b-735b-42bc-9c5f-04c55b11bf4c\") " pod="openshift-marketplace/certified-operators-2zprb" Jan 27 09:26:01 crc kubenswrapper[4985]: I0127 09:26:01.512437 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1446c56b-735b-42bc-9c5f-04c55b11bf4c-utilities\") pod \"certified-operators-2zprb\" (UID: \"1446c56b-735b-42bc-9c5f-04c55b11bf4c\") " pod="openshift-marketplace/certified-operators-2zprb" Jan 27 09:26:01 crc kubenswrapper[4985]: I0127 09:26:01.532662 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65z6x\" (UniqueName: \"kubernetes.io/projected/1446c56b-735b-42bc-9c5f-04c55b11bf4c-kube-api-access-65z6x\") pod \"certified-operators-2zprb\" (UID: \"1446c56b-735b-42bc-9c5f-04c55b11bf4c\") " pod="openshift-marketplace/certified-operators-2zprb" Jan 27 09:26:01 crc kubenswrapper[4985]: I0127 09:26:01.689061 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2zprb" Jan 27 09:26:01 crc kubenswrapper[4985]: I0127 09:26:01.837616 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tfvx4" event={"ID":"03798f85-696d-434c-ac93-e4277e55687f","Type":"ContainerStarted","Data":"8229c160fffe5b7a28fee72e801eebcdadc7cde8fa139f54d4272bb514421c3f"} Jan 27 09:26:02 crc kubenswrapper[4985]: I0127 09:26:02.220610 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2zprb"] Jan 27 09:26:02 crc kubenswrapper[4985]: I0127 09:26:02.884188 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2zprb" event={"ID":"1446c56b-735b-42bc-9c5f-04c55b11bf4c","Type":"ContainerStarted","Data":"1d87e03af139ec8b024aa08d972666e93573df7d05cb2420d8bef7c41cbe31cb"} Jan 27 09:26:03 crc kubenswrapper[4985]: I0127 09:26:03.895143 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2zprb" event={"ID":"1446c56b-735b-42bc-9c5f-04c55b11bf4c","Type":"ContainerStarted","Data":"4438f4675cb5a50a0a9fbe698adc419d34f8c6143d17ac0c348adc8c70011e42"} Jan 27 09:26:03 crc kubenswrapper[4985]: I0127 09:26:03.897921 4985 generic.go:334] "Generic (PLEG): container finished" podID="03798f85-696d-434c-ac93-e4277e55687f" containerID="8229c160fffe5b7a28fee72e801eebcdadc7cde8fa139f54d4272bb514421c3f" exitCode=0 Jan 27 09:26:03 crc kubenswrapper[4985]: I0127 09:26:03.897967 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tfvx4" event={"ID":"03798f85-696d-434c-ac93-e4277e55687f","Type":"ContainerDied","Data":"8229c160fffe5b7a28fee72e801eebcdadc7cde8fa139f54d4272bb514421c3f"} Jan 27 09:26:04 crc kubenswrapper[4985]: I0127 09:26:04.910048 4985 generic.go:334] "Generic (PLEG): container finished" podID="1446c56b-735b-42bc-9c5f-04c55b11bf4c" 
containerID="4438f4675cb5a50a0a9fbe698adc419d34f8c6143d17ac0c348adc8c70011e42" exitCode=0 Jan 27 09:26:04 crc kubenswrapper[4985]: I0127 09:26:04.910113 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2zprb" event={"ID":"1446c56b-735b-42bc-9c5f-04c55b11bf4c","Type":"ContainerDied","Data":"4438f4675cb5a50a0a9fbe698adc419d34f8c6143d17ac0c348adc8c70011e42"} Jan 27 09:26:05 crc kubenswrapper[4985]: I0127 09:26:05.925245 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tfvx4" event={"ID":"03798f85-696d-434c-ac93-e4277e55687f","Type":"ContainerStarted","Data":"2987049d732e69e16597bbe962b1c3c7e6237718301f5ce87d07744a1b54b763"} Jan 27 09:26:05 crc kubenswrapper[4985]: I0127 09:26:05.955790 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tfvx4" podStartSLOduration=2.361384076 podStartE2EDuration="6.955772197s" podCreationTimestamp="2026-01-27 09:25:59 +0000 UTC" firstStartedPulling="2026-01-27 09:26:00.81722545 +0000 UTC m=+1945.108320291" lastFinishedPulling="2026-01-27 09:26:05.411613571 +0000 UTC m=+1949.702708412" observedRunningTime="2026-01-27 09:26:05.949261388 +0000 UTC m=+1950.240356269" watchObservedRunningTime="2026-01-27 09:26:05.955772197 +0000 UTC m=+1950.246867038" Jan 27 09:26:06 crc kubenswrapper[4985]: I0127 09:26:06.938942 4985 generic.go:334] "Generic (PLEG): container finished" podID="1446c56b-735b-42bc-9c5f-04c55b11bf4c" containerID="3a20ee514075ffc6260f90d7d3d28c6bf1ef5af7a99543c014b9606e9614dbb3" exitCode=0 Jan 27 09:26:06 crc kubenswrapper[4985]: I0127 09:26:06.939089 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2zprb" event={"ID":"1446c56b-735b-42bc-9c5f-04c55b11bf4c","Type":"ContainerDied","Data":"3a20ee514075ffc6260f90d7d3d28c6bf1ef5af7a99543c014b9606e9614dbb3"} Jan 27 09:26:07 crc kubenswrapper[4985]: I0127 
09:26:07.952307 4985 generic.go:334] "Generic (PLEG): container finished" podID="c987829b-664b-4613-a0d2-04bc23b2c0bb" containerID="f843b8755625b8fdb2888e0fcc4310f63082eff265d72658c44b4df531cc9a8e" exitCode=0 Jan 27 09:26:07 crc kubenswrapper[4985]: I0127 09:26:07.952479 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-hp4rl" event={"ID":"c987829b-664b-4613-a0d2-04bc23b2c0bb","Type":"ContainerDied","Data":"f843b8755625b8fdb2888e0fcc4310f63082eff265d72658c44b4df531cc9a8e"} Jan 27 09:26:08 crc kubenswrapper[4985]: I0127 09:26:08.967111 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2zprb" event={"ID":"1446c56b-735b-42bc-9c5f-04c55b11bf4c","Type":"ContainerStarted","Data":"e15f1fb00fda54caa4b4f587fc57ea8884261b6e33b84b8a74b43613ee1b677f"} Jan 27 09:26:09 crc kubenswrapper[4985]: I0127 09:26:09.004338 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2zprb" podStartSLOduration=4.921849943 podStartE2EDuration="8.004309602s" podCreationTimestamp="2026-01-27 09:26:01 +0000 UTC" firstStartedPulling="2026-01-27 09:26:04.912336098 +0000 UTC m=+1949.203430939" lastFinishedPulling="2026-01-27 09:26:07.994795757 +0000 UTC m=+1952.285890598" observedRunningTime="2026-01-27 09:26:09.000358894 +0000 UTC m=+1953.291453755" watchObservedRunningTime="2026-01-27 09:26:09.004309602 +0000 UTC m=+1953.295404443" Jan 27 09:26:09 crc kubenswrapper[4985]: I0127 09:26:09.449675 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tfvx4" Jan 27 09:26:09 crc kubenswrapper[4985]: I0127 09:26:09.450145 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tfvx4" Jan 27 09:26:09 crc kubenswrapper[4985]: I0127 09:26:09.749870 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-hp4rl" Jan 27 09:26:09 crc kubenswrapper[4985]: I0127 09:26:09.805343 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c987829b-664b-4613-a0d2-04bc23b2c0bb-inventory-0\") pod \"c987829b-664b-4613-a0d2-04bc23b2c0bb\" (UID: \"c987829b-664b-4613-a0d2-04bc23b2c0bb\") " Jan 27 09:26:09 crc kubenswrapper[4985]: I0127 09:26:09.805617 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c987829b-664b-4613-a0d2-04bc23b2c0bb-ssh-key-openstack-edpm-ipam\") pod \"c987829b-664b-4613-a0d2-04bc23b2c0bb\" (UID: \"c987829b-664b-4613-a0d2-04bc23b2c0bb\") " Jan 27 09:26:09 crc kubenswrapper[4985]: I0127 09:26:09.805674 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s6s7q\" (UniqueName: \"kubernetes.io/projected/c987829b-664b-4613-a0d2-04bc23b2c0bb-kube-api-access-s6s7q\") pod \"c987829b-664b-4613-a0d2-04bc23b2c0bb\" (UID: \"c987829b-664b-4613-a0d2-04bc23b2c0bb\") " Jan 27 09:26:09 crc kubenswrapper[4985]: I0127 09:26:09.816987 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c987829b-664b-4613-a0d2-04bc23b2c0bb-kube-api-access-s6s7q" (OuterVolumeSpecName: "kube-api-access-s6s7q") pod "c987829b-664b-4613-a0d2-04bc23b2c0bb" (UID: "c987829b-664b-4613-a0d2-04bc23b2c0bb"). InnerVolumeSpecName "kube-api-access-s6s7q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:26:09 crc kubenswrapper[4985]: I0127 09:26:09.849268 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c987829b-664b-4613-a0d2-04bc23b2c0bb-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "c987829b-664b-4613-a0d2-04bc23b2c0bb" (UID: "c987829b-664b-4613-a0d2-04bc23b2c0bb"). 
InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:26:09 crc kubenswrapper[4985]: I0127 09:26:09.855082 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c987829b-664b-4613-a0d2-04bc23b2c0bb-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c987829b-664b-4613-a0d2-04bc23b2c0bb" (UID: "c987829b-664b-4613-a0d2-04bc23b2c0bb"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:26:09 crc kubenswrapper[4985]: I0127 09:26:09.908095 4985 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c987829b-664b-4613-a0d2-04bc23b2c0bb-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 27 09:26:09 crc kubenswrapper[4985]: I0127 09:26:09.908143 4985 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c987829b-664b-4613-a0d2-04bc23b2c0bb-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 09:26:09 crc kubenswrapper[4985]: I0127 09:26:09.908157 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s6s7q\" (UniqueName: \"kubernetes.io/projected/c987829b-664b-4613-a0d2-04bc23b2c0bb-kube-api-access-s6s7q\") on node \"crc\" DevicePath \"\"" Jan 27 09:26:09 crc kubenswrapper[4985]: I0127 09:26:09.982229 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-hp4rl" Jan 27 09:26:09 crc kubenswrapper[4985]: I0127 09:26:09.982222 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-hp4rl" event={"ID":"c987829b-664b-4613-a0d2-04bc23b2c0bb","Type":"ContainerDied","Data":"06b687ba67ad78d7fc824a95c4b57c1cc3be151c33425b0a1ffc3f41a564f2ea"} Jan 27 09:26:09 crc kubenswrapper[4985]: I0127 09:26:09.983396 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06b687ba67ad78d7fc824a95c4b57c1cc3be151c33425b0a1ffc3f41a564f2ea" Jan 27 09:26:10 crc kubenswrapper[4985]: I0127 09:26:10.074288 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-9dqdw"] Jan 27 09:26:10 crc kubenswrapper[4985]: E0127 09:26:10.075306 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c987829b-664b-4613-a0d2-04bc23b2c0bb" containerName="ssh-known-hosts-edpm-deployment" Jan 27 09:26:10 crc kubenswrapper[4985]: I0127 09:26:10.075326 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="c987829b-664b-4613-a0d2-04bc23b2c0bb" containerName="ssh-known-hosts-edpm-deployment" Jan 27 09:26:10 crc kubenswrapper[4985]: I0127 09:26:10.075934 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="c987829b-664b-4613-a0d2-04bc23b2c0bb" containerName="ssh-known-hosts-edpm-deployment" Jan 27 09:26:10 crc kubenswrapper[4985]: I0127 09:26:10.076960 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9dqdw" Jan 27 09:26:10 crc kubenswrapper[4985]: I0127 09:26:10.079197 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 09:26:10 crc kubenswrapper[4985]: I0127 09:26:10.080829 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 09:26:10 crc kubenswrapper[4985]: I0127 09:26:10.081334 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-s87fp" Jan 27 09:26:10 crc kubenswrapper[4985]: I0127 09:26:10.082321 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 09:26:10 crc kubenswrapper[4985]: I0127 09:26:10.084339 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-9dqdw"] Jan 27 09:26:10 crc kubenswrapper[4985]: I0127 09:26:10.217161 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/09b18f4c-b94f-4dff-b191-639e7734adb4-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-9dqdw\" (UID: \"09b18f4c-b94f-4dff-b191-639e7734adb4\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9dqdw" Jan 27 09:26:10 crc kubenswrapper[4985]: I0127 09:26:10.217233 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/09b18f4c-b94f-4dff-b191-639e7734adb4-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-9dqdw\" (UID: \"09b18f4c-b94f-4dff-b191-639e7734adb4\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9dqdw" Jan 27 09:26:10 crc kubenswrapper[4985]: I0127 09:26:10.217478 4985 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjq5n\" (UniqueName: \"kubernetes.io/projected/09b18f4c-b94f-4dff-b191-639e7734adb4-kube-api-access-rjq5n\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-9dqdw\" (UID: \"09b18f4c-b94f-4dff-b191-639e7734adb4\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9dqdw" Jan 27 09:26:10 crc kubenswrapper[4985]: I0127 09:26:10.322410 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/09b18f4c-b94f-4dff-b191-639e7734adb4-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-9dqdw\" (UID: \"09b18f4c-b94f-4dff-b191-639e7734adb4\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9dqdw" Jan 27 09:26:10 crc kubenswrapper[4985]: I0127 09:26:10.322582 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/09b18f4c-b94f-4dff-b191-639e7734adb4-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-9dqdw\" (UID: \"09b18f4c-b94f-4dff-b191-639e7734adb4\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9dqdw" Jan 27 09:26:10 crc kubenswrapper[4985]: I0127 09:26:10.323555 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjq5n\" (UniqueName: \"kubernetes.io/projected/09b18f4c-b94f-4dff-b191-639e7734adb4-kube-api-access-rjq5n\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-9dqdw\" (UID: \"09b18f4c-b94f-4dff-b191-639e7734adb4\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9dqdw" Jan 27 09:26:10 crc kubenswrapper[4985]: I0127 09:26:10.327727 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/09b18f4c-b94f-4dff-b191-639e7734adb4-ssh-key-openstack-edpm-ipam\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-9dqdw\" (UID: \"09b18f4c-b94f-4dff-b191-639e7734adb4\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9dqdw" Jan 27 09:26:10 crc kubenswrapper[4985]: I0127 09:26:10.329674 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/09b18f4c-b94f-4dff-b191-639e7734adb4-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-9dqdw\" (UID: \"09b18f4c-b94f-4dff-b191-639e7734adb4\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9dqdw" Jan 27 09:26:10 crc kubenswrapper[4985]: I0127 09:26:10.358265 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjq5n\" (UniqueName: \"kubernetes.io/projected/09b18f4c-b94f-4dff-b191-639e7734adb4-kube-api-access-rjq5n\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-9dqdw\" (UID: \"09b18f4c-b94f-4dff-b191-639e7734adb4\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9dqdw" Jan 27 09:26:10 crc kubenswrapper[4985]: I0127 09:26:10.406617 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9dqdw" Jan 27 09:26:10 crc kubenswrapper[4985]: I0127 09:26:10.512710 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tfvx4" podUID="03798f85-696d-434c-ac93-e4277e55687f" containerName="registry-server" probeResult="failure" output=< Jan 27 09:26:10 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 27 09:26:10 crc kubenswrapper[4985]: > Jan 27 09:26:10 crc kubenswrapper[4985]: W0127 09:26:10.996975 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod09b18f4c_b94f_4dff_b191_639e7734adb4.slice/crio-b7922e96c12a0f06ae2bef88e234e58c4a4f3d775944fb90e40ae119e0854e13 WatchSource:0}: Error finding container b7922e96c12a0f06ae2bef88e234e58c4a4f3d775944fb90e40ae119e0854e13: Status 404 returned error can't find the container with id b7922e96c12a0f06ae2bef88e234e58c4a4f3d775944fb90e40ae119e0854e13 Jan 27 09:26:11 crc kubenswrapper[4985]: I0127 09:26:11.056099 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-9dqdw"] Jan 27 09:26:11 crc kubenswrapper[4985]: I0127 09:26:11.452109 4985 scope.go:117] "RemoveContainer" containerID="7ba28a0f021e3d5eea7e45a3d43bec2dd0d775a22e805e1915a3667c64bf2c54" Jan 27 09:26:11 crc kubenswrapper[4985]: E0127 09:26:11.452382 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp9n5_openshift-machine-config-operator(c066dd2f-48d4-4f4f-935d-0e772678e610)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" Jan 27 09:26:11 crc kubenswrapper[4985]: I0127 09:26:11.689992 4985 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2zprb" Jan 27 09:26:11 crc kubenswrapper[4985]: I0127 09:26:11.690260 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2zprb" Jan 27 09:26:11 crc kubenswrapper[4985]: I0127 09:26:11.742438 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2zprb" Jan 27 09:26:12 crc kubenswrapper[4985]: I0127 09:26:12.065342 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9dqdw" event={"ID":"09b18f4c-b94f-4dff-b191-639e7734adb4","Type":"ContainerStarted","Data":"b7922e96c12a0f06ae2bef88e234e58c4a4f3d775944fb90e40ae119e0854e13"} Jan 27 09:26:13 crc kubenswrapper[4985]: I0127 09:26:13.074895 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9dqdw" event={"ID":"09b18f4c-b94f-4dff-b191-639e7734adb4","Type":"ContainerStarted","Data":"7edb0bb7e38d30fcc90643808f6d3daaa0d3ac0d0ddcd64abc0e22ef95cc7e2b"} Jan 27 09:26:13 crc kubenswrapper[4985]: I0127 09:26:13.095196 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9dqdw" podStartSLOduration=2.065737952 podStartE2EDuration="3.095176545s" podCreationTimestamp="2026-01-27 09:26:10 +0000 UTC" firstStartedPulling="2026-01-27 09:26:11.000412322 +0000 UTC m=+1955.291507183" lastFinishedPulling="2026-01-27 09:26:12.029850935 +0000 UTC m=+1956.320945776" observedRunningTime="2026-01-27 09:26:13.09315979 +0000 UTC m=+1957.384254641" watchObservedRunningTime="2026-01-27 09:26:13.095176545 +0000 UTC m=+1957.386271386" Jan 27 09:26:20 crc kubenswrapper[4985]: I0127 09:26:20.495372 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tfvx4" 
podUID="03798f85-696d-434c-ac93-e4277e55687f" containerName="registry-server" probeResult="failure" output=< Jan 27 09:26:20 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 27 09:26:20 crc kubenswrapper[4985]: > Jan 27 09:26:21 crc kubenswrapper[4985]: I0127 09:26:21.149907 4985 generic.go:334] "Generic (PLEG): container finished" podID="09b18f4c-b94f-4dff-b191-639e7734adb4" containerID="7edb0bb7e38d30fcc90643808f6d3daaa0d3ac0d0ddcd64abc0e22ef95cc7e2b" exitCode=0 Jan 27 09:26:21 crc kubenswrapper[4985]: I0127 09:26:21.149984 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9dqdw" event={"ID":"09b18f4c-b94f-4dff-b191-639e7734adb4","Type":"ContainerDied","Data":"7edb0bb7e38d30fcc90643808f6d3daaa0d3ac0d0ddcd64abc0e22ef95cc7e2b"} Jan 27 09:26:21 crc kubenswrapper[4985]: I0127 09:26:21.739320 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2zprb" Jan 27 09:26:21 crc kubenswrapper[4985]: I0127 09:26:21.818966 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2zprb"] Jan 27 09:26:22 crc kubenswrapper[4985]: I0127 09:26:22.160498 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-2zprb" podUID="1446c56b-735b-42bc-9c5f-04c55b11bf4c" containerName="registry-server" containerID="cri-o://e15f1fb00fda54caa4b4f587fc57ea8884261b6e33b84b8a74b43613ee1b677f" gracePeriod=2 Jan 27 09:26:22 crc kubenswrapper[4985]: I0127 09:26:22.810262 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2zprb" Jan 27 09:26:22 crc kubenswrapper[4985]: I0127 09:26:22.819122 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9dqdw" Jan 27 09:26:22 crc kubenswrapper[4985]: I0127 09:26:22.852765 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-65z6x\" (UniqueName: \"kubernetes.io/projected/1446c56b-735b-42bc-9c5f-04c55b11bf4c-kube-api-access-65z6x\") pod \"1446c56b-735b-42bc-9c5f-04c55b11bf4c\" (UID: \"1446c56b-735b-42bc-9c5f-04c55b11bf4c\") " Jan 27 09:26:22 crc kubenswrapper[4985]: I0127 09:26:22.852818 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjq5n\" (UniqueName: \"kubernetes.io/projected/09b18f4c-b94f-4dff-b191-639e7734adb4-kube-api-access-rjq5n\") pod \"09b18f4c-b94f-4dff-b191-639e7734adb4\" (UID: \"09b18f4c-b94f-4dff-b191-639e7734adb4\") " Jan 27 09:26:22 crc kubenswrapper[4985]: I0127 09:26:22.852858 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1446c56b-735b-42bc-9c5f-04c55b11bf4c-utilities\") pod \"1446c56b-735b-42bc-9c5f-04c55b11bf4c\" (UID: \"1446c56b-735b-42bc-9c5f-04c55b11bf4c\") " Jan 27 09:26:22 crc kubenswrapper[4985]: I0127 09:26:22.852901 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/09b18f4c-b94f-4dff-b191-639e7734adb4-ssh-key-openstack-edpm-ipam\") pod \"09b18f4c-b94f-4dff-b191-639e7734adb4\" (UID: \"09b18f4c-b94f-4dff-b191-639e7734adb4\") " Jan 27 09:26:22 crc kubenswrapper[4985]: I0127 09:26:22.852927 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/09b18f4c-b94f-4dff-b191-639e7734adb4-inventory\") pod \"09b18f4c-b94f-4dff-b191-639e7734adb4\" (UID: \"09b18f4c-b94f-4dff-b191-639e7734adb4\") " Jan 27 09:26:22 crc kubenswrapper[4985]: I0127 09:26:22.853093 4985 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1446c56b-735b-42bc-9c5f-04c55b11bf4c-catalog-content\") pod \"1446c56b-735b-42bc-9c5f-04c55b11bf4c\" (UID: \"1446c56b-735b-42bc-9c5f-04c55b11bf4c\") " Jan 27 09:26:22 crc kubenswrapper[4985]: I0127 09:26:22.863243 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1446c56b-735b-42bc-9c5f-04c55b11bf4c-utilities" (OuterVolumeSpecName: "utilities") pod "1446c56b-735b-42bc-9c5f-04c55b11bf4c" (UID: "1446c56b-735b-42bc-9c5f-04c55b11bf4c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:26:22 crc kubenswrapper[4985]: I0127 09:26:22.892454 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1446c56b-735b-42bc-9c5f-04c55b11bf4c-kube-api-access-65z6x" (OuterVolumeSpecName: "kube-api-access-65z6x") pod "1446c56b-735b-42bc-9c5f-04c55b11bf4c" (UID: "1446c56b-735b-42bc-9c5f-04c55b11bf4c"). InnerVolumeSpecName "kube-api-access-65z6x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:26:22 crc kubenswrapper[4985]: I0127 09:26:22.896892 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09b18f4c-b94f-4dff-b191-639e7734adb4-kube-api-access-rjq5n" (OuterVolumeSpecName: "kube-api-access-rjq5n") pod "09b18f4c-b94f-4dff-b191-639e7734adb4" (UID: "09b18f4c-b94f-4dff-b191-639e7734adb4"). InnerVolumeSpecName "kube-api-access-rjq5n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:26:22 crc kubenswrapper[4985]: I0127 09:26:22.911966 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09b18f4c-b94f-4dff-b191-639e7734adb4-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "09b18f4c-b94f-4dff-b191-639e7734adb4" (UID: "09b18f4c-b94f-4dff-b191-639e7734adb4"). 
InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:26:22 crc kubenswrapper[4985]: I0127 09:26:22.915200 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1446c56b-735b-42bc-9c5f-04c55b11bf4c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1446c56b-735b-42bc-9c5f-04c55b11bf4c" (UID: "1446c56b-735b-42bc-9c5f-04c55b11bf4c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:26:22 crc kubenswrapper[4985]: I0127 09:26:22.920268 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09b18f4c-b94f-4dff-b191-639e7734adb4-inventory" (OuterVolumeSpecName: "inventory") pod "09b18f4c-b94f-4dff-b191-639e7734adb4" (UID: "09b18f4c-b94f-4dff-b191-639e7734adb4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:26:22 crc kubenswrapper[4985]: I0127 09:26:22.954778 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1446c56b-735b-42bc-9c5f-04c55b11bf4c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 09:26:22 crc kubenswrapper[4985]: I0127 09:26:22.954830 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-65z6x\" (UniqueName: \"kubernetes.io/projected/1446c56b-735b-42bc-9c5f-04c55b11bf4c-kube-api-access-65z6x\") on node \"crc\" DevicePath \"\"" Jan 27 09:26:22 crc kubenswrapper[4985]: I0127 09:26:22.954844 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rjq5n\" (UniqueName: \"kubernetes.io/projected/09b18f4c-b94f-4dff-b191-639e7734adb4-kube-api-access-rjq5n\") on node \"crc\" DevicePath \"\"" Jan 27 09:26:22 crc kubenswrapper[4985]: I0127 09:26:22.954855 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/1446c56b-735b-42bc-9c5f-04c55b11bf4c-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 09:26:22 crc kubenswrapper[4985]: I0127 09:26:22.954869 4985 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/09b18f4c-b94f-4dff-b191-639e7734adb4-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 09:26:22 crc kubenswrapper[4985]: I0127 09:26:22.954880 4985 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/09b18f4c-b94f-4dff-b191-639e7734adb4-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 09:26:23 crc kubenswrapper[4985]: I0127 09:26:23.171573 4985 generic.go:334] "Generic (PLEG): container finished" podID="1446c56b-735b-42bc-9c5f-04c55b11bf4c" containerID="e15f1fb00fda54caa4b4f587fc57ea8884261b6e33b84b8a74b43613ee1b677f" exitCode=0 Jan 27 09:26:23 crc kubenswrapper[4985]: I0127 09:26:23.171682 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2zprb" event={"ID":"1446c56b-735b-42bc-9c5f-04c55b11bf4c","Type":"ContainerDied","Data":"e15f1fb00fda54caa4b4f587fc57ea8884261b6e33b84b8a74b43613ee1b677f"} Jan 27 09:26:23 crc kubenswrapper[4985]: I0127 09:26:23.171741 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2zprb" event={"ID":"1446c56b-735b-42bc-9c5f-04c55b11bf4c","Type":"ContainerDied","Data":"1d87e03af139ec8b024aa08d972666e93573df7d05cb2420d8bef7c41cbe31cb"} Jan 27 09:26:23 crc kubenswrapper[4985]: I0127 09:26:23.171768 4985 scope.go:117] "RemoveContainer" containerID="e15f1fb00fda54caa4b4f587fc57ea8884261b6e33b84b8a74b43613ee1b677f" Jan 27 09:26:23 crc kubenswrapper[4985]: I0127 09:26:23.172161 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2zprb" Jan 27 09:26:23 crc kubenswrapper[4985]: I0127 09:26:23.174268 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9dqdw" event={"ID":"09b18f4c-b94f-4dff-b191-639e7734adb4","Type":"ContainerDied","Data":"b7922e96c12a0f06ae2bef88e234e58c4a4f3d775944fb90e40ae119e0854e13"} Jan 27 09:26:23 crc kubenswrapper[4985]: I0127 09:26:23.174307 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7922e96c12a0f06ae2bef88e234e58c4a4f3d775944fb90e40ae119e0854e13" Jan 27 09:26:23 crc kubenswrapper[4985]: I0127 09:26:23.174372 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9dqdw" Jan 27 09:26:23 crc kubenswrapper[4985]: I0127 09:26:23.197672 4985 scope.go:117] "RemoveContainer" containerID="3a20ee514075ffc6260f90d7d3d28c6bf1ef5af7a99543c014b9606e9614dbb3" Jan 27 09:26:23 crc kubenswrapper[4985]: I0127 09:26:23.235610 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2zprb"] Jan 27 09:26:23 crc kubenswrapper[4985]: I0127 09:26:23.243951 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-2zprb"] Jan 27 09:26:23 crc kubenswrapper[4985]: I0127 09:26:23.244929 4985 scope.go:117] "RemoveContainer" containerID="4438f4675cb5a50a0a9fbe698adc419d34f8c6143d17ac0c348adc8c70011e42" Jan 27 09:26:23 crc kubenswrapper[4985]: I0127 09:26:23.268359 4985 scope.go:117] "RemoveContainer" containerID="e15f1fb00fda54caa4b4f587fc57ea8884261b6e33b84b8a74b43613ee1b677f" Jan 27 09:26:23 crc kubenswrapper[4985]: E0127 09:26:23.268996 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e15f1fb00fda54caa4b4f587fc57ea8884261b6e33b84b8a74b43613ee1b677f\": container with ID 
starting with e15f1fb00fda54caa4b4f587fc57ea8884261b6e33b84b8a74b43613ee1b677f not found: ID does not exist" containerID="e15f1fb00fda54caa4b4f587fc57ea8884261b6e33b84b8a74b43613ee1b677f" Jan 27 09:26:23 crc kubenswrapper[4985]: I0127 09:26:23.269032 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e15f1fb00fda54caa4b4f587fc57ea8884261b6e33b84b8a74b43613ee1b677f"} err="failed to get container status \"e15f1fb00fda54caa4b4f587fc57ea8884261b6e33b84b8a74b43613ee1b677f\": rpc error: code = NotFound desc = could not find container \"e15f1fb00fda54caa4b4f587fc57ea8884261b6e33b84b8a74b43613ee1b677f\": container with ID starting with e15f1fb00fda54caa4b4f587fc57ea8884261b6e33b84b8a74b43613ee1b677f not found: ID does not exist" Jan 27 09:26:23 crc kubenswrapper[4985]: I0127 09:26:23.269059 4985 scope.go:117] "RemoveContainer" containerID="3a20ee514075ffc6260f90d7d3d28c6bf1ef5af7a99543c014b9606e9614dbb3" Jan 27 09:26:23 crc kubenswrapper[4985]: E0127 09:26:23.273821 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a20ee514075ffc6260f90d7d3d28c6bf1ef5af7a99543c014b9606e9614dbb3\": container with ID starting with 3a20ee514075ffc6260f90d7d3d28c6bf1ef5af7a99543c014b9606e9614dbb3 not found: ID does not exist" containerID="3a20ee514075ffc6260f90d7d3d28c6bf1ef5af7a99543c014b9606e9614dbb3" Jan 27 09:26:23 crc kubenswrapper[4985]: I0127 09:26:23.273858 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a20ee514075ffc6260f90d7d3d28c6bf1ef5af7a99543c014b9606e9614dbb3"} err="failed to get container status \"3a20ee514075ffc6260f90d7d3d28c6bf1ef5af7a99543c014b9606e9614dbb3\": rpc error: code = NotFound desc = could not find container \"3a20ee514075ffc6260f90d7d3d28c6bf1ef5af7a99543c014b9606e9614dbb3\": container with ID starting with 3a20ee514075ffc6260f90d7d3d28c6bf1ef5af7a99543c014b9606e9614dbb3 not found: 
ID does not exist" Jan 27 09:26:23 crc kubenswrapper[4985]: I0127 09:26:23.273878 4985 scope.go:117] "RemoveContainer" containerID="4438f4675cb5a50a0a9fbe698adc419d34f8c6143d17ac0c348adc8c70011e42" Jan 27 09:26:23 crc kubenswrapper[4985]: E0127 09:26:23.274555 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4438f4675cb5a50a0a9fbe698adc419d34f8c6143d17ac0c348adc8c70011e42\": container with ID starting with 4438f4675cb5a50a0a9fbe698adc419d34f8c6143d17ac0c348adc8c70011e42 not found: ID does not exist" containerID="4438f4675cb5a50a0a9fbe698adc419d34f8c6143d17ac0c348adc8c70011e42" Jan 27 09:26:23 crc kubenswrapper[4985]: I0127 09:26:23.274585 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4438f4675cb5a50a0a9fbe698adc419d34f8c6143d17ac0c348adc8c70011e42"} err="failed to get container status \"4438f4675cb5a50a0a9fbe698adc419d34f8c6143d17ac0c348adc8c70011e42\": rpc error: code = NotFound desc = could not find container \"4438f4675cb5a50a0a9fbe698adc419d34f8c6143d17ac0c348adc8c70011e42\": container with ID starting with 4438f4675cb5a50a0a9fbe698adc419d34f8c6143d17ac0c348adc8c70011e42 not found: ID does not exist" Jan 27 09:26:23 crc kubenswrapper[4985]: I0127 09:26:23.327107 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lpm2n"] Jan 27 09:26:23 crc kubenswrapper[4985]: E0127 09:26:23.327497 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1446c56b-735b-42bc-9c5f-04c55b11bf4c" containerName="extract-utilities" Jan 27 09:26:23 crc kubenswrapper[4985]: I0127 09:26:23.327535 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="1446c56b-735b-42bc-9c5f-04c55b11bf4c" containerName="extract-utilities" Jan 27 09:26:23 crc kubenswrapper[4985]: E0127 09:26:23.327561 4985 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="09b18f4c-b94f-4dff-b191-639e7734adb4" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 27 09:26:23 crc kubenswrapper[4985]: I0127 09:26:23.327573 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="09b18f4c-b94f-4dff-b191-639e7734adb4" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 27 09:26:23 crc kubenswrapper[4985]: E0127 09:26:23.327590 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1446c56b-735b-42bc-9c5f-04c55b11bf4c" containerName="extract-content" Jan 27 09:26:23 crc kubenswrapper[4985]: I0127 09:26:23.327597 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="1446c56b-735b-42bc-9c5f-04c55b11bf4c" containerName="extract-content" Jan 27 09:26:23 crc kubenswrapper[4985]: E0127 09:26:23.327620 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1446c56b-735b-42bc-9c5f-04c55b11bf4c" containerName="registry-server" Jan 27 09:26:23 crc kubenswrapper[4985]: I0127 09:26:23.327626 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="1446c56b-735b-42bc-9c5f-04c55b11bf4c" containerName="registry-server" Jan 27 09:26:23 crc kubenswrapper[4985]: I0127 09:26:23.327825 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="1446c56b-735b-42bc-9c5f-04c55b11bf4c" containerName="registry-server" Jan 27 09:26:23 crc kubenswrapper[4985]: I0127 09:26:23.327847 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="09b18f4c-b94f-4dff-b191-639e7734adb4" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 27 09:26:23 crc kubenswrapper[4985]: I0127 09:26:23.328481 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lpm2n" Jan 27 09:26:23 crc kubenswrapper[4985]: I0127 09:26:23.331281 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 09:26:23 crc kubenswrapper[4985]: I0127 09:26:23.332420 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-s87fp" Jan 27 09:26:23 crc kubenswrapper[4985]: I0127 09:26:23.332427 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 09:26:23 crc kubenswrapper[4985]: I0127 09:26:23.333038 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 09:26:23 crc kubenswrapper[4985]: I0127 09:26:23.347773 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lpm2n"] Jan 27 09:26:23 crc kubenswrapper[4985]: I0127 09:26:23.464345 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6x9jb\" (UniqueName: \"kubernetes.io/projected/a4e0044e-7ce9-4e14-ad79-112dba0165a5-kube-api-access-6x9jb\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-lpm2n\" (UID: \"a4e0044e-7ce9-4e14-ad79-112dba0165a5\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lpm2n" Jan 27 09:26:23 crc kubenswrapper[4985]: I0127 09:26:23.464499 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a4e0044e-7ce9-4e14-ad79-112dba0165a5-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-lpm2n\" (UID: \"a4e0044e-7ce9-4e14-ad79-112dba0165a5\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lpm2n" Jan 27 09:26:23 crc kubenswrapper[4985]: I0127 
09:26:23.464750 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a4e0044e-7ce9-4e14-ad79-112dba0165a5-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-lpm2n\" (UID: \"a4e0044e-7ce9-4e14-ad79-112dba0165a5\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lpm2n" Jan 27 09:26:23 crc kubenswrapper[4985]: I0127 09:26:23.566767 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a4e0044e-7ce9-4e14-ad79-112dba0165a5-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-lpm2n\" (UID: \"a4e0044e-7ce9-4e14-ad79-112dba0165a5\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lpm2n" Jan 27 09:26:23 crc kubenswrapper[4985]: I0127 09:26:23.566853 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6x9jb\" (UniqueName: \"kubernetes.io/projected/a4e0044e-7ce9-4e14-ad79-112dba0165a5-kube-api-access-6x9jb\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-lpm2n\" (UID: \"a4e0044e-7ce9-4e14-ad79-112dba0165a5\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lpm2n" Jan 27 09:26:23 crc kubenswrapper[4985]: I0127 09:26:23.566935 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a4e0044e-7ce9-4e14-ad79-112dba0165a5-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-lpm2n\" (UID: \"a4e0044e-7ce9-4e14-ad79-112dba0165a5\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lpm2n" Jan 27 09:26:23 crc kubenswrapper[4985]: I0127 09:26:23.573473 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a4e0044e-7ce9-4e14-ad79-112dba0165a5-inventory\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-lpm2n\" (UID: \"a4e0044e-7ce9-4e14-ad79-112dba0165a5\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lpm2n" Jan 27 09:26:23 crc kubenswrapper[4985]: I0127 09:26:23.574145 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a4e0044e-7ce9-4e14-ad79-112dba0165a5-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-lpm2n\" (UID: \"a4e0044e-7ce9-4e14-ad79-112dba0165a5\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lpm2n" Jan 27 09:26:23 crc kubenswrapper[4985]: I0127 09:26:23.596047 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6x9jb\" (UniqueName: \"kubernetes.io/projected/a4e0044e-7ce9-4e14-ad79-112dba0165a5-kube-api-access-6x9jb\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-lpm2n\" (UID: \"a4e0044e-7ce9-4e14-ad79-112dba0165a5\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lpm2n" Jan 27 09:26:23 crc kubenswrapper[4985]: I0127 09:26:23.682037 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lpm2n" Jan 27 09:26:24 crc kubenswrapper[4985]: I0127 09:26:24.245807 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lpm2n"] Jan 27 09:26:24 crc kubenswrapper[4985]: W0127 09:26:24.254079 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda4e0044e_7ce9_4e14_ad79_112dba0165a5.slice/crio-30965e60bb4fd576396c00f8af05ff54d895be76387bf629b36142a1286fabd7 WatchSource:0}: Error finding container 30965e60bb4fd576396c00f8af05ff54d895be76387bf629b36142a1286fabd7: Status 404 returned error can't find the container with id 30965e60bb4fd576396c00f8af05ff54d895be76387bf629b36142a1286fabd7 Jan 27 09:26:24 crc kubenswrapper[4985]: I0127 09:26:24.258531 4985 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 09:26:24 crc kubenswrapper[4985]: I0127 09:26:24.452431 4985 scope.go:117] "RemoveContainer" containerID="7ba28a0f021e3d5eea7e45a3d43bec2dd0d775a22e805e1915a3667c64bf2c54" Jan 27 09:26:24 crc kubenswrapper[4985]: E0127 09:26:24.452863 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp9n5_openshift-machine-config-operator(c066dd2f-48d4-4f4f-935d-0e772678e610)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" Jan 27 09:26:24 crc kubenswrapper[4985]: I0127 09:26:24.468680 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1446c56b-735b-42bc-9c5f-04c55b11bf4c" path="/var/lib/kubelet/pods/1446c56b-735b-42bc-9c5f-04c55b11bf4c/volumes" Jan 27 09:26:25 crc kubenswrapper[4985]: I0127 09:26:25.200388 4985 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lpm2n" event={"ID":"a4e0044e-7ce9-4e14-ad79-112dba0165a5","Type":"ContainerStarted","Data":"dd245f880de8dab8ac584bc462143c48fb2c70704441ca2dc8197fc55e09f6d0"} Jan 27 09:26:25 crc kubenswrapper[4985]: I0127 09:26:25.201069 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lpm2n" event={"ID":"a4e0044e-7ce9-4e14-ad79-112dba0165a5","Type":"ContainerStarted","Data":"30965e60bb4fd576396c00f8af05ff54d895be76387bf629b36142a1286fabd7"} Jan 27 09:26:25 crc kubenswrapper[4985]: I0127 09:26:25.237234 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lpm2n" podStartSLOduration=1.788256755 podStartE2EDuration="2.237204833s" podCreationTimestamp="2026-01-27 09:26:23 +0000 UTC" firstStartedPulling="2026-01-27 09:26:24.258203897 +0000 UTC m=+1968.549298738" lastFinishedPulling="2026-01-27 09:26:24.707151975 +0000 UTC m=+1968.998246816" observedRunningTime="2026-01-27 09:26:25.219583788 +0000 UTC m=+1969.510678649" watchObservedRunningTime="2026-01-27 09:26:25.237204833 +0000 UTC m=+1969.528299684" Jan 27 09:26:30 crc kubenswrapper[4985]: I0127 09:26:30.507168 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tfvx4" podUID="03798f85-696d-434c-ac93-e4277e55687f" containerName="registry-server" probeResult="failure" output=< Jan 27 09:26:30 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 27 09:26:30 crc kubenswrapper[4985]: > Jan 27 09:26:35 crc kubenswrapper[4985]: I0127 09:26:35.290859 4985 generic.go:334] "Generic (PLEG): container finished" podID="a4e0044e-7ce9-4e14-ad79-112dba0165a5" containerID="dd245f880de8dab8ac584bc462143c48fb2c70704441ca2dc8197fc55e09f6d0" exitCode=0 Jan 27 09:26:35 crc kubenswrapper[4985]: I0127 09:26:35.290979 4985 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lpm2n" event={"ID":"a4e0044e-7ce9-4e14-ad79-112dba0165a5","Type":"ContainerDied","Data":"dd245f880de8dab8ac584bc462143c48fb2c70704441ca2dc8197fc55e09f6d0"} Jan 27 09:26:36 crc kubenswrapper[4985]: I0127 09:26:36.749961 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lpm2n" Jan 27 09:26:36 crc kubenswrapper[4985]: I0127 09:26:36.857228 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a4e0044e-7ce9-4e14-ad79-112dba0165a5-ssh-key-openstack-edpm-ipam\") pod \"a4e0044e-7ce9-4e14-ad79-112dba0165a5\" (UID: \"a4e0044e-7ce9-4e14-ad79-112dba0165a5\") " Jan 27 09:26:36 crc kubenswrapper[4985]: I0127 09:26:36.857464 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6x9jb\" (UniqueName: \"kubernetes.io/projected/a4e0044e-7ce9-4e14-ad79-112dba0165a5-kube-api-access-6x9jb\") pod \"a4e0044e-7ce9-4e14-ad79-112dba0165a5\" (UID: \"a4e0044e-7ce9-4e14-ad79-112dba0165a5\") " Jan 27 09:26:36 crc kubenswrapper[4985]: I0127 09:26:36.857594 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a4e0044e-7ce9-4e14-ad79-112dba0165a5-inventory\") pod \"a4e0044e-7ce9-4e14-ad79-112dba0165a5\" (UID: \"a4e0044e-7ce9-4e14-ad79-112dba0165a5\") " Jan 27 09:26:36 crc kubenswrapper[4985]: I0127 09:26:36.863781 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4e0044e-7ce9-4e14-ad79-112dba0165a5-kube-api-access-6x9jb" (OuterVolumeSpecName: "kube-api-access-6x9jb") pod "a4e0044e-7ce9-4e14-ad79-112dba0165a5" (UID: "a4e0044e-7ce9-4e14-ad79-112dba0165a5"). InnerVolumeSpecName "kube-api-access-6x9jb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:26:36 crc kubenswrapper[4985]: I0127 09:26:36.885960 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4e0044e-7ce9-4e14-ad79-112dba0165a5-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a4e0044e-7ce9-4e14-ad79-112dba0165a5" (UID: "a4e0044e-7ce9-4e14-ad79-112dba0165a5"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:26:36 crc kubenswrapper[4985]: I0127 09:26:36.887307 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4e0044e-7ce9-4e14-ad79-112dba0165a5-inventory" (OuterVolumeSpecName: "inventory") pod "a4e0044e-7ce9-4e14-ad79-112dba0165a5" (UID: "a4e0044e-7ce9-4e14-ad79-112dba0165a5"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:26:36 crc kubenswrapper[4985]: I0127 09:26:36.960277 4985 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a4e0044e-7ce9-4e14-ad79-112dba0165a5-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 09:26:36 crc kubenswrapper[4985]: I0127 09:26:36.960308 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6x9jb\" (UniqueName: \"kubernetes.io/projected/a4e0044e-7ce9-4e14-ad79-112dba0165a5-kube-api-access-6x9jb\") on node \"crc\" DevicePath \"\"" Jan 27 09:26:36 crc kubenswrapper[4985]: I0127 09:26:36.960319 4985 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a4e0044e-7ce9-4e14-ad79-112dba0165a5-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 09:26:37 crc kubenswrapper[4985]: I0127 09:26:37.314647 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lpm2n" 
event={"ID":"a4e0044e-7ce9-4e14-ad79-112dba0165a5","Type":"ContainerDied","Data":"30965e60bb4fd576396c00f8af05ff54d895be76387bf629b36142a1286fabd7"} Jan 27 09:26:37 crc kubenswrapper[4985]: I0127 09:26:37.314686 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30965e60bb4fd576396c00f8af05ff54d895be76387bf629b36142a1286fabd7" Jan 27 09:26:37 crc kubenswrapper[4985]: I0127 09:26:37.314694 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lpm2n" Jan 27 09:26:37 crc kubenswrapper[4985]: I0127 09:26:37.452682 4985 scope.go:117] "RemoveContainer" containerID="7ba28a0f021e3d5eea7e45a3d43bec2dd0d775a22e805e1915a3667c64bf2c54" Jan 27 09:26:37 crc kubenswrapper[4985]: E0127 09:26:37.453054 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp9n5_openshift-machine-config-operator(c066dd2f-48d4-4f4f-935d-0e772678e610)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" Jan 27 09:26:37 crc kubenswrapper[4985]: I0127 09:26:37.481530 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds"] Jan 27 09:26:37 crc kubenswrapper[4985]: E0127 09:26:37.482271 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4e0044e-7ce9-4e14-ad79-112dba0165a5" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 27 09:26:37 crc kubenswrapper[4985]: I0127 09:26:37.482355 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4e0044e-7ce9-4e14-ad79-112dba0165a5" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 27 09:26:37 crc kubenswrapper[4985]: I0127 09:26:37.482628 4985 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="a4e0044e-7ce9-4e14-ad79-112dba0165a5" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 27 09:26:37 crc kubenswrapper[4985]: I0127 09:26:37.483452 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds" Jan 27 09:26:37 crc kubenswrapper[4985]: I0127 09:26:37.486442 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Jan 27 09:26:37 crc kubenswrapper[4985]: I0127 09:26:37.486791 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 09:26:37 crc kubenswrapper[4985]: I0127 09:26:37.486963 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-s87fp" Jan 27 09:26:37 crc kubenswrapper[4985]: I0127 09:26:37.487258 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 09:26:37 crc kubenswrapper[4985]: I0127 09:26:37.487402 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Jan 27 09:26:37 crc kubenswrapper[4985]: I0127 09:26:37.487607 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Jan 27 09:26:37 crc kubenswrapper[4985]: I0127 09:26:37.496854 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 09:26:37 crc kubenswrapper[4985]: I0127 09:26:37.497371 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Jan 27 09:26:37 crc kubenswrapper[4985]: I0127 09:26:37.538412 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds"] Jan 27 09:26:37 crc kubenswrapper[4985]: I0127 09:26:37.571707 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc2fc556-0d48-4993-a66c-a48eac7a023c-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds\" (UID: \"bc2fc556-0d48-4993-a66c-a48eac7a023c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds" Jan 27 09:26:37 crc kubenswrapper[4985]: I0127 09:26:37.571832 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bc2fc556-0d48-4993-a66c-a48eac7a023c-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds\" (UID: \"bc2fc556-0d48-4993-a66c-a48eac7a023c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds" Jan 27 09:26:37 crc kubenswrapper[4985]: I0127 09:26:37.571917 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bc2fc556-0d48-4993-a66c-a48eac7a023c-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds\" (UID: \"bc2fc556-0d48-4993-a66c-a48eac7a023c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds" Jan 27 09:26:37 crc kubenswrapper[4985]: I0127 09:26:37.571946 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc2fc556-0d48-4993-a66c-a48eac7a023c-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds\" (UID: \"bc2fc556-0d48-4993-a66c-a48eac7a023c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds" 
Jan 27 09:26:37 crc kubenswrapper[4985]: I0127 09:26:37.571974 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bc2fc556-0d48-4993-a66c-a48eac7a023c-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds\" (UID: \"bc2fc556-0d48-4993-a66c-a48eac7a023c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds" Jan 27 09:26:37 crc kubenswrapper[4985]: I0127 09:26:37.571996 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc2fc556-0d48-4993-a66c-a48eac7a023c-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds\" (UID: \"bc2fc556-0d48-4993-a66c-a48eac7a023c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds" Jan 27 09:26:37 crc kubenswrapper[4985]: I0127 09:26:37.572050 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc2fc556-0d48-4993-a66c-a48eac7a023c-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds\" (UID: \"bc2fc556-0d48-4993-a66c-a48eac7a023c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds" Jan 27 09:26:37 crc kubenswrapper[4985]: I0127 09:26:37.572081 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc2fc556-0d48-4993-a66c-a48eac7a023c-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds\" (UID: \"bc2fc556-0d48-4993-a66c-a48eac7a023c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds" Jan 27 09:26:37 crc 
kubenswrapper[4985]: I0127 09:26:37.572122 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bc2fc556-0d48-4993-a66c-a48eac7a023c-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds\" (UID: \"bc2fc556-0d48-4993-a66c-a48eac7a023c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds" Jan 27 09:26:37 crc kubenswrapper[4985]: I0127 09:26:37.572148 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc2fc556-0d48-4993-a66c-a48eac7a023c-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds\" (UID: \"bc2fc556-0d48-4993-a66c-a48eac7a023c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds" Jan 27 09:26:37 crc kubenswrapper[4985]: I0127 09:26:37.572184 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bc2fc556-0d48-4993-a66c-a48eac7a023c-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds\" (UID: \"bc2fc556-0d48-4993-a66c-a48eac7a023c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds" Jan 27 09:26:37 crc kubenswrapper[4985]: I0127 09:26:37.572236 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc2fc556-0d48-4993-a66c-a48eac7a023c-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds\" (UID: \"bc2fc556-0d48-4993-a66c-a48eac7a023c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds" Jan 
27 09:26:37 crc kubenswrapper[4985]: I0127 09:26:37.572258 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bc2fc556-0d48-4993-a66c-a48eac7a023c-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds\" (UID: \"bc2fc556-0d48-4993-a66c-a48eac7a023c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds" Jan 27 09:26:37 crc kubenswrapper[4985]: I0127 09:26:37.572296 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qph9\" (UniqueName: \"kubernetes.io/projected/bc2fc556-0d48-4993-a66c-a48eac7a023c-kube-api-access-4qph9\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds\" (UID: \"bc2fc556-0d48-4993-a66c-a48eac7a023c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds" Jan 27 09:26:37 crc kubenswrapper[4985]: I0127 09:26:37.675728 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bc2fc556-0d48-4993-a66c-a48eac7a023c-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds\" (UID: \"bc2fc556-0d48-4993-a66c-a48eac7a023c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds" Jan 27 09:26:37 crc kubenswrapper[4985]: I0127 09:26:37.675823 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bc2fc556-0d48-4993-a66c-a48eac7a023c-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds\" (UID: \"bc2fc556-0d48-4993-a66c-a48eac7a023c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds" Jan 27 09:26:37 crc kubenswrapper[4985]: I0127 09:26:37.675864 4985 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc2fc556-0d48-4993-a66c-a48eac7a023c-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds\" (UID: \"bc2fc556-0d48-4993-a66c-a48eac7a023c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds" Jan 27 09:26:37 crc kubenswrapper[4985]: I0127 09:26:37.675892 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bc2fc556-0d48-4993-a66c-a48eac7a023c-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds\" (UID: \"bc2fc556-0d48-4993-a66c-a48eac7a023c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds" Jan 27 09:26:37 crc kubenswrapper[4985]: I0127 09:26:37.675916 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc2fc556-0d48-4993-a66c-a48eac7a023c-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds\" (UID: \"bc2fc556-0d48-4993-a66c-a48eac7a023c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds" Jan 27 09:26:37 crc kubenswrapper[4985]: I0127 09:26:37.675966 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc2fc556-0d48-4993-a66c-a48eac7a023c-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds\" (UID: \"bc2fc556-0d48-4993-a66c-a48eac7a023c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds" Jan 27 09:26:37 crc kubenswrapper[4985]: I0127 09:26:37.676708 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/bc2fc556-0d48-4993-a66c-a48eac7a023c-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds\" (UID: \"bc2fc556-0d48-4993-a66c-a48eac7a023c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds" Jan 27 09:26:37 crc kubenswrapper[4985]: I0127 09:26:37.676782 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bc2fc556-0d48-4993-a66c-a48eac7a023c-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds\" (UID: \"bc2fc556-0d48-4993-a66c-a48eac7a023c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds" Jan 27 09:26:37 crc kubenswrapper[4985]: I0127 09:26:37.676817 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc2fc556-0d48-4993-a66c-a48eac7a023c-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds\" (UID: \"bc2fc556-0d48-4993-a66c-a48eac7a023c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds" Jan 27 09:26:37 crc kubenswrapper[4985]: I0127 09:26:37.676884 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bc2fc556-0d48-4993-a66c-a48eac7a023c-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds\" (UID: \"bc2fc556-0d48-4993-a66c-a48eac7a023c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds" Jan 27 09:26:37 crc kubenswrapper[4985]: I0127 09:26:37.676956 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/bc2fc556-0d48-4993-a66c-a48eac7a023c-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds\" (UID: \"bc2fc556-0d48-4993-a66c-a48eac7a023c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds" Jan 27 09:26:37 crc kubenswrapper[4985]: I0127 09:26:37.677011 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bc2fc556-0d48-4993-a66c-a48eac7a023c-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds\" (UID: \"bc2fc556-0d48-4993-a66c-a48eac7a023c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds" Jan 27 09:26:37 crc kubenswrapper[4985]: I0127 09:26:37.677042 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qph9\" (UniqueName: \"kubernetes.io/projected/bc2fc556-0d48-4993-a66c-a48eac7a023c-kube-api-access-4qph9\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds\" (UID: \"bc2fc556-0d48-4993-a66c-a48eac7a023c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds" Jan 27 09:26:37 crc kubenswrapper[4985]: I0127 09:26:37.677108 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc2fc556-0d48-4993-a66c-a48eac7a023c-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds\" (UID: \"bc2fc556-0d48-4993-a66c-a48eac7a023c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds" Jan 27 09:26:37 crc kubenswrapper[4985]: I0127 09:26:37.683156 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc2fc556-0d48-4993-a66c-a48eac7a023c-libvirt-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds\" (UID: \"bc2fc556-0d48-4993-a66c-a48eac7a023c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds" Jan 27 09:26:37 crc kubenswrapper[4985]: I0127 09:26:37.683292 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc2fc556-0d48-4993-a66c-a48eac7a023c-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds\" (UID: \"bc2fc556-0d48-4993-a66c-a48eac7a023c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds" Jan 27 09:26:37 crc kubenswrapper[4985]: I0127 09:26:37.683420 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc2fc556-0d48-4993-a66c-a48eac7a023c-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds\" (UID: \"bc2fc556-0d48-4993-a66c-a48eac7a023c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds" Jan 27 09:26:37 crc kubenswrapper[4985]: I0127 09:26:37.683981 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bc2fc556-0d48-4993-a66c-a48eac7a023c-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds\" (UID: \"bc2fc556-0d48-4993-a66c-a48eac7a023c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds" Jan 27 09:26:37 crc kubenswrapper[4985]: I0127 09:26:37.684181 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bc2fc556-0d48-4993-a66c-a48eac7a023c-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds\" (UID: \"bc2fc556-0d48-4993-a66c-a48eac7a023c\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds" Jan 27 09:26:37 crc kubenswrapper[4985]: I0127 09:26:37.685115 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bc2fc556-0d48-4993-a66c-a48eac7a023c-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds\" (UID: \"bc2fc556-0d48-4993-a66c-a48eac7a023c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds" Jan 27 09:26:37 crc kubenswrapper[4985]: I0127 09:26:37.693098 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc2fc556-0d48-4993-a66c-a48eac7a023c-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds\" (UID: \"bc2fc556-0d48-4993-a66c-a48eac7a023c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds" Jan 27 09:26:37 crc kubenswrapper[4985]: I0127 09:26:37.693108 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bc2fc556-0d48-4993-a66c-a48eac7a023c-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds\" (UID: \"bc2fc556-0d48-4993-a66c-a48eac7a023c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds" Jan 27 09:26:37 crc kubenswrapper[4985]: I0127 09:26:37.693186 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc2fc556-0d48-4993-a66c-a48eac7a023c-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds\" (UID: \"bc2fc556-0d48-4993-a66c-a48eac7a023c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds" Jan 27 09:26:37 crc kubenswrapper[4985]: I0127 09:26:37.693231 4985 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bc2fc556-0d48-4993-a66c-a48eac7a023c-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds\" (UID: \"bc2fc556-0d48-4993-a66c-a48eac7a023c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds" Jan 27 09:26:37 crc kubenswrapper[4985]: I0127 09:26:37.693972 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc2fc556-0d48-4993-a66c-a48eac7a023c-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds\" (UID: \"bc2fc556-0d48-4993-a66c-a48eac7a023c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds" Jan 27 09:26:37 crc kubenswrapper[4985]: I0127 09:26:37.695252 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc2fc556-0d48-4993-a66c-a48eac7a023c-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds\" (UID: \"bc2fc556-0d48-4993-a66c-a48eac7a023c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds" Jan 27 09:26:37 crc kubenswrapper[4985]: I0127 09:26:37.711748 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qph9\" (UniqueName: \"kubernetes.io/projected/bc2fc556-0d48-4993-a66c-a48eac7a023c-kube-api-access-4qph9\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds\" (UID: \"bc2fc556-0d48-4993-a66c-a48eac7a023c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds" Jan 27 09:26:37 crc kubenswrapper[4985]: I0127 09:26:37.712555 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/bc2fc556-0d48-4993-a66c-a48eac7a023c-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds\" (UID: \"bc2fc556-0d48-4993-a66c-a48eac7a023c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds" Jan 27 09:26:37 crc kubenswrapper[4985]: I0127 09:26:37.801308 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds" Jan 27 09:26:38 crc kubenswrapper[4985]: I0127 09:26:38.336415 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds"] Jan 27 09:26:38 crc kubenswrapper[4985]: W0127 09:26:38.338959 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc2fc556_0d48_4993_a66c_a48eac7a023c.slice/crio-88c7922506d144e9aedd9309c4f96660948e78d7a5f6fa62cc90d1ff04789482 WatchSource:0}: Error finding container 88c7922506d144e9aedd9309c4f96660948e78d7a5f6fa62cc90d1ff04789482: Status 404 returned error can't find the container with id 88c7922506d144e9aedd9309c4f96660948e78d7a5f6fa62cc90d1ff04789482 Jan 27 09:26:39 crc kubenswrapper[4985]: I0127 09:26:39.338000 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds" event={"ID":"bc2fc556-0d48-4993-a66c-a48eac7a023c","Type":"ContainerStarted","Data":"54219feb894fcf3f7321b4f446f5f9fb283b7780f42735f95dc8285087694aa2"} Jan 27 09:26:39 crc kubenswrapper[4985]: I0127 09:26:39.338650 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds" event={"ID":"bc2fc556-0d48-4993-a66c-a48eac7a023c","Type":"ContainerStarted","Data":"88c7922506d144e9aedd9309c4f96660948e78d7a5f6fa62cc90d1ff04789482"} Jan 27 09:26:39 crc kubenswrapper[4985]: I0127 09:26:39.361148 4985 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds" podStartSLOduration=1.865658071 podStartE2EDuration="2.361128018s" podCreationTimestamp="2026-01-27 09:26:37 +0000 UTC" firstStartedPulling="2026-01-27 09:26:38.341535875 +0000 UTC m=+1982.632630716" lastFinishedPulling="2026-01-27 09:26:38.837005812 +0000 UTC m=+1983.128100663" observedRunningTime="2026-01-27 09:26:39.356357206 +0000 UTC m=+1983.647452047" watchObservedRunningTime="2026-01-27 09:26:39.361128018 +0000 UTC m=+1983.652222879" Jan 27 09:26:39 crc kubenswrapper[4985]: I0127 09:26:39.497226 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tfvx4" Jan 27 09:26:39 crc kubenswrapper[4985]: I0127 09:26:39.544425 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tfvx4" Jan 27 09:26:39 crc kubenswrapper[4985]: I0127 09:26:39.738436 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tfvx4"] Jan 27 09:26:41 crc kubenswrapper[4985]: I0127 09:26:41.382287 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-tfvx4" podUID="03798f85-696d-434c-ac93-e4277e55687f" containerName="registry-server" containerID="cri-o://2987049d732e69e16597bbe962b1c3c7e6237718301f5ce87d07744a1b54b763" gracePeriod=2 Jan 27 09:26:41 crc kubenswrapper[4985]: I0127 09:26:41.813724 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tfvx4" Jan 27 09:26:41 crc kubenswrapper[4985]: I0127 09:26:41.962481 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03798f85-696d-434c-ac93-e4277e55687f-catalog-content\") pod \"03798f85-696d-434c-ac93-e4277e55687f\" (UID: \"03798f85-696d-434c-ac93-e4277e55687f\") " Jan 27 09:26:41 crc kubenswrapper[4985]: I0127 09:26:41.962945 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03798f85-696d-434c-ac93-e4277e55687f-utilities\") pod \"03798f85-696d-434c-ac93-e4277e55687f\" (UID: \"03798f85-696d-434c-ac93-e4277e55687f\") " Jan 27 09:26:41 crc kubenswrapper[4985]: I0127 09:26:41.963043 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvx6d\" (UniqueName: \"kubernetes.io/projected/03798f85-696d-434c-ac93-e4277e55687f-kube-api-access-cvx6d\") pod \"03798f85-696d-434c-ac93-e4277e55687f\" (UID: \"03798f85-696d-434c-ac93-e4277e55687f\") " Jan 27 09:26:41 crc kubenswrapper[4985]: I0127 09:26:41.963585 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03798f85-696d-434c-ac93-e4277e55687f-utilities" (OuterVolumeSpecName: "utilities") pod "03798f85-696d-434c-ac93-e4277e55687f" (UID: "03798f85-696d-434c-ac93-e4277e55687f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:26:41 crc kubenswrapper[4985]: I0127 09:26:41.973716 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03798f85-696d-434c-ac93-e4277e55687f-kube-api-access-cvx6d" (OuterVolumeSpecName: "kube-api-access-cvx6d") pod "03798f85-696d-434c-ac93-e4277e55687f" (UID: "03798f85-696d-434c-ac93-e4277e55687f"). InnerVolumeSpecName "kube-api-access-cvx6d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:26:42 crc kubenswrapper[4985]: I0127 09:26:42.066063 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cvx6d\" (UniqueName: \"kubernetes.io/projected/03798f85-696d-434c-ac93-e4277e55687f-kube-api-access-cvx6d\") on node \"crc\" DevicePath \"\"" Jan 27 09:26:42 crc kubenswrapper[4985]: I0127 09:26:42.066117 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03798f85-696d-434c-ac93-e4277e55687f-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 09:26:42 crc kubenswrapper[4985]: I0127 09:26:42.083003 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03798f85-696d-434c-ac93-e4277e55687f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "03798f85-696d-434c-ac93-e4277e55687f" (UID: "03798f85-696d-434c-ac93-e4277e55687f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:26:42 crc kubenswrapper[4985]: I0127 09:26:42.167557 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03798f85-696d-434c-ac93-e4277e55687f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 09:26:42 crc kubenswrapper[4985]: I0127 09:26:42.391453 4985 generic.go:334] "Generic (PLEG): container finished" podID="03798f85-696d-434c-ac93-e4277e55687f" containerID="2987049d732e69e16597bbe962b1c3c7e6237718301f5ce87d07744a1b54b763" exitCode=0 Jan 27 09:26:42 crc kubenswrapper[4985]: I0127 09:26:42.391501 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tfvx4" event={"ID":"03798f85-696d-434c-ac93-e4277e55687f","Type":"ContainerDied","Data":"2987049d732e69e16597bbe962b1c3c7e6237718301f5ce87d07744a1b54b763"} Jan 27 09:26:42 crc kubenswrapper[4985]: I0127 09:26:42.391549 4985 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-tfvx4" event={"ID":"03798f85-696d-434c-ac93-e4277e55687f","Type":"ContainerDied","Data":"e55b00e41f7f2f185f81089a02fccc01629b54b3893cc5243ac67c255ffc95b7"} Jan 27 09:26:42 crc kubenswrapper[4985]: I0127 09:26:42.391546 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tfvx4" Jan 27 09:26:42 crc kubenswrapper[4985]: I0127 09:26:42.391630 4985 scope.go:117] "RemoveContainer" containerID="2987049d732e69e16597bbe962b1c3c7e6237718301f5ce87d07744a1b54b763" Jan 27 09:26:42 crc kubenswrapper[4985]: I0127 09:26:42.415812 4985 scope.go:117] "RemoveContainer" containerID="8229c160fffe5b7a28fee72e801eebcdadc7cde8fa139f54d4272bb514421c3f" Jan 27 09:26:42 crc kubenswrapper[4985]: I0127 09:26:42.434188 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tfvx4"] Jan 27 09:26:42 crc kubenswrapper[4985]: I0127 09:26:42.442686 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tfvx4"] Jan 27 09:26:42 crc kubenswrapper[4985]: I0127 09:26:42.463331 4985 scope.go:117] "RemoveContainer" containerID="726b55001406ab6fe0a97e344eac7f7e7c0a58874e48ed8c87d400ee60cab1aa" Jan 27 09:26:42 crc kubenswrapper[4985]: I0127 09:26:42.463692 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03798f85-696d-434c-ac93-e4277e55687f" path="/var/lib/kubelet/pods/03798f85-696d-434c-ac93-e4277e55687f/volumes" Jan 27 09:26:42 crc kubenswrapper[4985]: I0127 09:26:42.489961 4985 scope.go:117] "RemoveContainer" containerID="2987049d732e69e16597bbe962b1c3c7e6237718301f5ce87d07744a1b54b763" Jan 27 09:26:42 crc kubenswrapper[4985]: E0127 09:26:42.490386 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2987049d732e69e16597bbe962b1c3c7e6237718301f5ce87d07744a1b54b763\": container with ID starting with 
2987049d732e69e16597bbe962b1c3c7e6237718301f5ce87d07744a1b54b763 not found: ID does not exist" containerID="2987049d732e69e16597bbe962b1c3c7e6237718301f5ce87d07744a1b54b763" Jan 27 09:26:42 crc kubenswrapper[4985]: I0127 09:26:42.490419 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2987049d732e69e16597bbe962b1c3c7e6237718301f5ce87d07744a1b54b763"} err="failed to get container status \"2987049d732e69e16597bbe962b1c3c7e6237718301f5ce87d07744a1b54b763\": rpc error: code = NotFound desc = could not find container \"2987049d732e69e16597bbe962b1c3c7e6237718301f5ce87d07744a1b54b763\": container with ID starting with 2987049d732e69e16597bbe962b1c3c7e6237718301f5ce87d07744a1b54b763 not found: ID does not exist" Jan 27 09:26:42 crc kubenswrapper[4985]: I0127 09:26:42.490442 4985 scope.go:117] "RemoveContainer" containerID="8229c160fffe5b7a28fee72e801eebcdadc7cde8fa139f54d4272bb514421c3f" Jan 27 09:26:42 crc kubenswrapper[4985]: E0127 09:26:42.490765 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8229c160fffe5b7a28fee72e801eebcdadc7cde8fa139f54d4272bb514421c3f\": container with ID starting with 8229c160fffe5b7a28fee72e801eebcdadc7cde8fa139f54d4272bb514421c3f not found: ID does not exist" containerID="8229c160fffe5b7a28fee72e801eebcdadc7cde8fa139f54d4272bb514421c3f" Jan 27 09:26:42 crc kubenswrapper[4985]: I0127 09:26:42.490824 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8229c160fffe5b7a28fee72e801eebcdadc7cde8fa139f54d4272bb514421c3f"} err="failed to get container status \"8229c160fffe5b7a28fee72e801eebcdadc7cde8fa139f54d4272bb514421c3f\": rpc error: code = NotFound desc = could not find container \"8229c160fffe5b7a28fee72e801eebcdadc7cde8fa139f54d4272bb514421c3f\": container with ID starting with 8229c160fffe5b7a28fee72e801eebcdadc7cde8fa139f54d4272bb514421c3f not found: ID does not 
exist" Jan 27 09:26:42 crc kubenswrapper[4985]: I0127 09:26:42.490852 4985 scope.go:117] "RemoveContainer" containerID="726b55001406ab6fe0a97e344eac7f7e7c0a58874e48ed8c87d400ee60cab1aa" Jan 27 09:26:42 crc kubenswrapper[4985]: E0127 09:26:42.491137 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"726b55001406ab6fe0a97e344eac7f7e7c0a58874e48ed8c87d400ee60cab1aa\": container with ID starting with 726b55001406ab6fe0a97e344eac7f7e7c0a58874e48ed8c87d400ee60cab1aa not found: ID does not exist" containerID="726b55001406ab6fe0a97e344eac7f7e7c0a58874e48ed8c87d400ee60cab1aa" Jan 27 09:26:42 crc kubenswrapper[4985]: I0127 09:26:42.491174 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"726b55001406ab6fe0a97e344eac7f7e7c0a58874e48ed8c87d400ee60cab1aa"} err="failed to get container status \"726b55001406ab6fe0a97e344eac7f7e7c0a58874e48ed8c87d400ee60cab1aa\": rpc error: code = NotFound desc = could not find container \"726b55001406ab6fe0a97e344eac7f7e7c0a58874e48ed8c87d400ee60cab1aa\": container with ID starting with 726b55001406ab6fe0a97e344eac7f7e7c0a58874e48ed8c87d400ee60cab1aa not found: ID does not exist" Jan 27 09:26:52 crc kubenswrapper[4985]: I0127 09:26:52.452576 4985 scope.go:117] "RemoveContainer" containerID="7ba28a0f021e3d5eea7e45a3d43bec2dd0d775a22e805e1915a3667c64bf2c54" Jan 27 09:26:53 crc kubenswrapper[4985]: I0127 09:26:53.512540 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" event={"ID":"c066dd2f-48d4-4f4f-935d-0e772678e610","Type":"ContainerStarted","Data":"3ddb16a4ed947c28dbf96965e7947dcb2881d19af04260b6359e84277a6f1154"} Jan 27 09:27:17 crc kubenswrapper[4985]: I0127 09:27:17.788212 4985 generic.go:334] "Generic (PLEG): container finished" podID="bc2fc556-0d48-4993-a66c-a48eac7a023c" 
containerID="54219feb894fcf3f7321b4f446f5f9fb283b7780f42735f95dc8285087694aa2" exitCode=0 Jan 27 09:27:17 crc kubenswrapper[4985]: I0127 09:27:17.788435 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds" event={"ID":"bc2fc556-0d48-4993-a66c-a48eac7a023c","Type":"ContainerDied","Data":"54219feb894fcf3f7321b4f446f5f9fb283b7780f42735f95dc8285087694aa2"} Jan 27 09:27:19 crc kubenswrapper[4985]: I0127 09:27:19.202500 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds" Jan 27 09:27:19 crc kubenswrapper[4985]: I0127 09:27:19.298594 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc2fc556-0d48-4993-a66c-a48eac7a023c-ovn-combined-ca-bundle\") pod \"bc2fc556-0d48-4993-a66c-a48eac7a023c\" (UID: \"bc2fc556-0d48-4993-a66c-a48eac7a023c\") " Jan 27 09:27:19 crc kubenswrapper[4985]: I0127 09:27:19.298664 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qph9\" (UniqueName: \"kubernetes.io/projected/bc2fc556-0d48-4993-a66c-a48eac7a023c-kube-api-access-4qph9\") pod \"bc2fc556-0d48-4993-a66c-a48eac7a023c\" (UID: \"bc2fc556-0d48-4993-a66c-a48eac7a023c\") " Jan 27 09:27:19 crc kubenswrapper[4985]: I0127 09:27:19.298722 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc2fc556-0d48-4993-a66c-a48eac7a023c-telemetry-combined-ca-bundle\") pod \"bc2fc556-0d48-4993-a66c-a48eac7a023c\" (UID: \"bc2fc556-0d48-4993-a66c-a48eac7a023c\") " Jan 27 09:27:19 crc kubenswrapper[4985]: I0127 09:27:19.298750 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/bc2fc556-0d48-4993-a66c-a48eac7a023c-libvirt-combined-ca-bundle\") pod \"bc2fc556-0d48-4993-a66c-a48eac7a023c\" (UID: \"bc2fc556-0d48-4993-a66c-a48eac7a023c\") " Jan 27 09:27:19 crc kubenswrapper[4985]: I0127 09:27:19.298774 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc2fc556-0d48-4993-a66c-a48eac7a023c-neutron-metadata-combined-ca-bundle\") pod \"bc2fc556-0d48-4993-a66c-a48eac7a023c\" (UID: \"bc2fc556-0d48-4993-a66c-a48eac7a023c\") " Jan 27 09:27:19 crc kubenswrapper[4985]: I0127 09:27:19.298808 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bc2fc556-0d48-4993-a66c-a48eac7a023c-openstack-edpm-ipam-ovn-default-certs-0\") pod \"bc2fc556-0d48-4993-a66c-a48eac7a023c\" (UID: \"bc2fc556-0d48-4993-a66c-a48eac7a023c\") " Jan 27 09:27:19 crc kubenswrapper[4985]: I0127 09:27:19.298829 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bc2fc556-0d48-4993-a66c-a48eac7a023c-inventory\") pod \"bc2fc556-0d48-4993-a66c-a48eac7a023c\" (UID: \"bc2fc556-0d48-4993-a66c-a48eac7a023c\") " Jan 27 09:27:19 crc kubenswrapper[4985]: I0127 09:27:19.298881 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bc2fc556-0d48-4993-a66c-a48eac7a023c-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"bc2fc556-0d48-4993-a66c-a48eac7a023c\" (UID: \"bc2fc556-0d48-4993-a66c-a48eac7a023c\") " Jan 27 09:27:19 crc kubenswrapper[4985]: I0127 09:27:19.298938 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/bc2fc556-0d48-4993-a66c-a48eac7a023c-ssh-key-openstack-edpm-ipam\") pod \"bc2fc556-0d48-4993-a66c-a48eac7a023c\" (UID: \"bc2fc556-0d48-4993-a66c-a48eac7a023c\") " Jan 27 09:27:19 crc kubenswrapper[4985]: I0127 09:27:19.298966 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bc2fc556-0d48-4993-a66c-a48eac7a023c-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"bc2fc556-0d48-4993-a66c-a48eac7a023c\" (UID: \"bc2fc556-0d48-4993-a66c-a48eac7a023c\") " Jan 27 09:27:19 crc kubenswrapper[4985]: I0127 09:27:19.298991 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bc2fc556-0d48-4993-a66c-a48eac7a023c-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"bc2fc556-0d48-4993-a66c-a48eac7a023c\" (UID: \"bc2fc556-0d48-4993-a66c-a48eac7a023c\") " Jan 27 09:27:19 crc kubenswrapper[4985]: I0127 09:27:19.299038 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc2fc556-0d48-4993-a66c-a48eac7a023c-nova-combined-ca-bundle\") pod \"bc2fc556-0d48-4993-a66c-a48eac7a023c\" (UID: \"bc2fc556-0d48-4993-a66c-a48eac7a023c\") " Jan 27 09:27:19 crc kubenswrapper[4985]: I0127 09:27:19.299070 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc2fc556-0d48-4993-a66c-a48eac7a023c-bootstrap-combined-ca-bundle\") pod \"bc2fc556-0d48-4993-a66c-a48eac7a023c\" (UID: \"bc2fc556-0d48-4993-a66c-a48eac7a023c\") " Jan 27 09:27:19 crc kubenswrapper[4985]: I0127 09:27:19.299142 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/bc2fc556-0d48-4993-a66c-a48eac7a023c-repo-setup-combined-ca-bundle\") pod \"bc2fc556-0d48-4993-a66c-a48eac7a023c\" (UID: \"bc2fc556-0d48-4993-a66c-a48eac7a023c\") " Jan 27 09:27:19 crc kubenswrapper[4985]: I0127 09:27:19.306801 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc2fc556-0d48-4993-a66c-a48eac7a023c-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "bc2fc556-0d48-4993-a66c-a48eac7a023c" (UID: "bc2fc556-0d48-4993-a66c-a48eac7a023c"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:27:19 crc kubenswrapper[4985]: I0127 09:27:19.306846 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc2fc556-0d48-4993-a66c-a48eac7a023c-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "bc2fc556-0d48-4993-a66c-a48eac7a023c" (UID: "bc2fc556-0d48-4993-a66c-a48eac7a023c"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:27:19 crc kubenswrapper[4985]: I0127 09:27:19.307915 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc2fc556-0d48-4993-a66c-a48eac7a023c-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "bc2fc556-0d48-4993-a66c-a48eac7a023c" (UID: "bc2fc556-0d48-4993-a66c-a48eac7a023c"). InnerVolumeSpecName "telemetry-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:27:19 crc kubenswrapper[4985]: I0127 09:27:19.308292 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc2fc556-0d48-4993-a66c-a48eac7a023c-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "bc2fc556-0d48-4993-a66c-a48eac7a023c" (UID: "bc2fc556-0d48-4993-a66c-a48eac7a023c"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:27:19 crc kubenswrapper[4985]: I0127 09:27:19.308445 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc2fc556-0d48-4993-a66c-a48eac7a023c-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "bc2fc556-0d48-4993-a66c-a48eac7a023c" (UID: "bc2fc556-0d48-4993-a66c-a48eac7a023c"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:27:19 crc kubenswrapper[4985]: I0127 09:27:19.309263 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc2fc556-0d48-4993-a66c-a48eac7a023c-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "bc2fc556-0d48-4993-a66c-a48eac7a023c" (UID: "bc2fc556-0d48-4993-a66c-a48eac7a023c"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:27:19 crc kubenswrapper[4985]: I0127 09:27:19.309283 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc2fc556-0d48-4993-a66c-a48eac7a023c-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "bc2fc556-0d48-4993-a66c-a48eac7a023c" (UID: "bc2fc556-0d48-4993-a66c-a48eac7a023c"). 
InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:27:19 crc kubenswrapper[4985]: I0127 09:27:19.310288 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc2fc556-0d48-4993-a66c-a48eac7a023c-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "bc2fc556-0d48-4993-a66c-a48eac7a023c" (UID: "bc2fc556-0d48-4993-a66c-a48eac7a023c"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:27:19 crc kubenswrapper[4985]: I0127 09:27:19.310595 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc2fc556-0d48-4993-a66c-a48eac7a023c-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "bc2fc556-0d48-4993-a66c-a48eac7a023c" (UID: "bc2fc556-0d48-4993-a66c-a48eac7a023c"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:27:19 crc kubenswrapper[4985]: I0127 09:27:19.311138 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc2fc556-0d48-4993-a66c-a48eac7a023c-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "bc2fc556-0d48-4993-a66c-a48eac7a023c" (UID: "bc2fc556-0d48-4993-a66c-a48eac7a023c"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:27:19 crc kubenswrapper[4985]: I0127 09:27:19.311186 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc2fc556-0d48-4993-a66c-a48eac7a023c-kube-api-access-4qph9" (OuterVolumeSpecName: "kube-api-access-4qph9") pod "bc2fc556-0d48-4993-a66c-a48eac7a023c" (UID: "bc2fc556-0d48-4993-a66c-a48eac7a023c"). 
InnerVolumeSpecName "kube-api-access-4qph9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:27:19 crc kubenswrapper[4985]: I0127 09:27:19.312097 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc2fc556-0d48-4993-a66c-a48eac7a023c-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "bc2fc556-0d48-4993-a66c-a48eac7a023c" (UID: "bc2fc556-0d48-4993-a66c-a48eac7a023c"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:27:19 crc kubenswrapper[4985]: I0127 09:27:19.334048 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc2fc556-0d48-4993-a66c-a48eac7a023c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "bc2fc556-0d48-4993-a66c-a48eac7a023c" (UID: "bc2fc556-0d48-4993-a66c-a48eac7a023c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:27:19 crc kubenswrapper[4985]: I0127 09:27:19.352987 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc2fc556-0d48-4993-a66c-a48eac7a023c-inventory" (OuterVolumeSpecName: "inventory") pod "bc2fc556-0d48-4993-a66c-a48eac7a023c" (UID: "bc2fc556-0d48-4993-a66c-a48eac7a023c"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:27:19 crc kubenswrapper[4985]: I0127 09:27:19.402085 4985 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc2fc556-0d48-4993-a66c-a48eac7a023c-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:27:19 crc kubenswrapper[4985]: I0127 09:27:19.402418 4985 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc2fc556-0d48-4993-a66c-a48eac7a023c-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:27:19 crc kubenswrapper[4985]: I0127 09:27:19.402557 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qph9\" (UniqueName: \"kubernetes.io/projected/bc2fc556-0d48-4993-a66c-a48eac7a023c-kube-api-access-4qph9\") on node \"crc\" DevicePath \"\"" Jan 27 09:27:19 crc kubenswrapper[4985]: I0127 09:27:19.403060 4985 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc2fc556-0d48-4993-a66c-a48eac7a023c-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:27:19 crc kubenswrapper[4985]: I0127 09:27:19.403143 4985 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc2fc556-0d48-4993-a66c-a48eac7a023c-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:27:19 crc kubenswrapper[4985]: I0127 09:27:19.403211 4985 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc2fc556-0d48-4993-a66c-a48eac7a023c-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:27:19 crc kubenswrapper[4985]: I0127 09:27:19.403328 4985 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/bc2fc556-0d48-4993-a66c-a48eac7a023c-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 27 09:27:19 crc kubenswrapper[4985]: I0127 09:27:19.403412 4985 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bc2fc556-0d48-4993-a66c-a48eac7a023c-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 09:27:19 crc kubenswrapper[4985]: I0127 09:27:19.403481 4985 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bc2fc556-0d48-4993-a66c-a48eac7a023c-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 27 09:27:19 crc kubenswrapper[4985]: I0127 09:27:19.403583 4985 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bc2fc556-0d48-4993-a66c-a48eac7a023c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 09:27:19 crc kubenswrapper[4985]: I0127 09:27:19.403659 4985 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bc2fc556-0d48-4993-a66c-a48eac7a023c-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 27 09:27:19 crc kubenswrapper[4985]: I0127 09:27:19.403735 4985 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bc2fc556-0d48-4993-a66c-a48eac7a023c-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 27 09:27:19 crc kubenswrapper[4985]: I0127 09:27:19.403805 4985 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc2fc556-0d48-4993-a66c-a48eac7a023c-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:27:19 crc 
kubenswrapper[4985]: I0127 09:27:19.403878 4985 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc2fc556-0d48-4993-a66c-a48eac7a023c-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:27:19 crc kubenswrapper[4985]: I0127 09:27:19.807616 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds" event={"ID":"bc2fc556-0d48-4993-a66c-a48eac7a023c","Type":"ContainerDied","Data":"88c7922506d144e9aedd9309c4f96660948e78d7a5f6fa62cc90d1ff04789482"} Jan 27 09:27:19 crc kubenswrapper[4985]: I0127 09:27:19.807915 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88c7922506d144e9aedd9309c4f96660948e78d7a5f6fa62cc90d1ff04789482" Jan 27 09:27:19 crc kubenswrapper[4985]: I0127 09:27:19.807662 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds" Jan 27 09:27:19 crc kubenswrapper[4985]: I0127 09:27:19.906144 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-rmw8d"] Jan 27 09:27:19 crc kubenswrapper[4985]: E0127 09:27:19.906657 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc2fc556-0d48-4993-a66c-a48eac7a023c" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 27 09:27:19 crc kubenswrapper[4985]: I0127 09:27:19.906691 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc2fc556-0d48-4993-a66c-a48eac7a023c" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 27 09:27:19 crc kubenswrapper[4985]: E0127 09:27:19.906710 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03798f85-696d-434c-ac93-e4277e55687f" containerName="extract-content" Jan 27 09:27:19 crc kubenswrapper[4985]: I0127 09:27:19.906717 4985 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="03798f85-696d-434c-ac93-e4277e55687f" containerName="extract-content" Jan 27 09:27:19 crc kubenswrapper[4985]: E0127 09:27:19.906738 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03798f85-696d-434c-ac93-e4277e55687f" containerName="extract-utilities" Jan 27 09:27:19 crc kubenswrapper[4985]: I0127 09:27:19.906748 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="03798f85-696d-434c-ac93-e4277e55687f" containerName="extract-utilities" Jan 27 09:27:19 crc kubenswrapper[4985]: E0127 09:27:19.906772 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03798f85-696d-434c-ac93-e4277e55687f" containerName="registry-server" Jan 27 09:27:19 crc kubenswrapper[4985]: I0127 09:27:19.906780 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="03798f85-696d-434c-ac93-e4277e55687f" containerName="registry-server" Jan 27 09:27:19 crc kubenswrapper[4985]: I0127 09:27:19.906984 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="03798f85-696d-434c-ac93-e4277e55687f" containerName="registry-server" Jan 27 09:27:19 crc kubenswrapper[4985]: I0127 09:27:19.907017 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc2fc556-0d48-4993-a66c-a48eac7a023c" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 27 09:27:19 crc kubenswrapper[4985]: I0127 09:27:19.907963 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rmw8d" Jan 27 09:27:19 crc kubenswrapper[4985]: I0127 09:27:19.911028 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 09:27:19 crc kubenswrapper[4985]: I0127 09:27:19.911338 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 09:27:19 crc kubenswrapper[4985]: I0127 09:27:19.911428 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-s87fp" Jan 27 09:27:19 crc kubenswrapper[4985]: I0127 09:27:19.912492 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Jan 27 09:27:19 crc kubenswrapper[4985]: I0127 09:27:19.912660 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 09:27:19 crc kubenswrapper[4985]: I0127 09:27:19.917763 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-rmw8d"] Jan 27 09:27:20 crc kubenswrapper[4985]: I0127 09:27:20.014203 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/db95adac-f4d3-476c-9273-82e0991a7dd2-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rmw8d\" (UID: \"db95adac-f4d3-476c-9273-82e0991a7dd2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rmw8d" Jan 27 09:27:20 crc kubenswrapper[4985]: I0127 09:27:20.014304 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dmbl\" (UniqueName: \"kubernetes.io/projected/db95adac-f4d3-476c-9273-82e0991a7dd2-kube-api-access-8dmbl\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rmw8d\" (UID: 
\"db95adac-f4d3-476c-9273-82e0991a7dd2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rmw8d" Jan 27 09:27:20 crc kubenswrapper[4985]: I0127 09:27:20.014392 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db95adac-f4d3-476c-9273-82e0991a7dd2-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rmw8d\" (UID: \"db95adac-f4d3-476c-9273-82e0991a7dd2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rmw8d" Jan 27 09:27:20 crc kubenswrapper[4985]: I0127 09:27:20.014679 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/db95adac-f4d3-476c-9273-82e0991a7dd2-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rmw8d\" (UID: \"db95adac-f4d3-476c-9273-82e0991a7dd2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rmw8d" Jan 27 09:27:20 crc kubenswrapper[4985]: I0127 09:27:20.015043 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/db95adac-f4d3-476c-9273-82e0991a7dd2-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rmw8d\" (UID: \"db95adac-f4d3-476c-9273-82e0991a7dd2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rmw8d" Jan 27 09:27:20 crc kubenswrapper[4985]: I0127 09:27:20.117176 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/db95adac-f4d3-476c-9273-82e0991a7dd2-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rmw8d\" (UID: \"db95adac-f4d3-476c-9273-82e0991a7dd2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rmw8d" Jan 27 09:27:20 crc kubenswrapper[4985]: I0127 09:27:20.117256 4985 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/db95adac-f4d3-476c-9273-82e0991a7dd2-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rmw8d\" (UID: \"db95adac-f4d3-476c-9273-82e0991a7dd2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rmw8d" Jan 27 09:27:20 crc kubenswrapper[4985]: I0127 09:27:20.117285 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dmbl\" (UniqueName: \"kubernetes.io/projected/db95adac-f4d3-476c-9273-82e0991a7dd2-kube-api-access-8dmbl\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rmw8d\" (UID: \"db95adac-f4d3-476c-9273-82e0991a7dd2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rmw8d" Jan 27 09:27:20 crc kubenswrapper[4985]: I0127 09:27:20.117313 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db95adac-f4d3-476c-9273-82e0991a7dd2-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rmw8d\" (UID: \"db95adac-f4d3-476c-9273-82e0991a7dd2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rmw8d" Jan 27 09:27:20 crc kubenswrapper[4985]: I0127 09:27:20.117364 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/db95adac-f4d3-476c-9273-82e0991a7dd2-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rmw8d\" (UID: \"db95adac-f4d3-476c-9273-82e0991a7dd2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rmw8d" Jan 27 09:27:20 crc kubenswrapper[4985]: I0127 09:27:20.118321 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/db95adac-f4d3-476c-9273-82e0991a7dd2-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rmw8d\" (UID: \"db95adac-f4d3-476c-9273-82e0991a7dd2\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rmw8d" Jan 27 09:27:20 crc kubenswrapper[4985]: I0127 09:27:20.122149 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/db95adac-f4d3-476c-9273-82e0991a7dd2-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rmw8d\" (UID: \"db95adac-f4d3-476c-9273-82e0991a7dd2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rmw8d" Jan 27 09:27:20 crc kubenswrapper[4985]: I0127 09:27:20.122490 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db95adac-f4d3-476c-9273-82e0991a7dd2-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rmw8d\" (UID: \"db95adac-f4d3-476c-9273-82e0991a7dd2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rmw8d" Jan 27 09:27:20 crc kubenswrapper[4985]: I0127 09:27:20.126450 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/db95adac-f4d3-476c-9273-82e0991a7dd2-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rmw8d\" (UID: \"db95adac-f4d3-476c-9273-82e0991a7dd2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rmw8d" Jan 27 09:27:20 crc kubenswrapper[4985]: I0127 09:27:20.133778 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dmbl\" (UniqueName: \"kubernetes.io/projected/db95adac-f4d3-476c-9273-82e0991a7dd2-kube-api-access-8dmbl\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rmw8d\" (UID: \"db95adac-f4d3-476c-9273-82e0991a7dd2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rmw8d" Jan 27 09:27:20 crc kubenswrapper[4985]: I0127 09:27:20.233806 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rmw8d" Jan 27 09:27:20 crc kubenswrapper[4985]: I0127 09:27:20.772369 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-rmw8d"] Jan 27 09:27:20 crc kubenswrapper[4985]: I0127 09:27:20.818791 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rmw8d" event={"ID":"db95adac-f4d3-476c-9273-82e0991a7dd2","Type":"ContainerStarted","Data":"3f52e814f357303d315e5c6e8564cd1b78d46d7222d1e3bddf8e33c6b5202c4d"} Jan 27 09:27:21 crc kubenswrapper[4985]: I0127 09:27:21.828232 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rmw8d" event={"ID":"db95adac-f4d3-476c-9273-82e0991a7dd2","Type":"ContainerStarted","Data":"68c7d83959982f30ae943d4d423e4d1594f85cdd064853e906d9ab36025aa9c4"} Jan 27 09:27:21 crc kubenswrapper[4985]: I0127 09:27:21.846933 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rmw8d" podStartSLOduration=2.301425991 podStartE2EDuration="2.846914633s" podCreationTimestamp="2026-01-27 09:27:19 +0000 UTC" firstStartedPulling="2026-01-27 09:27:20.77390699 +0000 UTC m=+2025.065001831" lastFinishedPulling="2026-01-27 09:27:21.319395632 +0000 UTC m=+2025.610490473" observedRunningTime="2026-01-27 09:27:21.845249807 +0000 UTC m=+2026.136344648" watchObservedRunningTime="2026-01-27 09:27:21.846914633 +0000 UTC m=+2026.138009474" Jan 27 09:28:27 crc kubenswrapper[4985]: I0127 09:28:27.608835 4985 generic.go:334] "Generic (PLEG): container finished" podID="db95adac-f4d3-476c-9273-82e0991a7dd2" containerID="68c7d83959982f30ae943d4d423e4d1594f85cdd064853e906d9ab36025aa9c4" exitCode=0 Jan 27 09:28:27 crc kubenswrapper[4985]: I0127 09:28:27.608915 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rmw8d" event={"ID":"db95adac-f4d3-476c-9273-82e0991a7dd2","Type":"ContainerDied","Data":"68c7d83959982f30ae943d4d423e4d1594f85cdd064853e906d9ab36025aa9c4"} Jan 27 09:28:29 crc kubenswrapper[4985]: I0127 09:28:29.102728 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rmw8d" Jan 27 09:28:29 crc kubenswrapper[4985]: I0127 09:28:29.184355 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db95adac-f4d3-476c-9273-82e0991a7dd2-ovn-combined-ca-bundle\") pod \"db95adac-f4d3-476c-9273-82e0991a7dd2\" (UID: \"db95adac-f4d3-476c-9273-82e0991a7dd2\") " Jan 27 09:28:29 crc kubenswrapper[4985]: I0127 09:28:29.184728 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/db95adac-f4d3-476c-9273-82e0991a7dd2-inventory\") pod \"db95adac-f4d3-476c-9273-82e0991a7dd2\" (UID: \"db95adac-f4d3-476c-9273-82e0991a7dd2\") " Jan 27 09:28:29 crc kubenswrapper[4985]: I0127 09:28:29.185379 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/db95adac-f4d3-476c-9273-82e0991a7dd2-ovncontroller-config-0\") pod \"db95adac-f4d3-476c-9273-82e0991a7dd2\" (UID: \"db95adac-f4d3-476c-9273-82e0991a7dd2\") " Jan 27 09:28:29 crc kubenswrapper[4985]: I0127 09:28:29.185577 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/db95adac-f4d3-476c-9273-82e0991a7dd2-ssh-key-openstack-edpm-ipam\") pod \"db95adac-f4d3-476c-9273-82e0991a7dd2\" (UID: \"db95adac-f4d3-476c-9273-82e0991a7dd2\") " Jan 27 09:28:29 crc kubenswrapper[4985]: I0127 09:28:29.185655 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-8dmbl\" (UniqueName: \"kubernetes.io/projected/db95adac-f4d3-476c-9273-82e0991a7dd2-kube-api-access-8dmbl\") pod \"db95adac-f4d3-476c-9273-82e0991a7dd2\" (UID: \"db95adac-f4d3-476c-9273-82e0991a7dd2\") " Jan 27 09:28:29 crc kubenswrapper[4985]: I0127 09:28:29.198021 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db95adac-f4d3-476c-9273-82e0991a7dd2-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "db95adac-f4d3-476c-9273-82e0991a7dd2" (UID: "db95adac-f4d3-476c-9273-82e0991a7dd2"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:28:29 crc kubenswrapper[4985]: I0127 09:28:29.229108 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db95adac-f4d3-476c-9273-82e0991a7dd2-kube-api-access-8dmbl" (OuterVolumeSpecName: "kube-api-access-8dmbl") pod "db95adac-f4d3-476c-9273-82e0991a7dd2" (UID: "db95adac-f4d3-476c-9273-82e0991a7dd2"). InnerVolumeSpecName "kube-api-access-8dmbl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:28:29 crc kubenswrapper[4985]: I0127 09:28:29.283851 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db95adac-f4d3-476c-9273-82e0991a7dd2-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "db95adac-f4d3-476c-9273-82e0991a7dd2" (UID: "db95adac-f4d3-476c-9273-82e0991a7dd2"). InnerVolumeSpecName "ovncontroller-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:28:29 crc kubenswrapper[4985]: I0127 09:28:29.288293 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8dmbl\" (UniqueName: \"kubernetes.io/projected/db95adac-f4d3-476c-9273-82e0991a7dd2-kube-api-access-8dmbl\") on node \"crc\" DevicePath \"\"" Jan 27 09:28:29 crc kubenswrapper[4985]: I0127 09:28:29.288331 4985 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db95adac-f4d3-476c-9273-82e0991a7dd2-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:28:29 crc kubenswrapper[4985]: I0127 09:28:29.288344 4985 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/db95adac-f4d3-476c-9273-82e0991a7dd2-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Jan 27 09:28:29 crc kubenswrapper[4985]: I0127 09:28:29.290191 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db95adac-f4d3-476c-9273-82e0991a7dd2-inventory" (OuterVolumeSpecName: "inventory") pod "db95adac-f4d3-476c-9273-82e0991a7dd2" (UID: "db95adac-f4d3-476c-9273-82e0991a7dd2"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:28:29 crc kubenswrapper[4985]: I0127 09:28:29.290233 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db95adac-f4d3-476c-9273-82e0991a7dd2-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "db95adac-f4d3-476c-9273-82e0991a7dd2" (UID: "db95adac-f4d3-476c-9273-82e0991a7dd2"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:28:29 crc kubenswrapper[4985]: I0127 09:28:29.390346 4985 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/db95adac-f4d3-476c-9273-82e0991a7dd2-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 09:28:29 crc kubenswrapper[4985]: I0127 09:28:29.390389 4985 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/db95adac-f4d3-476c-9273-82e0991a7dd2-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 09:28:29 crc kubenswrapper[4985]: I0127 09:28:29.632433 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rmw8d" event={"ID":"db95adac-f4d3-476c-9273-82e0991a7dd2","Type":"ContainerDied","Data":"3f52e814f357303d315e5c6e8564cd1b78d46d7222d1e3bddf8e33c6b5202c4d"} Jan 27 09:28:29 crc kubenswrapper[4985]: I0127 09:28:29.632781 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f52e814f357303d315e5c6e8564cd1b78d46d7222d1e3bddf8e33c6b5202c4d" Jan 27 09:28:29 crc kubenswrapper[4985]: I0127 09:28:29.632485 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rmw8d" Jan 27 09:28:29 crc kubenswrapper[4985]: I0127 09:28:29.730929 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nbxdt"] Jan 27 09:28:29 crc kubenswrapper[4985]: E0127 09:28:29.731388 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db95adac-f4d3-476c-9273-82e0991a7dd2" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 27 09:28:29 crc kubenswrapper[4985]: I0127 09:28:29.731410 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="db95adac-f4d3-476c-9273-82e0991a7dd2" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 27 09:28:29 crc kubenswrapper[4985]: I0127 09:28:29.731708 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="db95adac-f4d3-476c-9273-82e0991a7dd2" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 27 09:28:29 crc kubenswrapper[4985]: I0127 09:28:29.732443 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nbxdt" Jan 27 09:28:29 crc kubenswrapper[4985]: I0127 09:28:29.735152 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Jan 27 09:28:29 crc kubenswrapper[4985]: I0127 09:28:29.735172 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-s87fp" Jan 27 09:28:29 crc kubenswrapper[4985]: I0127 09:28:29.735475 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 09:28:29 crc kubenswrapper[4985]: I0127 09:28:29.736220 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Jan 27 09:28:29 crc kubenswrapper[4985]: I0127 09:28:29.737009 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 09:28:29 crc kubenswrapper[4985]: I0127 09:28:29.737800 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 09:28:29 crc kubenswrapper[4985]: I0127 09:28:29.769431 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nbxdt"] Jan 27 09:28:29 crc kubenswrapper[4985]: I0127 09:28:29.795625 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0cb09792-906c-423f-8567-cc13f8f3d403-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-nbxdt\" (UID: \"0cb09792-906c-423f-8567-cc13f8f3d403\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nbxdt" Jan 27 09:28:29 crc kubenswrapper[4985]: I0127 09:28:29.795682 4985 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0cb09792-906c-423f-8567-cc13f8f3d403-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-nbxdt\" (UID: \"0cb09792-906c-423f-8567-cc13f8f3d403\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nbxdt" Jan 27 09:28:29 crc kubenswrapper[4985]: I0127 09:28:29.795747 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0cb09792-906c-423f-8567-cc13f8f3d403-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-nbxdt\" (UID: \"0cb09792-906c-423f-8567-cc13f8f3d403\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nbxdt" Jan 27 09:28:29 crc kubenswrapper[4985]: I0127 09:28:29.795800 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7njq6\" (UniqueName: \"kubernetes.io/projected/0cb09792-906c-423f-8567-cc13f8f3d403-kube-api-access-7njq6\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-nbxdt\" (UID: \"0cb09792-906c-423f-8567-cc13f8f3d403\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nbxdt" Jan 27 09:28:29 crc kubenswrapper[4985]: I0127 09:28:29.795836 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cb09792-906c-423f-8567-cc13f8f3d403-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-nbxdt\" (UID: \"0cb09792-906c-423f-8567-cc13f8f3d403\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nbxdt" Jan 27 09:28:29 crc kubenswrapper[4985]: I0127 09:28:29.795857 4985 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0cb09792-906c-423f-8567-cc13f8f3d403-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-nbxdt\" (UID: \"0cb09792-906c-423f-8567-cc13f8f3d403\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nbxdt" Jan 27 09:28:29 crc kubenswrapper[4985]: I0127 09:28:29.897104 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7njq6\" (UniqueName: \"kubernetes.io/projected/0cb09792-906c-423f-8567-cc13f8f3d403-kube-api-access-7njq6\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-nbxdt\" (UID: \"0cb09792-906c-423f-8567-cc13f8f3d403\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nbxdt" Jan 27 09:28:29 crc kubenswrapper[4985]: I0127 09:28:29.897237 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cb09792-906c-423f-8567-cc13f8f3d403-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-nbxdt\" (UID: \"0cb09792-906c-423f-8567-cc13f8f3d403\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nbxdt" Jan 27 09:28:29 crc kubenswrapper[4985]: I0127 09:28:29.897274 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0cb09792-906c-423f-8567-cc13f8f3d403-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-nbxdt\" (UID: \"0cb09792-906c-423f-8567-cc13f8f3d403\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nbxdt" Jan 27 09:28:29 crc kubenswrapper[4985]: I0127 09:28:29.897328 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: 
\"kubernetes.io/secret/0cb09792-906c-423f-8567-cc13f8f3d403-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-nbxdt\" (UID: \"0cb09792-906c-423f-8567-cc13f8f3d403\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nbxdt" Jan 27 09:28:29 crc kubenswrapper[4985]: I0127 09:28:29.897376 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0cb09792-906c-423f-8567-cc13f8f3d403-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-nbxdt\" (UID: \"0cb09792-906c-423f-8567-cc13f8f3d403\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nbxdt" Jan 27 09:28:29 crc kubenswrapper[4985]: I0127 09:28:29.897457 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0cb09792-906c-423f-8567-cc13f8f3d403-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-nbxdt\" (UID: \"0cb09792-906c-423f-8567-cc13f8f3d403\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nbxdt" Jan 27 09:28:29 crc kubenswrapper[4985]: I0127 09:28:29.908404 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cb09792-906c-423f-8567-cc13f8f3d403-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-nbxdt\" (UID: \"0cb09792-906c-423f-8567-cc13f8f3d403\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nbxdt" Jan 27 09:28:29 crc kubenswrapper[4985]: I0127 09:28:29.908438 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0cb09792-906c-423f-8567-cc13f8f3d403-inventory\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-nbxdt\" (UID: \"0cb09792-906c-423f-8567-cc13f8f3d403\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nbxdt" Jan 27 09:28:29 crc kubenswrapper[4985]: I0127 09:28:29.908441 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0cb09792-906c-423f-8567-cc13f8f3d403-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-nbxdt\" (UID: \"0cb09792-906c-423f-8567-cc13f8f3d403\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nbxdt" Jan 27 09:28:29 crc kubenswrapper[4985]: I0127 09:28:29.908555 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0cb09792-906c-423f-8567-cc13f8f3d403-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-nbxdt\" (UID: \"0cb09792-906c-423f-8567-cc13f8f3d403\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nbxdt" Jan 27 09:28:29 crc kubenswrapper[4985]: I0127 09:28:29.908852 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0cb09792-906c-423f-8567-cc13f8f3d403-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-nbxdt\" (UID: \"0cb09792-906c-423f-8567-cc13f8f3d403\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nbxdt" Jan 27 09:28:29 crc kubenswrapper[4985]: I0127 09:28:29.915030 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7njq6\" (UniqueName: \"kubernetes.io/projected/0cb09792-906c-423f-8567-cc13f8f3d403-kube-api-access-7njq6\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-nbxdt\" (UID: \"0cb09792-906c-423f-8567-cc13f8f3d403\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nbxdt" Jan 27 09:28:30 crc kubenswrapper[4985]: I0127 09:28:30.057252 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nbxdt" Jan 27 09:28:30 crc kubenswrapper[4985]: I0127 09:28:30.569558 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nbxdt"] Jan 27 09:28:30 crc kubenswrapper[4985]: I0127 09:28:30.642322 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nbxdt" event={"ID":"0cb09792-906c-423f-8567-cc13f8f3d403","Type":"ContainerStarted","Data":"3978a1229817268b5343c1d8b70a9182637f22424755ad1e10d4b260e7fb1645"} Jan 27 09:28:31 crc kubenswrapper[4985]: I0127 09:28:31.651314 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nbxdt" event={"ID":"0cb09792-906c-423f-8567-cc13f8f3d403","Type":"ContainerStarted","Data":"42f138bae8bbc95aca1e5b0088df1f8bacfa9550f8d23ab8f7d0d5e88810de62"} Jan 27 09:28:31 crc kubenswrapper[4985]: I0127 09:28:31.669504 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nbxdt" podStartSLOduration=1.966659226 podStartE2EDuration="2.669484879s" podCreationTimestamp="2026-01-27 09:28:29 +0000 UTC" firstStartedPulling="2026-01-27 09:28:30.578284878 +0000 UTC m=+2094.869379719" lastFinishedPulling="2026-01-27 09:28:31.281110491 +0000 UTC m=+2095.572205372" observedRunningTime="2026-01-27 09:28:31.665943723 +0000 UTC m=+2095.957038574" watchObservedRunningTime="2026-01-27 09:28:31.669484879 +0000 UTC m=+2095.960579720" Jan 27 09:28:55 crc kubenswrapper[4985]: I0127 09:28:55.829465 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pspzg"] 
Jan 27 09:28:55 crc kubenswrapper[4985]: I0127 09:28:55.832467 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pspzg" Jan 27 09:28:55 crc kubenswrapper[4985]: I0127 09:28:55.868238 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pspzg"] Jan 27 09:28:55 crc kubenswrapper[4985]: I0127 09:28:55.967273 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51dc284e-05f8-495c-8994-a8bf565ea270-utilities\") pod \"redhat-marketplace-pspzg\" (UID: \"51dc284e-05f8-495c-8994-a8bf565ea270\") " pod="openshift-marketplace/redhat-marketplace-pspzg" Jan 27 09:28:55 crc kubenswrapper[4985]: I0127 09:28:55.967341 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkbgz\" (UniqueName: \"kubernetes.io/projected/51dc284e-05f8-495c-8994-a8bf565ea270-kube-api-access-lkbgz\") pod \"redhat-marketplace-pspzg\" (UID: \"51dc284e-05f8-495c-8994-a8bf565ea270\") " pod="openshift-marketplace/redhat-marketplace-pspzg" Jan 27 09:28:55 crc kubenswrapper[4985]: I0127 09:28:55.967567 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51dc284e-05f8-495c-8994-a8bf565ea270-catalog-content\") pod \"redhat-marketplace-pspzg\" (UID: \"51dc284e-05f8-495c-8994-a8bf565ea270\") " pod="openshift-marketplace/redhat-marketplace-pspzg" Jan 27 09:28:56 crc kubenswrapper[4985]: I0127 09:28:56.069242 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51dc284e-05f8-495c-8994-a8bf565ea270-utilities\") pod \"redhat-marketplace-pspzg\" (UID: \"51dc284e-05f8-495c-8994-a8bf565ea270\") " pod="openshift-marketplace/redhat-marketplace-pspzg" Jan 27 
09:28:56 crc kubenswrapper[4985]: I0127 09:28:56.069289 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkbgz\" (UniqueName: \"kubernetes.io/projected/51dc284e-05f8-495c-8994-a8bf565ea270-kube-api-access-lkbgz\") pod \"redhat-marketplace-pspzg\" (UID: \"51dc284e-05f8-495c-8994-a8bf565ea270\") " pod="openshift-marketplace/redhat-marketplace-pspzg" Jan 27 09:28:56 crc kubenswrapper[4985]: I0127 09:28:56.069338 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51dc284e-05f8-495c-8994-a8bf565ea270-catalog-content\") pod \"redhat-marketplace-pspzg\" (UID: \"51dc284e-05f8-495c-8994-a8bf565ea270\") " pod="openshift-marketplace/redhat-marketplace-pspzg" Jan 27 09:28:56 crc kubenswrapper[4985]: I0127 09:28:56.069826 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51dc284e-05f8-495c-8994-a8bf565ea270-utilities\") pod \"redhat-marketplace-pspzg\" (UID: \"51dc284e-05f8-495c-8994-a8bf565ea270\") " pod="openshift-marketplace/redhat-marketplace-pspzg" Jan 27 09:28:56 crc kubenswrapper[4985]: I0127 09:28:56.070106 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51dc284e-05f8-495c-8994-a8bf565ea270-catalog-content\") pod \"redhat-marketplace-pspzg\" (UID: \"51dc284e-05f8-495c-8994-a8bf565ea270\") " pod="openshift-marketplace/redhat-marketplace-pspzg" Jan 27 09:28:56 crc kubenswrapper[4985]: I0127 09:28:56.090529 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkbgz\" (UniqueName: \"kubernetes.io/projected/51dc284e-05f8-495c-8994-a8bf565ea270-kube-api-access-lkbgz\") pod \"redhat-marketplace-pspzg\" (UID: \"51dc284e-05f8-495c-8994-a8bf565ea270\") " pod="openshift-marketplace/redhat-marketplace-pspzg" Jan 27 09:28:56 crc kubenswrapper[4985]: 
I0127 09:28:56.166849 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pspzg" Jan 27 09:28:56 crc kubenswrapper[4985]: I0127 09:28:56.693361 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pspzg"] Jan 27 09:28:56 crc kubenswrapper[4985]: I0127 09:28:56.904423 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pspzg" event={"ID":"51dc284e-05f8-495c-8994-a8bf565ea270","Type":"ContainerStarted","Data":"bbe53761f7a8a2da2ade981b0436cf0dcb3c6e9a1422f03636336ed64d6c154c"} Jan 27 09:28:57 crc kubenswrapper[4985]: I0127 09:28:57.913856 4985 generic.go:334] "Generic (PLEG): container finished" podID="51dc284e-05f8-495c-8994-a8bf565ea270" containerID="98af1a861bb52db88e125d317486be439dfb102bd65f0f74361d053df94db297" exitCode=0 Jan 27 09:28:57 crc kubenswrapper[4985]: I0127 09:28:57.914150 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pspzg" event={"ID":"51dc284e-05f8-495c-8994-a8bf565ea270","Type":"ContainerDied","Data":"98af1a861bb52db88e125d317486be439dfb102bd65f0f74361d053df94db297"} Jan 27 09:28:58 crc kubenswrapper[4985]: I0127 09:28:58.931827 4985 generic.go:334] "Generic (PLEG): container finished" podID="51dc284e-05f8-495c-8994-a8bf565ea270" containerID="2508cbbf3de00851b72ca98bbc330806b32c54ad45f7c0e4b7cbcae56ca3774f" exitCode=0 Jan 27 09:28:58 crc kubenswrapper[4985]: I0127 09:28:58.931977 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pspzg" event={"ID":"51dc284e-05f8-495c-8994-a8bf565ea270","Type":"ContainerDied","Data":"2508cbbf3de00851b72ca98bbc330806b32c54ad45f7c0e4b7cbcae56ca3774f"} Jan 27 09:28:59 crc kubenswrapper[4985]: I0127 09:28:59.945222 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pspzg" 
event={"ID":"51dc284e-05f8-495c-8994-a8bf565ea270","Type":"ContainerStarted","Data":"07385cab8b5bb7e35faa74e0479a4ab6a3d2d267dbffba0f2cca92dabea4b297"} Jan 27 09:28:59 crc kubenswrapper[4985]: I0127 09:28:59.972544 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-pspzg" podStartSLOduration=3.523701568 podStartE2EDuration="4.972494914s" podCreationTimestamp="2026-01-27 09:28:55 +0000 UTC" firstStartedPulling="2026-01-27 09:28:57.916400832 +0000 UTC m=+2122.207495673" lastFinishedPulling="2026-01-27 09:28:59.365194178 +0000 UTC m=+2123.656289019" observedRunningTime="2026-01-27 09:28:59.967676792 +0000 UTC m=+2124.258771643" watchObservedRunningTime="2026-01-27 09:28:59.972494914 +0000 UTC m=+2124.263589765" Jan 27 09:29:06 crc kubenswrapper[4985]: I0127 09:29:06.167168 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pspzg" Jan 27 09:29:06 crc kubenswrapper[4985]: I0127 09:29:06.167992 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-pspzg" Jan 27 09:29:06 crc kubenswrapper[4985]: I0127 09:29:06.219222 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pspzg" Jan 27 09:29:07 crc kubenswrapper[4985]: I0127 09:29:07.073205 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pspzg" Jan 27 09:29:07 crc kubenswrapper[4985]: I0127 09:29:07.131407 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pspzg"] Jan 27 09:29:09 crc kubenswrapper[4985]: I0127 09:29:09.036679 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-pspzg" podUID="51dc284e-05f8-495c-8994-a8bf565ea270" containerName="registry-server" 
containerID="cri-o://07385cab8b5bb7e35faa74e0479a4ab6a3d2d267dbffba0f2cca92dabea4b297" gracePeriod=2 Jan 27 09:29:10 crc kubenswrapper[4985]: I0127 09:29:10.039981 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pspzg" Jan 27 09:29:10 crc kubenswrapper[4985]: I0127 09:29:10.047922 4985 generic.go:334] "Generic (PLEG): container finished" podID="51dc284e-05f8-495c-8994-a8bf565ea270" containerID="07385cab8b5bb7e35faa74e0479a4ab6a3d2d267dbffba0f2cca92dabea4b297" exitCode=0 Jan 27 09:29:10 crc kubenswrapper[4985]: I0127 09:29:10.047969 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pspzg" Jan 27 09:29:10 crc kubenswrapper[4985]: I0127 09:29:10.047988 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pspzg" event={"ID":"51dc284e-05f8-495c-8994-a8bf565ea270","Type":"ContainerDied","Data":"07385cab8b5bb7e35faa74e0479a4ab6a3d2d267dbffba0f2cca92dabea4b297"} Jan 27 09:29:10 crc kubenswrapper[4985]: I0127 09:29:10.048401 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pspzg" event={"ID":"51dc284e-05f8-495c-8994-a8bf565ea270","Type":"ContainerDied","Data":"bbe53761f7a8a2da2ade981b0436cf0dcb3c6e9a1422f03636336ed64d6c154c"} Jan 27 09:29:10 crc kubenswrapper[4985]: I0127 09:29:10.048446 4985 scope.go:117] "RemoveContainer" containerID="07385cab8b5bb7e35faa74e0479a4ab6a3d2d267dbffba0f2cca92dabea4b297" Jan 27 09:29:10 crc kubenswrapper[4985]: I0127 09:29:10.071955 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51dc284e-05f8-495c-8994-a8bf565ea270-catalog-content\") pod \"51dc284e-05f8-495c-8994-a8bf565ea270\" (UID: \"51dc284e-05f8-495c-8994-a8bf565ea270\") " Jan 27 09:29:10 crc kubenswrapper[4985]: I0127 09:29:10.072293 4985 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51dc284e-05f8-495c-8994-a8bf565ea270-utilities\") pod \"51dc284e-05f8-495c-8994-a8bf565ea270\" (UID: \"51dc284e-05f8-495c-8994-a8bf565ea270\") " Jan 27 09:29:10 crc kubenswrapper[4985]: I0127 09:29:10.072389 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lkbgz\" (UniqueName: \"kubernetes.io/projected/51dc284e-05f8-495c-8994-a8bf565ea270-kube-api-access-lkbgz\") pod \"51dc284e-05f8-495c-8994-a8bf565ea270\" (UID: \"51dc284e-05f8-495c-8994-a8bf565ea270\") " Jan 27 09:29:10 crc kubenswrapper[4985]: I0127 09:29:10.074270 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51dc284e-05f8-495c-8994-a8bf565ea270-utilities" (OuterVolumeSpecName: "utilities") pod "51dc284e-05f8-495c-8994-a8bf565ea270" (UID: "51dc284e-05f8-495c-8994-a8bf565ea270"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:29:10 crc kubenswrapper[4985]: I0127 09:29:10.076348 4985 scope.go:117] "RemoveContainer" containerID="2508cbbf3de00851b72ca98bbc330806b32c54ad45f7c0e4b7cbcae56ca3774f" Jan 27 09:29:10 crc kubenswrapper[4985]: I0127 09:29:10.090889 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51dc284e-05f8-495c-8994-a8bf565ea270-kube-api-access-lkbgz" (OuterVolumeSpecName: "kube-api-access-lkbgz") pod "51dc284e-05f8-495c-8994-a8bf565ea270" (UID: "51dc284e-05f8-495c-8994-a8bf565ea270"). InnerVolumeSpecName "kube-api-access-lkbgz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:29:10 crc kubenswrapper[4985]: I0127 09:29:10.099374 4985 scope.go:117] "RemoveContainer" containerID="98af1a861bb52db88e125d317486be439dfb102bd65f0f74361d053df94db297" Jan 27 09:29:10 crc kubenswrapper[4985]: I0127 09:29:10.111142 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51dc284e-05f8-495c-8994-a8bf565ea270-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "51dc284e-05f8-495c-8994-a8bf565ea270" (UID: "51dc284e-05f8-495c-8994-a8bf565ea270"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:29:10 crc kubenswrapper[4985]: I0127 09:29:10.174486 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51dc284e-05f8-495c-8994-a8bf565ea270-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 09:29:10 crc kubenswrapper[4985]: I0127 09:29:10.174535 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lkbgz\" (UniqueName: \"kubernetes.io/projected/51dc284e-05f8-495c-8994-a8bf565ea270-kube-api-access-lkbgz\") on node \"crc\" DevicePath \"\"" Jan 27 09:29:10 crc kubenswrapper[4985]: I0127 09:29:10.174545 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51dc284e-05f8-495c-8994-a8bf565ea270-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 09:29:10 crc kubenswrapper[4985]: I0127 09:29:10.182462 4985 scope.go:117] "RemoveContainer" containerID="07385cab8b5bb7e35faa74e0479a4ab6a3d2d267dbffba0f2cca92dabea4b297" Jan 27 09:29:10 crc kubenswrapper[4985]: E0127 09:29:10.183016 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07385cab8b5bb7e35faa74e0479a4ab6a3d2d267dbffba0f2cca92dabea4b297\": container with ID starting with 
07385cab8b5bb7e35faa74e0479a4ab6a3d2d267dbffba0f2cca92dabea4b297 not found: ID does not exist" containerID="07385cab8b5bb7e35faa74e0479a4ab6a3d2d267dbffba0f2cca92dabea4b297" Jan 27 09:29:10 crc kubenswrapper[4985]: I0127 09:29:10.183098 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07385cab8b5bb7e35faa74e0479a4ab6a3d2d267dbffba0f2cca92dabea4b297"} err="failed to get container status \"07385cab8b5bb7e35faa74e0479a4ab6a3d2d267dbffba0f2cca92dabea4b297\": rpc error: code = NotFound desc = could not find container \"07385cab8b5bb7e35faa74e0479a4ab6a3d2d267dbffba0f2cca92dabea4b297\": container with ID starting with 07385cab8b5bb7e35faa74e0479a4ab6a3d2d267dbffba0f2cca92dabea4b297 not found: ID does not exist" Jan 27 09:29:10 crc kubenswrapper[4985]: I0127 09:29:10.183158 4985 scope.go:117] "RemoveContainer" containerID="2508cbbf3de00851b72ca98bbc330806b32c54ad45f7c0e4b7cbcae56ca3774f" Jan 27 09:29:10 crc kubenswrapper[4985]: E0127 09:29:10.183530 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2508cbbf3de00851b72ca98bbc330806b32c54ad45f7c0e4b7cbcae56ca3774f\": container with ID starting with 2508cbbf3de00851b72ca98bbc330806b32c54ad45f7c0e4b7cbcae56ca3774f not found: ID does not exist" containerID="2508cbbf3de00851b72ca98bbc330806b32c54ad45f7c0e4b7cbcae56ca3774f" Jan 27 09:29:10 crc kubenswrapper[4985]: I0127 09:29:10.183575 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2508cbbf3de00851b72ca98bbc330806b32c54ad45f7c0e4b7cbcae56ca3774f"} err="failed to get container status \"2508cbbf3de00851b72ca98bbc330806b32c54ad45f7c0e4b7cbcae56ca3774f\": rpc error: code = NotFound desc = could not find container \"2508cbbf3de00851b72ca98bbc330806b32c54ad45f7c0e4b7cbcae56ca3774f\": container with ID starting with 2508cbbf3de00851b72ca98bbc330806b32c54ad45f7c0e4b7cbcae56ca3774f not found: ID does not 
exist" Jan 27 09:29:10 crc kubenswrapper[4985]: I0127 09:29:10.183608 4985 scope.go:117] "RemoveContainer" containerID="98af1a861bb52db88e125d317486be439dfb102bd65f0f74361d053df94db297" Jan 27 09:29:10 crc kubenswrapper[4985]: E0127 09:29:10.184052 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98af1a861bb52db88e125d317486be439dfb102bd65f0f74361d053df94db297\": container with ID starting with 98af1a861bb52db88e125d317486be439dfb102bd65f0f74361d053df94db297 not found: ID does not exist" containerID="98af1a861bb52db88e125d317486be439dfb102bd65f0f74361d053df94db297" Jan 27 09:29:10 crc kubenswrapper[4985]: I0127 09:29:10.184085 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98af1a861bb52db88e125d317486be439dfb102bd65f0f74361d053df94db297"} err="failed to get container status \"98af1a861bb52db88e125d317486be439dfb102bd65f0f74361d053df94db297\": rpc error: code = NotFound desc = could not find container \"98af1a861bb52db88e125d317486be439dfb102bd65f0f74361d053df94db297\": container with ID starting with 98af1a861bb52db88e125d317486be439dfb102bd65f0f74361d053df94db297 not found: ID does not exist" Jan 27 09:29:10 crc kubenswrapper[4985]: I0127 09:29:10.383317 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pspzg"] Jan 27 09:29:10 crc kubenswrapper[4985]: I0127 09:29:10.390233 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-pspzg"] Jan 27 09:29:10 crc kubenswrapper[4985]: I0127 09:29:10.463002 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51dc284e-05f8-495c-8994-a8bf565ea270" path="/var/lib/kubelet/pods/51dc284e-05f8-495c-8994-a8bf565ea270/volumes" Jan 27 09:29:11 crc kubenswrapper[4985]: I0127 09:29:11.828652 4985 patch_prober.go:28] interesting pod/machine-config-daemon-lp9n5 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 09:29:11 crc kubenswrapper[4985]: I0127 09:29:11.829299 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 09:29:22 crc kubenswrapper[4985]: I0127 09:29:22.164509 4985 generic.go:334] "Generic (PLEG): container finished" podID="0cb09792-906c-423f-8567-cc13f8f3d403" containerID="42f138bae8bbc95aca1e5b0088df1f8bacfa9550f8d23ab8f7d0d5e88810de62" exitCode=0 Jan 27 09:29:22 crc kubenswrapper[4985]: I0127 09:29:22.164577 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nbxdt" event={"ID":"0cb09792-906c-423f-8567-cc13f8f3d403","Type":"ContainerDied","Data":"42f138bae8bbc95aca1e5b0088df1f8bacfa9550f8d23ab8f7d0d5e88810de62"} Jan 27 09:29:23 crc kubenswrapper[4985]: I0127 09:29:23.620129 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nbxdt" Jan 27 09:29:23 crc kubenswrapper[4985]: I0127 09:29:23.671206 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0cb09792-906c-423f-8567-cc13f8f3d403-neutron-ovn-metadata-agent-neutron-config-0\") pod \"0cb09792-906c-423f-8567-cc13f8f3d403\" (UID: \"0cb09792-906c-423f-8567-cc13f8f3d403\") " Jan 27 09:29:23 crc kubenswrapper[4985]: I0127 09:29:23.671290 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0cb09792-906c-423f-8567-cc13f8f3d403-ssh-key-openstack-edpm-ipam\") pod \"0cb09792-906c-423f-8567-cc13f8f3d403\" (UID: \"0cb09792-906c-423f-8567-cc13f8f3d403\") " Jan 27 09:29:23 crc kubenswrapper[4985]: I0127 09:29:23.671329 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0cb09792-906c-423f-8567-cc13f8f3d403-nova-metadata-neutron-config-0\") pod \"0cb09792-906c-423f-8567-cc13f8f3d403\" (UID: \"0cb09792-906c-423f-8567-cc13f8f3d403\") " Jan 27 09:29:23 crc kubenswrapper[4985]: I0127 09:29:23.671384 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cb09792-906c-423f-8567-cc13f8f3d403-neutron-metadata-combined-ca-bundle\") pod \"0cb09792-906c-423f-8567-cc13f8f3d403\" (UID: \"0cb09792-906c-423f-8567-cc13f8f3d403\") " Jan 27 09:29:23 crc kubenswrapper[4985]: I0127 09:29:23.671445 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7njq6\" (UniqueName: \"kubernetes.io/projected/0cb09792-906c-423f-8567-cc13f8f3d403-kube-api-access-7njq6\") pod \"0cb09792-906c-423f-8567-cc13f8f3d403\" (UID: 
\"0cb09792-906c-423f-8567-cc13f8f3d403\") " Jan 27 09:29:23 crc kubenswrapper[4985]: I0127 09:29:23.671588 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0cb09792-906c-423f-8567-cc13f8f3d403-inventory\") pod \"0cb09792-906c-423f-8567-cc13f8f3d403\" (UID: \"0cb09792-906c-423f-8567-cc13f8f3d403\") " Jan 27 09:29:23 crc kubenswrapper[4985]: I0127 09:29:23.695184 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0cb09792-906c-423f-8567-cc13f8f3d403-kube-api-access-7njq6" (OuterVolumeSpecName: "kube-api-access-7njq6") pod "0cb09792-906c-423f-8567-cc13f8f3d403" (UID: "0cb09792-906c-423f-8567-cc13f8f3d403"). InnerVolumeSpecName "kube-api-access-7njq6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:29:23 crc kubenswrapper[4985]: I0127 09:29:23.696594 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cb09792-906c-423f-8567-cc13f8f3d403-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "0cb09792-906c-423f-8567-cc13f8f3d403" (UID: "0cb09792-906c-423f-8567-cc13f8f3d403"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:29:23 crc kubenswrapper[4985]: I0127 09:29:23.709012 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cb09792-906c-423f-8567-cc13f8f3d403-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "0cb09792-906c-423f-8567-cc13f8f3d403" (UID: "0cb09792-906c-423f-8567-cc13f8f3d403"). InnerVolumeSpecName "nova-metadata-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:29:23 crc kubenswrapper[4985]: I0127 09:29:23.714967 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cb09792-906c-423f-8567-cc13f8f3d403-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "0cb09792-906c-423f-8567-cc13f8f3d403" (UID: "0cb09792-906c-423f-8567-cc13f8f3d403"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:29:23 crc kubenswrapper[4985]: I0127 09:29:23.716594 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cb09792-906c-423f-8567-cc13f8f3d403-inventory" (OuterVolumeSpecName: "inventory") pod "0cb09792-906c-423f-8567-cc13f8f3d403" (UID: "0cb09792-906c-423f-8567-cc13f8f3d403"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:29:23 crc kubenswrapper[4985]: I0127 09:29:23.722731 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cb09792-906c-423f-8567-cc13f8f3d403-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0cb09792-906c-423f-8567-cc13f8f3d403" (UID: "0cb09792-906c-423f-8567-cc13f8f3d403"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:29:23 crc kubenswrapper[4985]: I0127 09:29:23.775053 4985 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0cb09792-906c-423f-8567-cc13f8f3d403-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 09:29:23 crc kubenswrapper[4985]: I0127 09:29:23.775681 4985 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0cb09792-906c-423f-8567-cc13f8f3d403-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 27 09:29:23 crc kubenswrapper[4985]: I0127 09:29:23.775707 4985 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0cb09792-906c-423f-8567-cc13f8f3d403-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 09:29:23 crc kubenswrapper[4985]: I0127 09:29:23.775729 4985 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0cb09792-906c-423f-8567-cc13f8f3d403-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 27 09:29:23 crc kubenswrapper[4985]: I0127 09:29:23.775749 4985 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cb09792-906c-423f-8567-cc13f8f3d403-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:29:23 crc kubenswrapper[4985]: I0127 09:29:23.775769 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7njq6\" (UniqueName: \"kubernetes.io/projected/0cb09792-906c-423f-8567-cc13f8f3d403-kube-api-access-7njq6\") on node \"crc\" DevicePath \"\"" Jan 27 09:29:24 crc kubenswrapper[4985]: I0127 09:29:24.188106 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nbxdt" event={"ID":"0cb09792-906c-423f-8567-cc13f8f3d403","Type":"ContainerDied","Data":"3978a1229817268b5343c1d8b70a9182637f22424755ad1e10d4b260e7fb1645"} Jan 27 09:29:24 crc kubenswrapper[4985]: I0127 09:29:24.188164 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3978a1229817268b5343c1d8b70a9182637f22424755ad1e10d4b260e7fb1645" Jan 27 09:29:24 crc kubenswrapper[4985]: I0127 09:29:24.188258 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nbxdt" Jan 27 09:29:24 crc kubenswrapper[4985]: I0127 09:29:24.443296 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-lx8b7"] Jan 27 09:29:24 crc kubenswrapper[4985]: E0127 09:29:24.444389 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51dc284e-05f8-495c-8994-a8bf565ea270" containerName="extract-content" Jan 27 09:29:24 crc kubenswrapper[4985]: I0127 09:29:24.444558 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="51dc284e-05f8-495c-8994-a8bf565ea270" containerName="extract-content" Jan 27 09:29:24 crc kubenswrapper[4985]: E0127 09:29:24.444690 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cb09792-906c-423f-8567-cc13f8f3d403" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 27 09:29:24 crc kubenswrapper[4985]: I0127 09:29:24.444781 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cb09792-906c-423f-8567-cc13f8f3d403" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 27 09:29:24 crc kubenswrapper[4985]: E0127 09:29:24.444908 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51dc284e-05f8-495c-8994-a8bf565ea270" containerName="registry-server" Jan 27 09:29:24 crc kubenswrapper[4985]: I0127 09:29:24.444985 4985 
state_mem.go:107] "Deleted CPUSet assignment" podUID="51dc284e-05f8-495c-8994-a8bf565ea270" containerName="registry-server" Jan 27 09:29:24 crc kubenswrapper[4985]: E0127 09:29:24.445066 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51dc284e-05f8-495c-8994-a8bf565ea270" containerName="extract-utilities" Jan 27 09:29:24 crc kubenswrapper[4985]: I0127 09:29:24.445139 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="51dc284e-05f8-495c-8994-a8bf565ea270" containerName="extract-utilities" Jan 27 09:29:24 crc kubenswrapper[4985]: I0127 09:29:24.445562 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="51dc284e-05f8-495c-8994-a8bf565ea270" containerName="registry-server" Jan 27 09:29:24 crc kubenswrapper[4985]: I0127 09:29:24.445711 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="0cb09792-906c-423f-8567-cc13f8f3d403" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 27 09:29:24 crc kubenswrapper[4985]: I0127 09:29:24.446841 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-lx8b7" Jan 27 09:29:24 crc kubenswrapper[4985]: I0127 09:29:24.451345 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 09:29:24 crc kubenswrapper[4985]: I0127 09:29:24.451365 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 09:29:24 crc kubenswrapper[4985]: I0127 09:29:24.451769 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 09:29:24 crc kubenswrapper[4985]: I0127 09:29:24.454404 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-s87fp" Jan 27 09:29:24 crc kubenswrapper[4985]: I0127 09:29:24.456026 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Jan 27 09:29:24 crc kubenswrapper[4985]: I0127 09:29:24.467198 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-lx8b7"] Jan 27 09:29:24 crc kubenswrapper[4985]: I0127 09:29:24.490676 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b45b73b0-334f-456f-9a7e-be4337f5a0d1-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-lx8b7\" (UID: \"b45b73b0-334f-456f-9a7e-be4337f5a0d1\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-lx8b7" Jan 27 09:29:24 crc kubenswrapper[4985]: I0127 09:29:24.490860 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/b45b73b0-334f-456f-9a7e-be4337f5a0d1-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-lx8b7\" (UID: \"b45b73b0-334f-456f-9a7e-be4337f5a0d1\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-lx8b7" Jan 27 09:29:24 crc kubenswrapper[4985]: I0127 09:29:24.490913 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b45b73b0-334f-456f-9a7e-be4337f5a0d1-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-lx8b7\" (UID: \"b45b73b0-334f-456f-9a7e-be4337f5a0d1\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-lx8b7" Jan 27 09:29:24 crc kubenswrapper[4985]: I0127 09:29:24.490945 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b45b73b0-334f-456f-9a7e-be4337f5a0d1-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-lx8b7\" (UID: \"b45b73b0-334f-456f-9a7e-be4337f5a0d1\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-lx8b7" Jan 27 09:29:24 crc kubenswrapper[4985]: I0127 09:29:24.491068 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xdm7\" (UniqueName: \"kubernetes.io/projected/b45b73b0-334f-456f-9a7e-be4337f5a0d1-kube-api-access-8xdm7\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-lx8b7\" (UID: \"b45b73b0-334f-456f-9a7e-be4337f5a0d1\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-lx8b7" Jan 27 09:29:24 crc kubenswrapper[4985]: I0127 09:29:24.593007 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/b45b73b0-334f-456f-9a7e-be4337f5a0d1-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-lx8b7\" (UID: \"b45b73b0-334f-456f-9a7e-be4337f5a0d1\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-lx8b7" Jan 27 09:29:24 crc kubenswrapper[4985]: I0127 09:29:24.593064 4985 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b45b73b0-334f-456f-9a7e-be4337f5a0d1-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-lx8b7\" (UID: \"b45b73b0-334f-456f-9a7e-be4337f5a0d1\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-lx8b7" Jan 27 09:29:24 crc kubenswrapper[4985]: I0127 09:29:24.593091 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b45b73b0-334f-456f-9a7e-be4337f5a0d1-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-lx8b7\" (UID: \"b45b73b0-334f-456f-9a7e-be4337f5a0d1\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-lx8b7" Jan 27 09:29:24 crc kubenswrapper[4985]: I0127 09:29:24.593159 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xdm7\" (UniqueName: \"kubernetes.io/projected/b45b73b0-334f-456f-9a7e-be4337f5a0d1-kube-api-access-8xdm7\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-lx8b7\" (UID: \"b45b73b0-334f-456f-9a7e-be4337f5a0d1\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-lx8b7" Jan 27 09:29:24 crc kubenswrapper[4985]: I0127 09:29:24.593197 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b45b73b0-334f-456f-9a7e-be4337f5a0d1-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-lx8b7\" (UID: \"b45b73b0-334f-456f-9a7e-be4337f5a0d1\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-lx8b7" Jan 27 09:29:24 crc kubenswrapper[4985]: I0127 09:29:24.598857 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/b45b73b0-334f-456f-9a7e-be4337f5a0d1-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-lx8b7\" (UID: 
\"b45b73b0-334f-456f-9a7e-be4337f5a0d1\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-lx8b7" Jan 27 09:29:24 crc kubenswrapper[4985]: I0127 09:29:24.599174 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b45b73b0-334f-456f-9a7e-be4337f5a0d1-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-lx8b7\" (UID: \"b45b73b0-334f-456f-9a7e-be4337f5a0d1\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-lx8b7" Jan 27 09:29:24 crc kubenswrapper[4985]: I0127 09:29:24.599380 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b45b73b0-334f-456f-9a7e-be4337f5a0d1-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-lx8b7\" (UID: \"b45b73b0-334f-456f-9a7e-be4337f5a0d1\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-lx8b7" Jan 27 09:29:24 crc kubenswrapper[4985]: I0127 09:29:24.606324 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b45b73b0-334f-456f-9a7e-be4337f5a0d1-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-lx8b7\" (UID: \"b45b73b0-334f-456f-9a7e-be4337f5a0d1\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-lx8b7" Jan 27 09:29:24 crc kubenswrapper[4985]: I0127 09:29:24.621133 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xdm7\" (UniqueName: \"kubernetes.io/projected/b45b73b0-334f-456f-9a7e-be4337f5a0d1-kube-api-access-8xdm7\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-lx8b7\" (UID: \"b45b73b0-334f-456f-9a7e-be4337f5a0d1\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-lx8b7" Jan 27 09:29:24 crc kubenswrapper[4985]: I0127 09:29:24.770973 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-lx8b7" Jan 27 09:29:25 crc kubenswrapper[4985]: I0127 09:29:25.346772 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-lx8b7"] Jan 27 09:29:26 crc kubenswrapper[4985]: I0127 09:29:26.218786 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-lx8b7" event={"ID":"b45b73b0-334f-456f-9a7e-be4337f5a0d1","Type":"ContainerStarted","Data":"37864be275227dc244ff6c8a10160876d353eae9185b13863c241498d7795602"} Jan 27 09:29:26 crc kubenswrapper[4985]: I0127 09:29:26.219393 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-lx8b7" event={"ID":"b45b73b0-334f-456f-9a7e-be4337f5a0d1","Type":"ContainerStarted","Data":"abee74acca3dc2887201e9d143e3224eb8c1b835255667cbcffc23bdfb41d1f9"} Jan 27 09:29:26 crc kubenswrapper[4985]: I0127 09:29:26.242178 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-lx8b7" podStartSLOduration=1.819450529 podStartE2EDuration="2.242161008s" podCreationTimestamp="2026-01-27 09:29:24 +0000 UTC" firstStartedPulling="2026-01-27 09:29:25.35473201 +0000 UTC m=+2149.645826851" lastFinishedPulling="2026-01-27 09:29:25.777442479 +0000 UTC m=+2150.068537330" observedRunningTime="2026-01-27 09:29:26.2360071 +0000 UTC m=+2150.527101951" watchObservedRunningTime="2026-01-27 09:29:26.242161008 +0000 UTC m=+2150.533255849" Jan 27 09:29:37 crc kubenswrapper[4985]: I0127 09:29:37.594721 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-64mns"] Jan 27 09:29:37 crc kubenswrapper[4985]: I0127 09:29:37.597386 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-64mns" Jan 27 09:29:37 crc kubenswrapper[4985]: I0127 09:29:37.624378 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-64mns"] Jan 27 09:29:37 crc kubenswrapper[4985]: I0127 09:29:37.703097 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vz678\" (UniqueName: \"kubernetes.io/projected/14a7808a-d2e1-4d97-8f09-18e65c93d913-kube-api-access-vz678\") pod \"community-operators-64mns\" (UID: \"14a7808a-d2e1-4d97-8f09-18e65c93d913\") " pod="openshift-marketplace/community-operators-64mns" Jan 27 09:29:37 crc kubenswrapper[4985]: I0127 09:29:37.703455 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14a7808a-d2e1-4d97-8f09-18e65c93d913-catalog-content\") pod \"community-operators-64mns\" (UID: \"14a7808a-d2e1-4d97-8f09-18e65c93d913\") " pod="openshift-marketplace/community-operators-64mns" Jan 27 09:29:37 crc kubenswrapper[4985]: I0127 09:29:37.703549 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14a7808a-d2e1-4d97-8f09-18e65c93d913-utilities\") pod \"community-operators-64mns\" (UID: \"14a7808a-d2e1-4d97-8f09-18e65c93d913\") " pod="openshift-marketplace/community-operators-64mns" Jan 27 09:29:37 crc kubenswrapper[4985]: I0127 09:29:37.805537 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vz678\" (UniqueName: \"kubernetes.io/projected/14a7808a-d2e1-4d97-8f09-18e65c93d913-kube-api-access-vz678\") pod \"community-operators-64mns\" (UID: \"14a7808a-d2e1-4d97-8f09-18e65c93d913\") " pod="openshift-marketplace/community-operators-64mns" Jan 27 09:29:37 crc kubenswrapper[4985]: I0127 09:29:37.805646 4985 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14a7808a-d2e1-4d97-8f09-18e65c93d913-catalog-content\") pod \"community-operators-64mns\" (UID: \"14a7808a-d2e1-4d97-8f09-18e65c93d913\") " pod="openshift-marketplace/community-operators-64mns" Jan 27 09:29:37 crc kubenswrapper[4985]: I0127 09:29:37.805777 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14a7808a-d2e1-4d97-8f09-18e65c93d913-utilities\") pod \"community-operators-64mns\" (UID: \"14a7808a-d2e1-4d97-8f09-18e65c93d913\") " pod="openshift-marketplace/community-operators-64mns" Jan 27 09:29:37 crc kubenswrapper[4985]: I0127 09:29:37.806494 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14a7808a-d2e1-4d97-8f09-18e65c93d913-catalog-content\") pod \"community-operators-64mns\" (UID: \"14a7808a-d2e1-4d97-8f09-18e65c93d913\") " pod="openshift-marketplace/community-operators-64mns" Jan 27 09:29:37 crc kubenswrapper[4985]: I0127 09:29:37.806574 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14a7808a-d2e1-4d97-8f09-18e65c93d913-utilities\") pod \"community-operators-64mns\" (UID: \"14a7808a-d2e1-4d97-8f09-18e65c93d913\") " pod="openshift-marketplace/community-operators-64mns" Jan 27 09:29:37 crc kubenswrapper[4985]: I0127 09:29:37.834388 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vz678\" (UniqueName: \"kubernetes.io/projected/14a7808a-d2e1-4d97-8f09-18e65c93d913-kube-api-access-vz678\") pod \"community-operators-64mns\" (UID: \"14a7808a-d2e1-4d97-8f09-18e65c93d913\") " pod="openshift-marketplace/community-operators-64mns" Jan 27 09:29:37 crc kubenswrapper[4985]: I0127 09:29:37.921906 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-64mns" Jan 27 09:29:38 crc kubenswrapper[4985]: I0127 09:29:38.350624 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-64mns"] Jan 27 09:29:39 crc kubenswrapper[4985]: I0127 09:29:39.358387 4985 generic.go:334] "Generic (PLEG): container finished" podID="14a7808a-d2e1-4d97-8f09-18e65c93d913" containerID="d5977f4dc77cf78f51536568b77f229315f51ae23ef6c6f3e9acd3e2d3c64df6" exitCode=0 Jan 27 09:29:39 crc kubenswrapper[4985]: I0127 09:29:39.359235 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-64mns" event={"ID":"14a7808a-d2e1-4d97-8f09-18e65c93d913","Type":"ContainerDied","Data":"d5977f4dc77cf78f51536568b77f229315f51ae23ef6c6f3e9acd3e2d3c64df6"} Jan 27 09:29:39 crc kubenswrapper[4985]: I0127 09:29:39.359274 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-64mns" event={"ID":"14a7808a-d2e1-4d97-8f09-18e65c93d913","Type":"ContainerStarted","Data":"58210027dd4afe8e3d723c8af16c50288b6d01ffd792dfe8e5ecdfa08a7de4d4"} Jan 27 09:29:40 crc kubenswrapper[4985]: I0127 09:29:40.372667 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-64mns" event={"ID":"14a7808a-d2e1-4d97-8f09-18e65c93d913","Type":"ContainerStarted","Data":"04ab9928180d1ad16ff4828d20985f83643d76943c87a3b8a49b662fdf03b5db"} Jan 27 09:29:41 crc kubenswrapper[4985]: I0127 09:29:41.384285 4985 generic.go:334] "Generic (PLEG): container finished" podID="14a7808a-d2e1-4d97-8f09-18e65c93d913" containerID="04ab9928180d1ad16ff4828d20985f83643d76943c87a3b8a49b662fdf03b5db" exitCode=0 Jan 27 09:29:41 crc kubenswrapper[4985]: I0127 09:29:41.384390 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-64mns" 
event={"ID":"14a7808a-d2e1-4d97-8f09-18e65c93d913","Type":"ContainerDied","Data":"04ab9928180d1ad16ff4828d20985f83643d76943c87a3b8a49b662fdf03b5db"} Jan 27 09:29:41 crc kubenswrapper[4985]: I0127 09:29:41.828264 4985 patch_prober.go:28] interesting pod/machine-config-daemon-lp9n5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 09:29:41 crc kubenswrapper[4985]: I0127 09:29:41.828649 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 09:29:42 crc kubenswrapper[4985]: I0127 09:29:42.401356 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-64mns" event={"ID":"14a7808a-d2e1-4d97-8f09-18e65c93d913","Type":"ContainerStarted","Data":"01092ab187f42cc0763da24317abb1bcd407922d1f7ff52124ca457d46f3e1b1"} Jan 27 09:29:42 crc kubenswrapper[4985]: I0127 09:29:42.421057 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-64mns" podStartSLOduration=2.888749503 podStartE2EDuration="5.42104286s" podCreationTimestamp="2026-01-27 09:29:37 +0000 UTC" firstStartedPulling="2026-01-27 09:29:39.360928215 +0000 UTC m=+2163.652023056" lastFinishedPulling="2026-01-27 09:29:41.893221542 +0000 UTC m=+2166.184316413" observedRunningTime="2026-01-27 09:29:42.420061173 +0000 UTC m=+2166.711156014" watchObservedRunningTime="2026-01-27 09:29:42.42104286 +0000 UTC m=+2166.712137701" Jan 27 09:29:47 crc kubenswrapper[4985]: I0127 09:29:47.922810 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/community-operators-64mns" Jan 27 09:29:47 crc kubenswrapper[4985]: I0127 09:29:47.923444 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-64mns" Jan 27 09:29:47 crc kubenswrapper[4985]: I0127 09:29:47.975245 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-64mns" Jan 27 09:29:48 crc kubenswrapper[4985]: I0127 09:29:48.514032 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-64mns" Jan 27 09:29:48 crc kubenswrapper[4985]: I0127 09:29:48.567637 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-64mns"] Jan 27 09:29:50 crc kubenswrapper[4985]: I0127 09:29:50.469470 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-64mns" podUID="14a7808a-d2e1-4d97-8f09-18e65c93d913" containerName="registry-server" containerID="cri-o://01092ab187f42cc0763da24317abb1bcd407922d1f7ff52124ca457d46f3e1b1" gracePeriod=2 Jan 27 09:29:51 crc kubenswrapper[4985]: I0127 09:29:51.483131 4985 generic.go:334] "Generic (PLEG): container finished" podID="14a7808a-d2e1-4d97-8f09-18e65c93d913" containerID="01092ab187f42cc0763da24317abb1bcd407922d1f7ff52124ca457d46f3e1b1" exitCode=0 Jan 27 09:29:51 crc kubenswrapper[4985]: I0127 09:29:51.483472 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-64mns" event={"ID":"14a7808a-d2e1-4d97-8f09-18e65c93d913","Type":"ContainerDied","Data":"01092ab187f42cc0763da24317abb1bcd407922d1f7ff52124ca457d46f3e1b1"} Jan 27 09:29:52 crc kubenswrapper[4985]: I0127 09:29:52.235310 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-64mns" Jan 27 09:29:52 crc kubenswrapper[4985]: I0127 09:29:52.357707 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14a7808a-d2e1-4d97-8f09-18e65c93d913-catalog-content\") pod \"14a7808a-d2e1-4d97-8f09-18e65c93d913\" (UID: \"14a7808a-d2e1-4d97-8f09-18e65c93d913\") " Jan 27 09:29:52 crc kubenswrapper[4985]: I0127 09:29:52.357775 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vz678\" (UniqueName: \"kubernetes.io/projected/14a7808a-d2e1-4d97-8f09-18e65c93d913-kube-api-access-vz678\") pod \"14a7808a-d2e1-4d97-8f09-18e65c93d913\" (UID: \"14a7808a-d2e1-4d97-8f09-18e65c93d913\") " Jan 27 09:29:52 crc kubenswrapper[4985]: I0127 09:29:52.357814 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14a7808a-d2e1-4d97-8f09-18e65c93d913-utilities\") pod \"14a7808a-d2e1-4d97-8f09-18e65c93d913\" (UID: \"14a7808a-d2e1-4d97-8f09-18e65c93d913\") " Jan 27 09:29:52 crc kubenswrapper[4985]: I0127 09:29:52.359303 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14a7808a-d2e1-4d97-8f09-18e65c93d913-utilities" (OuterVolumeSpecName: "utilities") pod "14a7808a-d2e1-4d97-8f09-18e65c93d913" (UID: "14a7808a-d2e1-4d97-8f09-18e65c93d913"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:29:52 crc kubenswrapper[4985]: I0127 09:29:52.373166 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14a7808a-d2e1-4d97-8f09-18e65c93d913-kube-api-access-vz678" (OuterVolumeSpecName: "kube-api-access-vz678") pod "14a7808a-d2e1-4d97-8f09-18e65c93d913" (UID: "14a7808a-d2e1-4d97-8f09-18e65c93d913"). InnerVolumeSpecName "kube-api-access-vz678". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:29:52 crc kubenswrapper[4985]: I0127 09:29:52.419791 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14a7808a-d2e1-4d97-8f09-18e65c93d913-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "14a7808a-d2e1-4d97-8f09-18e65c93d913" (UID: "14a7808a-d2e1-4d97-8f09-18e65c93d913"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:29:52 crc kubenswrapper[4985]: I0127 09:29:52.460627 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14a7808a-d2e1-4d97-8f09-18e65c93d913-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 09:29:52 crc kubenswrapper[4985]: I0127 09:29:52.460658 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vz678\" (UniqueName: \"kubernetes.io/projected/14a7808a-d2e1-4d97-8f09-18e65c93d913-kube-api-access-vz678\") on node \"crc\" DevicePath \"\"" Jan 27 09:29:52 crc kubenswrapper[4985]: I0127 09:29:52.460670 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14a7808a-d2e1-4d97-8f09-18e65c93d913-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 09:29:52 crc kubenswrapper[4985]: I0127 09:29:52.497536 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-64mns" event={"ID":"14a7808a-d2e1-4d97-8f09-18e65c93d913","Type":"ContainerDied","Data":"58210027dd4afe8e3d723c8af16c50288b6d01ffd792dfe8e5ecdfa08a7de4d4"} Jan 27 09:29:52 crc kubenswrapper[4985]: I0127 09:29:52.497593 4985 scope.go:117] "RemoveContainer" containerID="01092ab187f42cc0763da24317abb1bcd407922d1f7ff52124ca457d46f3e1b1" Jan 27 09:29:52 crc kubenswrapper[4985]: I0127 09:29:52.497595 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-64mns" Jan 27 09:29:52 crc kubenswrapper[4985]: I0127 09:29:52.535879 4985 scope.go:117] "RemoveContainer" containerID="04ab9928180d1ad16ff4828d20985f83643d76943c87a3b8a49b662fdf03b5db" Jan 27 09:29:52 crc kubenswrapper[4985]: I0127 09:29:52.542586 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-64mns"] Jan 27 09:29:52 crc kubenswrapper[4985]: I0127 09:29:52.557233 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-64mns"] Jan 27 09:29:52 crc kubenswrapper[4985]: I0127 09:29:52.576588 4985 scope.go:117] "RemoveContainer" containerID="d5977f4dc77cf78f51536568b77f229315f51ae23ef6c6f3e9acd3e2d3c64df6" Jan 27 09:29:54 crc kubenswrapper[4985]: I0127 09:29:54.471667 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14a7808a-d2e1-4d97-8f09-18e65c93d913" path="/var/lib/kubelet/pods/14a7808a-d2e1-4d97-8f09-18e65c93d913/volumes" Jan 27 09:30:00 crc kubenswrapper[4985]: I0127 09:30:00.156119 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491770-mt56b"] Jan 27 09:30:00 crc kubenswrapper[4985]: E0127 09:30:00.157421 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14a7808a-d2e1-4d97-8f09-18e65c93d913" containerName="extract-utilities" Jan 27 09:30:00 crc kubenswrapper[4985]: I0127 09:30:00.157444 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="14a7808a-d2e1-4d97-8f09-18e65c93d913" containerName="extract-utilities" Jan 27 09:30:00 crc kubenswrapper[4985]: E0127 09:30:00.157473 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14a7808a-d2e1-4d97-8f09-18e65c93d913" containerName="extract-content" Jan 27 09:30:00 crc kubenswrapper[4985]: I0127 09:30:00.157484 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="14a7808a-d2e1-4d97-8f09-18e65c93d913" 
containerName="extract-content" Jan 27 09:30:00 crc kubenswrapper[4985]: E0127 09:30:00.157560 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14a7808a-d2e1-4d97-8f09-18e65c93d913" containerName="registry-server" Jan 27 09:30:00 crc kubenswrapper[4985]: I0127 09:30:00.157574 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="14a7808a-d2e1-4d97-8f09-18e65c93d913" containerName="registry-server" Jan 27 09:30:00 crc kubenswrapper[4985]: I0127 09:30:00.157897 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="14a7808a-d2e1-4d97-8f09-18e65c93d913" containerName="registry-server" Jan 27 09:30:00 crc kubenswrapper[4985]: I0127 09:30:00.158935 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491770-mt56b" Jan 27 09:30:00 crc kubenswrapper[4985]: I0127 09:30:00.161617 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 09:30:00 crc kubenswrapper[4985]: I0127 09:30:00.162345 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 09:30:00 crc kubenswrapper[4985]: I0127 09:30:00.169557 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491770-mt56b"] Jan 27 09:30:00 crc kubenswrapper[4985]: I0127 09:30:00.320084 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f7b1a6f4-cd83-4799-90ed-9ca60f8499ba-secret-volume\") pod \"collect-profiles-29491770-mt56b\" (UID: \"f7b1a6f4-cd83-4799-90ed-9ca60f8499ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491770-mt56b" Jan 27 09:30:00 crc kubenswrapper[4985]: I0127 09:30:00.320145 4985 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pb9x\" (UniqueName: \"kubernetes.io/projected/f7b1a6f4-cd83-4799-90ed-9ca60f8499ba-kube-api-access-6pb9x\") pod \"collect-profiles-29491770-mt56b\" (UID: \"f7b1a6f4-cd83-4799-90ed-9ca60f8499ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491770-mt56b" Jan 27 09:30:00 crc kubenswrapper[4985]: I0127 09:30:00.320310 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f7b1a6f4-cd83-4799-90ed-9ca60f8499ba-config-volume\") pod \"collect-profiles-29491770-mt56b\" (UID: \"f7b1a6f4-cd83-4799-90ed-9ca60f8499ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491770-mt56b" Jan 27 09:30:00 crc kubenswrapper[4985]: I0127 09:30:00.422051 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f7b1a6f4-cd83-4799-90ed-9ca60f8499ba-secret-volume\") pod \"collect-profiles-29491770-mt56b\" (UID: \"f7b1a6f4-cd83-4799-90ed-9ca60f8499ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491770-mt56b" Jan 27 09:30:00 crc kubenswrapper[4985]: I0127 09:30:00.422111 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pb9x\" (UniqueName: \"kubernetes.io/projected/f7b1a6f4-cd83-4799-90ed-9ca60f8499ba-kube-api-access-6pb9x\") pod \"collect-profiles-29491770-mt56b\" (UID: \"f7b1a6f4-cd83-4799-90ed-9ca60f8499ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491770-mt56b" Jan 27 09:30:00 crc kubenswrapper[4985]: I0127 09:30:00.422151 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f7b1a6f4-cd83-4799-90ed-9ca60f8499ba-config-volume\") pod \"collect-profiles-29491770-mt56b\" (UID: \"f7b1a6f4-cd83-4799-90ed-9ca60f8499ba\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29491770-mt56b" Jan 27 09:30:00 crc kubenswrapper[4985]: I0127 09:30:00.423255 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f7b1a6f4-cd83-4799-90ed-9ca60f8499ba-config-volume\") pod \"collect-profiles-29491770-mt56b\" (UID: \"f7b1a6f4-cd83-4799-90ed-9ca60f8499ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491770-mt56b" Jan 27 09:30:00 crc kubenswrapper[4985]: I0127 09:30:00.428247 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f7b1a6f4-cd83-4799-90ed-9ca60f8499ba-secret-volume\") pod \"collect-profiles-29491770-mt56b\" (UID: \"f7b1a6f4-cd83-4799-90ed-9ca60f8499ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491770-mt56b" Jan 27 09:30:00 crc kubenswrapper[4985]: I0127 09:30:00.439934 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pb9x\" (UniqueName: \"kubernetes.io/projected/f7b1a6f4-cd83-4799-90ed-9ca60f8499ba-kube-api-access-6pb9x\") pod \"collect-profiles-29491770-mt56b\" (UID: \"f7b1a6f4-cd83-4799-90ed-9ca60f8499ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491770-mt56b" Jan 27 09:30:00 crc kubenswrapper[4985]: I0127 09:30:00.494480 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491770-mt56b" Jan 27 09:30:00 crc kubenswrapper[4985]: I0127 09:30:00.957415 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491770-mt56b"] Jan 27 09:30:01 crc kubenswrapper[4985]: I0127 09:30:01.598786 4985 generic.go:334] "Generic (PLEG): container finished" podID="f7b1a6f4-cd83-4799-90ed-9ca60f8499ba" containerID="40eb5808b51dbce31481abf5f6ce35f723b4dc09834dad2ab7d78204afa5527f" exitCode=0 Jan 27 09:30:01 crc kubenswrapper[4985]: I0127 09:30:01.598845 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491770-mt56b" event={"ID":"f7b1a6f4-cd83-4799-90ed-9ca60f8499ba","Type":"ContainerDied","Data":"40eb5808b51dbce31481abf5f6ce35f723b4dc09834dad2ab7d78204afa5527f"} Jan 27 09:30:01 crc kubenswrapper[4985]: I0127 09:30:01.600322 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491770-mt56b" event={"ID":"f7b1a6f4-cd83-4799-90ed-9ca60f8499ba","Type":"ContainerStarted","Data":"41f035af47c8ffbb7ecc3647e5d3ead94c8d0363bb412a97db7ced77ffc33437"} Jan 27 09:30:02 crc kubenswrapper[4985]: I0127 09:30:02.951787 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491770-mt56b" Jan 27 09:30:03 crc kubenswrapper[4985]: I0127 09:30:03.075986 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f7b1a6f4-cd83-4799-90ed-9ca60f8499ba-secret-volume\") pod \"f7b1a6f4-cd83-4799-90ed-9ca60f8499ba\" (UID: \"f7b1a6f4-cd83-4799-90ed-9ca60f8499ba\") " Jan 27 09:30:03 crc kubenswrapper[4985]: I0127 09:30:03.076199 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f7b1a6f4-cd83-4799-90ed-9ca60f8499ba-config-volume\") pod \"f7b1a6f4-cd83-4799-90ed-9ca60f8499ba\" (UID: \"f7b1a6f4-cd83-4799-90ed-9ca60f8499ba\") " Jan 27 09:30:03 crc kubenswrapper[4985]: I0127 09:30:03.076239 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6pb9x\" (UniqueName: \"kubernetes.io/projected/f7b1a6f4-cd83-4799-90ed-9ca60f8499ba-kube-api-access-6pb9x\") pod \"f7b1a6f4-cd83-4799-90ed-9ca60f8499ba\" (UID: \"f7b1a6f4-cd83-4799-90ed-9ca60f8499ba\") " Jan 27 09:30:03 crc kubenswrapper[4985]: I0127 09:30:03.077572 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7b1a6f4-cd83-4799-90ed-9ca60f8499ba-config-volume" (OuterVolumeSpecName: "config-volume") pod "f7b1a6f4-cd83-4799-90ed-9ca60f8499ba" (UID: "f7b1a6f4-cd83-4799-90ed-9ca60f8499ba"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:30:03 crc kubenswrapper[4985]: I0127 09:30:03.081846 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7b1a6f4-cd83-4799-90ed-9ca60f8499ba-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f7b1a6f4-cd83-4799-90ed-9ca60f8499ba" (UID: "f7b1a6f4-cd83-4799-90ed-9ca60f8499ba"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:30:03 crc kubenswrapper[4985]: I0127 09:30:03.082965 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7b1a6f4-cd83-4799-90ed-9ca60f8499ba-kube-api-access-6pb9x" (OuterVolumeSpecName: "kube-api-access-6pb9x") pod "f7b1a6f4-cd83-4799-90ed-9ca60f8499ba" (UID: "f7b1a6f4-cd83-4799-90ed-9ca60f8499ba"). InnerVolumeSpecName "kube-api-access-6pb9x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:30:03 crc kubenswrapper[4985]: I0127 09:30:03.178150 4985 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f7b1a6f4-cd83-4799-90ed-9ca60f8499ba-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 09:30:03 crc kubenswrapper[4985]: I0127 09:30:03.178424 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6pb9x\" (UniqueName: \"kubernetes.io/projected/f7b1a6f4-cd83-4799-90ed-9ca60f8499ba-kube-api-access-6pb9x\") on node \"crc\" DevicePath \"\"" Jan 27 09:30:03 crc kubenswrapper[4985]: I0127 09:30:03.178494 4985 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f7b1a6f4-cd83-4799-90ed-9ca60f8499ba-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 09:30:03 crc kubenswrapper[4985]: I0127 09:30:03.620391 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491770-mt56b" event={"ID":"f7b1a6f4-cd83-4799-90ed-9ca60f8499ba","Type":"ContainerDied","Data":"41f035af47c8ffbb7ecc3647e5d3ead94c8d0363bb412a97db7ced77ffc33437"} Jan 27 09:30:03 crc kubenswrapper[4985]: I0127 09:30:03.620441 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41f035af47c8ffbb7ecc3647e5d3ead94c8d0363bb412a97db7ced77ffc33437" Jan 27 09:30:03 crc kubenswrapper[4985]: I0127 09:30:03.620448 4985 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491770-mt56b" Jan 27 09:30:04 crc kubenswrapper[4985]: I0127 09:30:04.042987 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491725-mxclz"] Jan 27 09:30:04 crc kubenswrapper[4985]: I0127 09:30:04.066197 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491725-mxclz"] Jan 27 09:30:04 crc kubenswrapper[4985]: I0127 09:30:04.465579 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0036b0c-985c-4832-a8a8-0a18b5cc3a52" path="/var/lib/kubelet/pods/a0036b0c-985c-4832-a8a8-0a18b5cc3a52/volumes" Jan 27 09:30:11 crc kubenswrapper[4985]: I0127 09:30:11.827843 4985 patch_prober.go:28] interesting pod/machine-config-daemon-lp9n5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 09:30:11 crc kubenswrapper[4985]: I0127 09:30:11.828449 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 09:30:11 crc kubenswrapper[4985]: I0127 09:30:11.828504 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" Jan 27 09:30:11 crc kubenswrapper[4985]: I0127 09:30:11.829419 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3ddb16a4ed947c28dbf96965e7947dcb2881d19af04260b6359e84277a6f1154"} 
pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 09:30:11 crc kubenswrapper[4985]: I0127 09:30:11.829498 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" containerName="machine-config-daemon" containerID="cri-o://3ddb16a4ed947c28dbf96965e7947dcb2881d19af04260b6359e84277a6f1154" gracePeriod=600 Jan 27 09:30:12 crc kubenswrapper[4985]: I0127 09:30:12.742248 4985 generic.go:334] "Generic (PLEG): container finished" podID="c066dd2f-48d4-4f4f-935d-0e772678e610" containerID="3ddb16a4ed947c28dbf96965e7947dcb2881d19af04260b6359e84277a6f1154" exitCode=0 Jan 27 09:30:12 crc kubenswrapper[4985]: I0127 09:30:12.742305 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" event={"ID":"c066dd2f-48d4-4f4f-935d-0e772678e610","Type":"ContainerDied","Data":"3ddb16a4ed947c28dbf96965e7947dcb2881d19af04260b6359e84277a6f1154"} Jan 27 09:30:12 crc kubenswrapper[4985]: I0127 09:30:12.742970 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" event={"ID":"c066dd2f-48d4-4f4f-935d-0e772678e610","Type":"ContainerStarted","Data":"4c19ffbb95917b101eb869fb60d90d110bbc1e093534505aa6a36ea876935d33"} Jan 27 09:30:12 crc kubenswrapper[4985]: I0127 09:30:12.743001 4985 scope.go:117] "RemoveContainer" containerID="7ba28a0f021e3d5eea7e45a3d43bec2dd0d775a22e805e1915a3667c64bf2c54" Jan 27 09:30:45 crc kubenswrapper[4985]: I0127 09:30:45.039305 4985 scope.go:117] "RemoveContainer" containerID="354b410a86f88c8585bd149eab4ae0c378499ebcdc0845d0f2a588ba9beaca34" Jan 27 09:32:41 crc kubenswrapper[4985]: I0127 09:32:41.828125 4985 patch_prober.go:28] interesting pod/machine-config-daemon-lp9n5 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 09:32:41 crc kubenswrapper[4985]: I0127 09:32:41.828939 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 09:33:11 crc kubenswrapper[4985]: I0127 09:33:11.830934 4985 patch_prober.go:28] interesting pod/machine-config-daemon-lp9n5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 09:33:11 crc kubenswrapper[4985]: I0127 09:33:11.831680 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 09:33:41 crc kubenswrapper[4985]: I0127 09:33:41.828213 4985 patch_prober.go:28] interesting pod/machine-config-daemon-lp9n5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 09:33:41 crc kubenswrapper[4985]: I0127 09:33:41.828732 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 09:33:41 crc kubenswrapper[4985]: I0127 09:33:41.828780 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" Jan 27 09:33:41 crc kubenswrapper[4985]: I0127 09:33:41.829549 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4c19ffbb95917b101eb869fb60d90d110bbc1e093534505aa6a36ea876935d33"} pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 09:33:41 crc kubenswrapper[4985]: I0127 09:33:41.829601 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" containerName="machine-config-daemon" containerID="cri-o://4c19ffbb95917b101eb869fb60d90d110bbc1e093534505aa6a36ea876935d33" gracePeriod=600 Jan 27 09:33:41 crc kubenswrapper[4985]: E0127 09:33:41.959102 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp9n5_openshift-machine-config-operator(c066dd2f-48d4-4f4f-935d-0e772678e610)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" Jan 27 09:33:42 crc kubenswrapper[4985]: I0127 09:33:42.919387 4985 generic.go:334] "Generic (PLEG): container finished" podID="c066dd2f-48d4-4f4f-935d-0e772678e610" containerID="4c19ffbb95917b101eb869fb60d90d110bbc1e093534505aa6a36ea876935d33" exitCode=0 Jan 27 09:33:42 crc kubenswrapper[4985]: I0127 09:33:42.919472 4985 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" event={"ID":"c066dd2f-48d4-4f4f-935d-0e772678e610","Type":"ContainerDied","Data":"4c19ffbb95917b101eb869fb60d90d110bbc1e093534505aa6a36ea876935d33"} Jan 27 09:33:42 crc kubenswrapper[4985]: I0127 09:33:42.919558 4985 scope.go:117] "RemoveContainer" containerID="3ddb16a4ed947c28dbf96965e7947dcb2881d19af04260b6359e84277a6f1154" Jan 27 09:33:42 crc kubenswrapper[4985]: I0127 09:33:42.920794 4985 scope.go:117] "RemoveContainer" containerID="4c19ffbb95917b101eb869fb60d90d110bbc1e093534505aa6a36ea876935d33" Jan 27 09:33:42 crc kubenswrapper[4985]: E0127 09:33:42.924605 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp9n5_openshift-machine-config-operator(c066dd2f-48d4-4f4f-935d-0e772678e610)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" Jan 27 09:33:50 crc kubenswrapper[4985]: I0127 09:33:50.997847 4985 generic.go:334] "Generic (PLEG): container finished" podID="b45b73b0-334f-456f-9a7e-be4337f5a0d1" containerID="37864be275227dc244ff6c8a10160876d353eae9185b13863c241498d7795602" exitCode=0 Jan 27 09:33:51 crc kubenswrapper[4985]: I0127 09:33:50.997938 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-lx8b7" event={"ID":"b45b73b0-334f-456f-9a7e-be4337f5a0d1","Type":"ContainerDied","Data":"37864be275227dc244ff6c8a10160876d353eae9185b13863c241498d7795602"} Jan 27 09:33:52 crc kubenswrapper[4985]: I0127 09:33:52.581301 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-lx8b7" Jan 27 09:33:52 crc kubenswrapper[4985]: I0127 09:33:52.642113 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b45b73b0-334f-456f-9a7e-be4337f5a0d1-libvirt-combined-ca-bundle\") pod \"b45b73b0-334f-456f-9a7e-be4337f5a0d1\" (UID: \"b45b73b0-334f-456f-9a7e-be4337f5a0d1\") " Jan 27 09:33:52 crc kubenswrapper[4985]: I0127 09:33:52.642305 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b45b73b0-334f-456f-9a7e-be4337f5a0d1-inventory\") pod \"b45b73b0-334f-456f-9a7e-be4337f5a0d1\" (UID: \"b45b73b0-334f-456f-9a7e-be4337f5a0d1\") " Jan 27 09:33:52 crc kubenswrapper[4985]: I0127 09:33:52.642408 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xdm7\" (UniqueName: \"kubernetes.io/projected/b45b73b0-334f-456f-9a7e-be4337f5a0d1-kube-api-access-8xdm7\") pod \"b45b73b0-334f-456f-9a7e-be4337f5a0d1\" (UID: \"b45b73b0-334f-456f-9a7e-be4337f5a0d1\") " Jan 27 09:33:52 crc kubenswrapper[4985]: I0127 09:33:52.642432 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/b45b73b0-334f-456f-9a7e-be4337f5a0d1-libvirt-secret-0\") pod \"b45b73b0-334f-456f-9a7e-be4337f5a0d1\" (UID: \"b45b73b0-334f-456f-9a7e-be4337f5a0d1\") " Jan 27 09:33:52 crc kubenswrapper[4985]: I0127 09:33:52.642478 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b45b73b0-334f-456f-9a7e-be4337f5a0d1-ssh-key-openstack-edpm-ipam\") pod \"b45b73b0-334f-456f-9a7e-be4337f5a0d1\" (UID: \"b45b73b0-334f-456f-9a7e-be4337f5a0d1\") " Jan 27 09:33:52 crc kubenswrapper[4985]: I0127 09:33:52.664957 4985 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b45b73b0-334f-456f-9a7e-be4337f5a0d1-kube-api-access-8xdm7" (OuterVolumeSpecName: "kube-api-access-8xdm7") pod "b45b73b0-334f-456f-9a7e-be4337f5a0d1" (UID: "b45b73b0-334f-456f-9a7e-be4337f5a0d1"). InnerVolumeSpecName "kube-api-access-8xdm7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:33:52 crc kubenswrapper[4985]: I0127 09:33:52.664992 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b45b73b0-334f-456f-9a7e-be4337f5a0d1-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "b45b73b0-334f-456f-9a7e-be4337f5a0d1" (UID: "b45b73b0-334f-456f-9a7e-be4337f5a0d1"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:33:52 crc kubenswrapper[4985]: I0127 09:33:52.674763 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b45b73b0-334f-456f-9a7e-be4337f5a0d1-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "b45b73b0-334f-456f-9a7e-be4337f5a0d1" (UID: "b45b73b0-334f-456f-9a7e-be4337f5a0d1"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:33:52 crc kubenswrapper[4985]: I0127 09:33:52.686060 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b45b73b0-334f-456f-9a7e-be4337f5a0d1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b45b73b0-334f-456f-9a7e-be4337f5a0d1" (UID: "b45b73b0-334f-456f-9a7e-be4337f5a0d1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:33:52 crc kubenswrapper[4985]: I0127 09:33:52.701436 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b45b73b0-334f-456f-9a7e-be4337f5a0d1-inventory" (OuterVolumeSpecName: "inventory") pod "b45b73b0-334f-456f-9a7e-be4337f5a0d1" (UID: "b45b73b0-334f-456f-9a7e-be4337f5a0d1"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:33:52 crc kubenswrapper[4985]: I0127 09:33:52.744630 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8xdm7\" (UniqueName: \"kubernetes.io/projected/b45b73b0-334f-456f-9a7e-be4337f5a0d1-kube-api-access-8xdm7\") on node \"crc\" DevicePath \"\"" Jan 27 09:33:52 crc kubenswrapper[4985]: I0127 09:33:52.744668 4985 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/b45b73b0-334f-456f-9a7e-be4337f5a0d1-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Jan 27 09:33:52 crc kubenswrapper[4985]: I0127 09:33:52.744683 4985 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b45b73b0-334f-456f-9a7e-be4337f5a0d1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 09:33:52 crc kubenswrapper[4985]: I0127 09:33:52.744696 4985 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b45b73b0-334f-456f-9a7e-be4337f5a0d1-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:33:52 crc kubenswrapper[4985]: I0127 09:33:52.744710 4985 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b45b73b0-334f-456f-9a7e-be4337f5a0d1-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 09:33:53 crc kubenswrapper[4985]: I0127 09:33:53.056585 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-lx8b7" event={"ID":"b45b73b0-334f-456f-9a7e-be4337f5a0d1","Type":"ContainerDied","Data":"abee74acca3dc2887201e9d143e3224eb8c1b835255667cbcffc23bdfb41d1f9"} Jan 27 09:33:53 crc kubenswrapper[4985]: I0127 09:33:53.056647 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="abee74acca3dc2887201e9d143e3224eb8c1b835255667cbcffc23bdfb41d1f9" Jan 27 09:33:53 crc kubenswrapper[4985]: I0127 09:33:53.056737 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-lx8b7" Jan 27 09:33:53 crc kubenswrapper[4985]: I0127 09:33:53.154598 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-svz7g"] Jan 27 09:33:53 crc kubenswrapper[4985]: E0127 09:33:53.155176 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b45b73b0-334f-456f-9a7e-be4337f5a0d1" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 27 09:33:53 crc kubenswrapper[4985]: I0127 09:33:53.155201 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="b45b73b0-334f-456f-9a7e-be4337f5a0d1" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 27 09:33:53 crc kubenswrapper[4985]: E0127 09:33:53.155248 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7b1a6f4-cd83-4799-90ed-9ca60f8499ba" containerName="collect-profiles" Jan 27 09:33:53 crc kubenswrapper[4985]: I0127 09:33:53.155255 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7b1a6f4-cd83-4799-90ed-9ca60f8499ba" containerName="collect-profiles" Jan 27 09:33:53 crc kubenswrapper[4985]: I0127 09:33:53.155455 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7b1a6f4-cd83-4799-90ed-9ca60f8499ba" containerName="collect-profiles" Jan 27 09:33:53 crc kubenswrapper[4985]: I0127 09:33:53.155477 4985 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="b45b73b0-334f-456f-9a7e-be4337f5a0d1" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 27 09:33:53 crc kubenswrapper[4985]: I0127 09:33:53.156499 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-svz7g" Jan 27 09:33:53 crc kubenswrapper[4985]: I0127 09:33:53.161746 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 09:33:53 crc kubenswrapper[4985]: I0127 09:33:53.161746 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Jan 27 09:33:53 crc kubenswrapper[4985]: I0127 09:33:53.162095 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Jan 27 09:33:53 crc kubenswrapper[4985]: I0127 09:33:53.162276 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-s87fp" Jan 27 09:33:53 crc kubenswrapper[4985]: I0127 09:33:53.162467 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 09:33:53 crc kubenswrapper[4985]: I0127 09:33:53.162606 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 09:33:53 crc kubenswrapper[4985]: I0127 09:33:53.171266 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Jan 27 09:33:53 crc kubenswrapper[4985]: I0127 09:33:53.171650 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-svz7g"] Jan 27 09:33:53 crc kubenswrapper[4985]: I0127 09:33:53.258538 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: 
\"kubernetes.io/configmap/882132ec-1950-4476-bbad-f8f2acf0e117-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-svz7g\" (UID: \"882132ec-1950-4476-bbad-f8f2acf0e117\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-svz7g" Jan 27 09:33:53 crc kubenswrapper[4985]: I0127 09:33:53.258874 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/882132ec-1950-4476-bbad-f8f2acf0e117-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-svz7g\" (UID: \"882132ec-1950-4476-bbad-f8f2acf0e117\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-svz7g" Jan 27 09:33:53 crc kubenswrapper[4985]: I0127 09:33:53.259079 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/882132ec-1950-4476-bbad-f8f2acf0e117-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-svz7g\" (UID: \"882132ec-1950-4476-bbad-f8f2acf0e117\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-svz7g" Jan 27 09:33:53 crc kubenswrapper[4985]: I0127 09:33:53.259150 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/882132ec-1950-4476-bbad-f8f2acf0e117-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-svz7g\" (UID: \"882132ec-1950-4476-bbad-f8f2acf0e117\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-svz7g" Jan 27 09:33:53 crc kubenswrapper[4985]: I0127 09:33:53.259227 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/882132ec-1950-4476-bbad-f8f2acf0e117-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-svz7g\" 
(UID: \"882132ec-1950-4476-bbad-f8f2acf0e117\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-svz7g" Jan 27 09:33:53 crc kubenswrapper[4985]: I0127 09:33:53.259271 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/882132ec-1950-4476-bbad-f8f2acf0e117-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-svz7g\" (UID: \"882132ec-1950-4476-bbad-f8f2acf0e117\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-svz7g" Jan 27 09:33:53 crc kubenswrapper[4985]: I0127 09:33:53.259454 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/882132ec-1950-4476-bbad-f8f2acf0e117-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-svz7g\" (UID: \"882132ec-1950-4476-bbad-f8f2acf0e117\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-svz7g" Jan 27 09:33:53 crc kubenswrapper[4985]: I0127 09:33:53.259499 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9jqg\" (UniqueName: \"kubernetes.io/projected/882132ec-1950-4476-bbad-f8f2acf0e117-kube-api-access-k9jqg\") pod \"nova-edpm-deployment-openstack-edpm-ipam-svz7g\" (UID: \"882132ec-1950-4476-bbad-f8f2acf0e117\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-svz7g" Jan 27 09:33:53 crc kubenswrapper[4985]: I0127 09:33:53.259562 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/882132ec-1950-4476-bbad-f8f2acf0e117-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-svz7g\" (UID: \"882132ec-1950-4476-bbad-f8f2acf0e117\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-svz7g" Jan 27 09:33:53 crc kubenswrapper[4985]: I0127 09:33:53.362191 4985 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/882132ec-1950-4476-bbad-f8f2acf0e117-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-svz7g\" (UID: \"882132ec-1950-4476-bbad-f8f2acf0e117\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-svz7g" Jan 27 09:33:53 crc kubenswrapper[4985]: I0127 09:33:53.362266 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9jqg\" (UniqueName: \"kubernetes.io/projected/882132ec-1950-4476-bbad-f8f2acf0e117-kube-api-access-k9jqg\") pod \"nova-edpm-deployment-openstack-edpm-ipam-svz7g\" (UID: \"882132ec-1950-4476-bbad-f8f2acf0e117\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-svz7g" Jan 27 09:33:53 crc kubenswrapper[4985]: I0127 09:33:53.362306 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/882132ec-1950-4476-bbad-f8f2acf0e117-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-svz7g\" (UID: \"882132ec-1950-4476-bbad-f8f2acf0e117\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-svz7g" Jan 27 09:33:53 crc kubenswrapper[4985]: I0127 09:33:53.362411 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/882132ec-1950-4476-bbad-f8f2acf0e117-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-svz7g\" (UID: \"882132ec-1950-4476-bbad-f8f2acf0e117\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-svz7g" Jan 27 09:33:53 crc kubenswrapper[4985]: I0127 09:33:53.363669 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/882132ec-1950-4476-bbad-f8f2acf0e117-ssh-key-openstack-edpm-ipam\") pod 
\"nova-edpm-deployment-openstack-edpm-ipam-svz7g\" (UID: \"882132ec-1950-4476-bbad-f8f2acf0e117\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-svz7g" Jan 27 09:33:53 crc kubenswrapper[4985]: I0127 09:33:53.363763 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/882132ec-1950-4476-bbad-f8f2acf0e117-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-svz7g\" (UID: \"882132ec-1950-4476-bbad-f8f2acf0e117\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-svz7g" Jan 27 09:33:53 crc kubenswrapper[4985]: I0127 09:33:53.364043 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/882132ec-1950-4476-bbad-f8f2acf0e117-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-svz7g\" (UID: \"882132ec-1950-4476-bbad-f8f2acf0e117\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-svz7g" Jan 27 09:33:53 crc kubenswrapper[4985]: I0127 09:33:53.364120 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/882132ec-1950-4476-bbad-f8f2acf0e117-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-svz7g\" (UID: \"882132ec-1950-4476-bbad-f8f2acf0e117\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-svz7g" Jan 27 09:33:53 crc kubenswrapper[4985]: I0127 09:33:53.364211 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/882132ec-1950-4476-bbad-f8f2acf0e117-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-svz7g\" (UID: \"882132ec-1950-4476-bbad-f8f2acf0e117\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-svz7g" Jan 27 09:33:53 crc kubenswrapper[4985]: I0127 09:33:53.364287 4985 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/882132ec-1950-4476-bbad-f8f2acf0e117-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-svz7g\" (UID: \"882132ec-1950-4476-bbad-f8f2acf0e117\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-svz7g" Jan 27 09:33:53 crc kubenswrapper[4985]: I0127 09:33:53.367420 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/882132ec-1950-4476-bbad-f8f2acf0e117-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-svz7g\" (UID: \"882132ec-1950-4476-bbad-f8f2acf0e117\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-svz7g" Jan 27 09:33:53 crc kubenswrapper[4985]: I0127 09:33:53.368344 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/882132ec-1950-4476-bbad-f8f2acf0e117-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-svz7g\" (UID: \"882132ec-1950-4476-bbad-f8f2acf0e117\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-svz7g" Jan 27 09:33:53 crc kubenswrapper[4985]: I0127 09:33:53.368825 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/882132ec-1950-4476-bbad-f8f2acf0e117-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-svz7g\" (UID: \"882132ec-1950-4476-bbad-f8f2acf0e117\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-svz7g" Jan 27 09:33:53 crc kubenswrapper[4985]: I0127 09:33:53.370584 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/882132ec-1950-4476-bbad-f8f2acf0e117-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-svz7g\" (UID: 
\"882132ec-1950-4476-bbad-f8f2acf0e117\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-svz7g" Jan 27 09:33:53 crc kubenswrapper[4985]: I0127 09:33:53.371405 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/882132ec-1950-4476-bbad-f8f2acf0e117-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-svz7g\" (UID: \"882132ec-1950-4476-bbad-f8f2acf0e117\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-svz7g" Jan 27 09:33:53 crc kubenswrapper[4985]: I0127 09:33:53.371834 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/882132ec-1950-4476-bbad-f8f2acf0e117-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-svz7g\" (UID: \"882132ec-1950-4476-bbad-f8f2acf0e117\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-svz7g" Jan 27 09:33:53 crc kubenswrapper[4985]: I0127 09:33:53.372735 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/882132ec-1950-4476-bbad-f8f2acf0e117-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-svz7g\" (UID: \"882132ec-1950-4476-bbad-f8f2acf0e117\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-svz7g" Jan 27 09:33:53 crc kubenswrapper[4985]: I0127 09:33:53.384540 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9jqg\" (UniqueName: \"kubernetes.io/projected/882132ec-1950-4476-bbad-f8f2acf0e117-kube-api-access-k9jqg\") pod \"nova-edpm-deployment-openstack-edpm-ipam-svz7g\" (UID: \"882132ec-1950-4476-bbad-f8f2acf0e117\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-svz7g" Jan 27 09:33:53 crc kubenswrapper[4985]: I0127 09:33:53.494056 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-svz7g" Jan 27 09:33:54 crc kubenswrapper[4985]: I0127 09:33:54.184896 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-svz7g"] Jan 27 09:33:54 crc kubenswrapper[4985]: I0127 09:33:54.196580 4985 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 09:33:55 crc kubenswrapper[4985]: I0127 09:33:55.081255 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-svz7g" event={"ID":"882132ec-1950-4476-bbad-f8f2acf0e117","Type":"ContainerStarted","Data":"1b4f5d7e0f1d6dbb84b0793458e6af975da01e800396711fac5740f13eff2d8d"} Jan 27 09:33:55 crc kubenswrapper[4985]: I0127 09:33:55.081867 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-svz7g" event={"ID":"882132ec-1950-4476-bbad-f8f2acf0e117","Type":"ContainerStarted","Data":"8ec1b1fa65744016208c14f4ad23e430cae5664fa015a09210e9b4ef5bb91561"} Jan 27 09:33:55 crc kubenswrapper[4985]: I0127 09:33:55.109344 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-svz7g" podStartSLOduration=1.565757684 podStartE2EDuration="2.109317688s" podCreationTimestamp="2026-01-27 09:33:53 +0000 UTC" firstStartedPulling="2026-01-27 09:33:54.196267343 +0000 UTC m=+2418.487362184" lastFinishedPulling="2026-01-27 09:33:54.739827347 +0000 UTC m=+2419.030922188" observedRunningTime="2026-01-27 09:33:55.102215133 +0000 UTC m=+2419.393310004" watchObservedRunningTime="2026-01-27 09:33:55.109317688 +0000 UTC m=+2419.400412529" Jan 27 09:33:56 crc kubenswrapper[4985]: I0127 09:33:56.458337 4985 scope.go:117] "RemoveContainer" containerID="4c19ffbb95917b101eb869fb60d90d110bbc1e093534505aa6a36ea876935d33" Jan 27 09:33:56 crc kubenswrapper[4985]: E0127 09:33:56.459210 4985 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp9n5_openshift-machine-config-operator(c066dd2f-48d4-4f4f-935d-0e772678e610)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" Jan 27 09:34:08 crc kubenswrapper[4985]: I0127 09:34:08.452689 4985 scope.go:117] "RemoveContainer" containerID="4c19ffbb95917b101eb869fb60d90d110bbc1e093534505aa6a36ea876935d33" Jan 27 09:34:08 crc kubenswrapper[4985]: E0127 09:34:08.453805 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp9n5_openshift-machine-config-operator(c066dd2f-48d4-4f4f-935d-0e772678e610)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" Jan 27 09:34:20 crc kubenswrapper[4985]: I0127 09:34:20.453747 4985 scope.go:117] "RemoveContainer" containerID="4c19ffbb95917b101eb869fb60d90d110bbc1e093534505aa6a36ea876935d33" Jan 27 09:34:20 crc kubenswrapper[4985]: E0127 09:34:20.454552 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp9n5_openshift-machine-config-operator(c066dd2f-48d4-4f4f-935d-0e772678e610)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" Jan 27 09:34:32 crc kubenswrapper[4985]: I0127 09:34:32.452663 4985 scope.go:117] "RemoveContainer" containerID="4c19ffbb95917b101eb869fb60d90d110bbc1e093534505aa6a36ea876935d33" Jan 27 09:34:32 crc kubenswrapper[4985]: E0127 
09:34:32.453283 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp9n5_openshift-machine-config-operator(c066dd2f-48d4-4f4f-935d-0e772678e610)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" Jan 27 09:34:45 crc kubenswrapper[4985]: I0127 09:34:45.452164 4985 scope.go:117] "RemoveContainer" containerID="4c19ffbb95917b101eb869fb60d90d110bbc1e093534505aa6a36ea876935d33" Jan 27 09:34:45 crc kubenswrapper[4985]: E0127 09:34:45.453118 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp9n5_openshift-machine-config-operator(c066dd2f-48d4-4f4f-935d-0e772678e610)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" Jan 27 09:34:56 crc kubenswrapper[4985]: I0127 09:34:56.459755 4985 scope.go:117] "RemoveContainer" containerID="4c19ffbb95917b101eb869fb60d90d110bbc1e093534505aa6a36ea876935d33" Jan 27 09:34:56 crc kubenswrapper[4985]: E0127 09:34:56.460579 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp9n5_openshift-machine-config-operator(c066dd2f-48d4-4f4f-935d-0e772678e610)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" Jan 27 09:35:09 crc kubenswrapper[4985]: I0127 09:35:09.452480 4985 scope.go:117] "RemoveContainer" containerID="4c19ffbb95917b101eb869fb60d90d110bbc1e093534505aa6a36ea876935d33" Jan 27 09:35:09 crc 
kubenswrapper[4985]: E0127 09:35:09.453186 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp9n5_openshift-machine-config-operator(c066dd2f-48d4-4f4f-935d-0e772678e610)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" Jan 27 09:35:22 crc kubenswrapper[4985]: I0127 09:35:22.452072 4985 scope.go:117] "RemoveContainer" containerID="4c19ffbb95917b101eb869fb60d90d110bbc1e093534505aa6a36ea876935d33" Jan 27 09:35:22 crc kubenswrapper[4985]: E0127 09:35:22.454055 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp9n5_openshift-machine-config-operator(c066dd2f-48d4-4f4f-935d-0e772678e610)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" Jan 27 09:35:34 crc kubenswrapper[4985]: I0127 09:35:34.451627 4985 scope.go:117] "RemoveContainer" containerID="4c19ffbb95917b101eb869fb60d90d110bbc1e093534505aa6a36ea876935d33" Jan 27 09:35:34 crc kubenswrapper[4985]: E0127 09:35:34.452658 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp9n5_openshift-machine-config-operator(c066dd2f-48d4-4f4f-935d-0e772678e610)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" Jan 27 09:35:49 crc kubenswrapper[4985]: I0127 09:35:49.452775 4985 scope.go:117] "RemoveContainer" containerID="4c19ffbb95917b101eb869fb60d90d110bbc1e093534505aa6a36ea876935d33" Jan 
27 09:35:49 crc kubenswrapper[4985]: E0127 09:35:49.453846 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp9n5_openshift-machine-config-operator(c066dd2f-48d4-4f4f-935d-0e772678e610)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" Jan 27 09:36:03 crc kubenswrapper[4985]: I0127 09:36:03.452918 4985 scope.go:117] "RemoveContainer" containerID="4c19ffbb95917b101eb869fb60d90d110bbc1e093534505aa6a36ea876935d33" Jan 27 09:36:03 crc kubenswrapper[4985]: E0127 09:36:03.453805 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp9n5_openshift-machine-config-operator(c066dd2f-48d4-4f4f-935d-0e772678e610)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" Jan 27 09:36:16 crc kubenswrapper[4985]: I0127 09:36:16.461271 4985 scope.go:117] "RemoveContainer" containerID="4c19ffbb95917b101eb869fb60d90d110bbc1e093534505aa6a36ea876935d33" Jan 27 09:36:16 crc kubenswrapper[4985]: E0127 09:36:16.462651 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp9n5_openshift-machine-config-operator(c066dd2f-48d4-4f4f-935d-0e772678e610)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" Jan 27 09:36:30 crc kubenswrapper[4985]: I0127 09:36:30.451509 4985 scope.go:117] "RemoveContainer" 
containerID="4c19ffbb95917b101eb869fb60d90d110bbc1e093534505aa6a36ea876935d33" Jan 27 09:36:30 crc kubenswrapper[4985]: E0127 09:36:30.453646 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp9n5_openshift-machine-config-operator(c066dd2f-48d4-4f4f-935d-0e772678e610)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" Jan 27 09:36:35 crc kubenswrapper[4985]: I0127 09:36:35.299604 4985 generic.go:334] "Generic (PLEG): container finished" podID="882132ec-1950-4476-bbad-f8f2acf0e117" containerID="1b4f5d7e0f1d6dbb84b0793458e6af975da01e800396711fac5740f13eff2d8d" exitCode=0 Jan 27 09:36:35 crc kubenswrapper[4985]: I0127 09:36:35.299678 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-svz7g" event={"ID":"882132ec-1950-4476-bbad-f8f2acf0e117","Type":"ContainerDied","Data":"1b4f5d7e0f1d6dbb84b0793458e6af975da01e800396711fac5740f13eff2d8d"} Jan 27 09:36:36 crc kubenswrapper[4985]: I0127 09:36:36.769020 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-svz7g" Jan 27 09:36:36 crc kubenswrapper[4985]: I0127 09:36:36.879286 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/882132ec-1950-4476-bbad-f8f2acf0e117-nova-cell1-compute-config-1\") pod \"882132ec-1950-4476-bbad-f8f2acf0e117\" (UID: \"882132ec-1950-4476-bbad-f8f2acf0e117\") " Jan 27 09:36:36 crc kubenswrapper[4985]: I0127 09:36:36.879719 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/882132ec-1950-4476-bbad-f8f2acf0e117-nova-migration-ssh-key-0\") pod \"882132ec-1950-4476-bbad-f8f2acf0e117\" (UID: \"882132ec-1950-4476-bbad-f8f2acf0e117\") " Jan 27 09:36:36 crc kubenswrapper[4985]: I0127 09:36:36.879904 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/882132ec-1950-4476-bbad-f8f2acf0e117-nova-cell1-compute-config-0\") pod \"882132ec-1950-4476-bbad-f8f2acf0e117\" (UID: \"882132ec-1950-4476-bbad-f8f2acf0e117\") " Jan 27 09:36:36 crc kubenswrapper[4985]: I0127 09:36:36.880061 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9jqg\" (UniqueName: \"kubernetes.io/projected/882132ec-1950-4476-bbad-f8f2acf0e117-kube-api-access-k9jqg\") pod \"882132ec-1950-4476-bbad-f8f2acf0e117\" (UID: \"882132ec-1950-4476-bbad-f8f2acf0e117\") " Jan 27 09:36:36 crc kubenswrapper[4985]: I0127 09:36:36.880140 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/882132ec-1950-4476-bbad-f8f2acf0e117-ssh-key-openstack-edpm-ipam\") pod \"882132ec-1950-4476-bbad-f8f2acf0e117\" (UID: \"882132ec-1950-4476-bbad-f8f2acf0e117\") " Jan 27 09:36:36 crc kubenswrapper[4985]: 
I0127 09:36:36.880230 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/882132ec-1950-4476-bbad-f8f2acf0e117-nova-extra-config-0\") pod \"882132ec-1950-4476-bbad-f8f2acf0e117\" (UID: \"882132ec-1950-4476-bbad-f8f2acf0e117\") " Jan 27 09:36:36 crc kubenswrapper[4985]: I0127 09:36:36.880323 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/882132ec-1950-4476-bbad-f8f2acf0e117-nova-migration-ssh-key-1\") pod \"882132ec-1950-4476-bbad-f8f2acf0e117\" (UID: \"882132ec-1950-4476-bbad-f8f2acf0e117\") " Jan 27 09:36:36 crc kubenswrapper[4985]: I0127 09:36:36.880418 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/882132ec-1950-4476-bbad-f8f2acf0e117-inventory\") pod \"882132ec-1950-4476-bbad-f8f2acf0e117\" (UID: \"882132ec-1950-4476-bbad-f8f2acf0e117\") " Jan 27 09:36:36 crc kubenswrapper[4985]: I0127 09:36:36.880536 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/882132ec-1950-4476-bbad-f8f2acf0e117-nova-combined-ca-bundle\") pod \"882132ec-1950-4476-bbad-f8f2acf0e117\" (UID: \"882132ec-1950-4476-bbad-f8f2acf0e117\") " Jan 27 09:36:36 crc kubenswrapper[4985]: I0127 09:36:36.885423 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/882132ec-1950-4476-bbad-f8f2acf0e117-kube-api-access-k9jqg" (OuterVolumeSpecName: "kube-api-access-k9jqg") pod "882132ec-1950-4476-bbad-f8f2acf0e117" (UID: "882132ec-1950-4476-bbad-f8f2acf0e117"). InnerVolumeSpecName "kube-api-access-k9jqg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:36:36 crc kubenswrapper[4985]: I0127 09:36:36.888249 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/882132ec-1950-4476-bbad-f8f2acf0e117-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "882132ec-1950-4476-bbad-f8f2acf0e117" (UID: "882132ec-1950-4476-bbad-f8f2acf0e117"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:36:36 crc kubenswrapper[4985]: I0127 09:36:36.910816 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/882132ec-1950-4476-bbad-f8f2acf0e117-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "882132ec-1950-4476-bbad-f8f2acf0e117" (UID: "882132ec-1950-4476-bbad-f8f2acf0e117"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:36:36 crc kubenswrapper[4985]: I0127 09:36:36.911217 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/882132ec-1950-4476-bbad-f8f2acf0e117-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "882132ec-1950-4476-bbad-f8f2acf0e117" (UID: "882132ec-1950-4476-bbad-f8f2acf0e117"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:36:36 crc kubenswrapper[4985]: I0127 09:36:36.911467 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/882132ec-1950-4476-bbad-f8f2acf0e117-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "882132ec-1950-4476-bbad-f8f2acf0e117" (UID: "882132ec-1950-4476-bbad-f8f2acf0e117"). InnerVolumeSpecName "nova-migration-ssh-key-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:36:36 crc kubenswrapper[4985]: I0127 09:36:36.918859 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/882132ec-1950-4476-bbad-f8f2acf0e117-inventory" (OuterVolumeSpecName: "inventory") pod "882132ec-1950-4476-bbad-f8f2acf0e117" (UID: "882132ec-1950-4476-bbad-f8f2acf0e117"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:36:36 crc kubenswrapper[4985]: I0127 09:36:36.921469 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/882132ec-1950-4476-bbad-f8f2acf0e117-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "882132ec-1950-4476-bbad-f8f2acf0e117" (UID: "882132ec-1950-4476-bbad-f8f2acf0e117"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:36:36 crc kubenswrapper[4985]: I0127 09:36:36.922464 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/882132ec-1950-4476-bbad-f8f2acf0e117-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "882132ec-1950-4476-bbad-f8f2acf0e117" (UID: "882132ec-1950-4476-bbad-f8f2acf0e117"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:36:36 crc kubenswrapper[4985]: I0127 09:36:36.954946 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/882132ec-1950-4476-bbad-f8f2acf0e117-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "882132ec-1950-4476-bbad-f8f2acf0e117" (UID: "882132ec-1950-4476-bbad-f8f2acf0e117"). InnerVolumeSpecName "nova-cell1-compute-config-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:36:36 crc kubenswrapper[4985]: I0127 09:36:36.982731 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k9jqg\" (UniqueName: \"kubernetes.io/projected/882132ec-1950-4476-bbad-f8f2acf0e117-kube-api-access-k9jqg\") on node \"crc\" DevicePath \"\"" Jan 27 09:36:36 crc kubenswrapper[4985]: I0127 09:36:36.982768 4985 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/882132ec-1950-4476-bbad-f8f2acf0e117-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 09:36:36 crc kubenswrapper[4985]: I0127 09:36:36.982780 4985 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/882132ec-1950-4476-bbad-f8f2acf0e117-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Jan 27 09:36:36 crc kubenswrapper[4985]: I0127 09:36:36.982789 4985 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/882132ec-1950-4476-bbad-f8f2acf0e117-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Jan 27 09:36:36 crc kubenswrapper[4985]: I0127 09:36:36.982801 4985 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/882132ec-1950-4476-bbad-f8f2acf0e117-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 09:36:36 crc kubenswrapper[4985]: I0127 09:36:36.982810 4985 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/882132ec-1950-4476-bbad-f8f2acf0e117-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:36:36 crc kubenswrapper[4985]: I0127 09:36:36.982819 4985 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/882132ec-1950-4476-bbad-f8f2acf0e117-nova-cell1-compute-config-1\") on node 
\"crc\" DevicePath \"\"" Jan 27 09:36:36 crc kubenswrapper[4985]: I0127 09:36:36.982827 4985 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/882132ec-1950-4476-bbad-f8f2acf0e117-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Jan 27 09:36:36 crc kubenswrapper[4985]: I0127 09:36:36.982836 4985 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/882132ec-1950-4476-bbad-f8f2acf0e117-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Jan 27 09:36:37 crc kubenswrapper[4985]: I0127 09:36:37.323484 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-svz7g" event={"ID":"882132ec-1950-4476-bbad-f8f2acf0e117","Type":"ContainerDied","Data":"8ec1b1fa65744016208c14f4ad23e430cae5664fa015a09210e9b4ef5bb91561"} Jan 27 09:36:37 crc kubenswrapper[4985]: I0127 09:36:37.323537 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ec1b1fa65744016208c14f4ad23e430cae5664fa015a09210e9b4ef5bb91561" Jan 27 09:36:37 crc kubenswrapper[4985]: I0127 09:36:37.323591 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-svz7g" Jan 27 09:36:37 crc kubenswrapper[4985]: I0127 09:36:37.432619 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cncpv"] Jan 27 09:36:37 crc kubenswrapper[4985]: E0127 09:36:37.433089 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="882132ec-1950-4476-bbad-f8f2acf0e117" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 27 09:36:37 crc kubenswrapper[4985]: I0127 09:36:37.433114 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="882132ec-1950-4476-bbad-f8f2acf0e117" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 27 09:36:37 crc kubenswrapper[4985]: I0127 09:36:37.433498 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="882132ec-1950-4476-bbad-f8f2acf0e117" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 27 09:36:37 crc kubenswrapper[4985]: I0127 09:36:37.434151 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cncpv" Jan 27 09:36:37 crc kubenswrapper[4985]: I0127 09:36:37.438220 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 09:36:37 crc kubenswrapper[4985]: I0127 09:36:37.438226 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 09:36:37 crc kubenswrapper[4985]: I0127 09:36:37.438625 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 09:36:37 crc kubenswrapper[4985]: I0127 09:36:37.438757 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-s87fp" Jan 27 09:36:37 crc kubenswrapper[4985]: I0127 09:36:37.442640 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Jan 27 09:36:37 crc kubenswrapper[4985]: I0127 09:36:37.447774 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cncpv"] Jan 27 09:36:37 crc kubenswrapper[4985]: I0127 09:36:37.491455 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e1648956-4ef8-425b-afd7-573f09da0342-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cncpv\" (UID: \"e1648956-4ef8-425b-afd7-573f09da0342\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cncpv" Jan 27 09:36:37 crc kubenswrapper[4985]: I0127 09:36:37.491952 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e1648956-4ef8-425b-afd7-573f09da0342-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cncpv\" (UID: 
\"e1648956-4ef8-425b-afd7-573f09da0342\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cncpv" Jan 27 09:36:37 crc kubenswrapper[4985]: I0127 09:36:37.492015 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/e1648956-4ef8-425b-afd7-573f09da0342-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cncpv\" (UID: \"e1648956-4ef8-425b-afd7-573f09da0342\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cncpv" Jan 27 09:36:37 crc kubenswrapper[4985]: I0127 09:36:37.492064 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/e1648956-4ef8-425b-afd7-573f09da0342-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cncpv\" (UID: \"e1648956-4ef8-425b-afd7-573f09da0342\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cncpv" Jan 27 09:36:37 crc kubenswrapper[4985]: I0127 09:36:37.492121 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1648956-4ef8-425b-afd7-573f09da0342-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cncpv\" (UID: \"e1648956-4ef8-425b-afd7-573f09da0342\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cncpv" Jan 27 09:36:37 crc kubenswrapper[4985]: I0127 09:36:37.492240 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/e1648956-4ef8-425b-afd7-573f09da0342-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cncpv\" (UID: \"e1648956-4ef8-425b-afd7-573f09da0342\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cncpv" Jan 27 09:36:37 crc kubenswrapper[4985]: I0127 09:36:37.492277 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pb9bm\" (UniqueName: \"kubernetes.io/projected/e1648956-4ef8-425b-afd7-573f09da0342-kube-api-access-pb9bm\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cncpv\" (UID: \"e1648956-4ef8-425b-afd7-573f09da0342\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cncpv" Jan 27 09:36:37 crc kubenswrapper[4985]: I0127 09:36:37.594048 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/e1648956-4ef8-425b-afd7-573f09da0342-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cncpv\" (UID: \"e1648956-4ef8-425b-afd7-573f09da0342\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cncpv" Jan 27 09:36:37 crc kubenswrapper[4985]: I0127 09:36:37.594094 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pb9bm\" (UniqueName: \"kubernetes.io/projected/e1648956-4ef8-425b-afd7-573f09da0342-kube-api-access-pb9bm\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cncpv\" (UID: \"e1648956-4ef8-425b-afd7-573f09da0342\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cncpv" Jan 27 09:36:37 crc kubenswrapper[4985]: I0127 09:36:37.594176 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e1648956-4ef8-425b-afd7-573f09da0342-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cncpv\" (UID: \"e1648956-4ef8-425b-afd7-573f09da0342\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cncpv" Jan 27 09:36:37 crc kubenswrapper[4985]: I0127 09:36:37.594980 4985 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e1648956-4ef8-425b-afd7-573f09da0342-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cncpv\" (UID: \"e1648956-4ef8-425b-afd7-573f09da0342\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cncpv" Jan 27 09:36:37 crc kubenswrapper[4985]: I0127 09:36:37.595006 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/e1648956-4ef8-425b-afd7-573f09da0342-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cncpv\" (UID: \"e1648956-4ef8-425b-afd7-573f09da0342\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cncpv" Jan 27 09:36:37 crc kubenswrapper[4985]: I0127 09:36:37.595028 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/e1648956-4ef8-425b-afd7-573f09da0342-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cncpv\" (UID: \"e1648956-4ef8-425b-afd7-573f09da0342\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cncpv" Jan 27 09:36:37 crc kubenswrapper[4985]: I0127 09:36:37.595048 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1648956-4ef8-425b-afd7-573f09da0342-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cncpv\" (UID: \"e1648956-4ef8-425b-afd7-573f09da0342\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cncpv" Jan 27 09:36:37 crc kubenswrapper[4985]: I0127 09:36:37.598010 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/e1648956-4ef8-425b-afd7-573f09da0342-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cncpv\" (UID: \"e1648956-4ef8-425b-afd7-573f09da0342\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cncpv" Jan 27 09:36:37 crc kubenswrapper[4985]: I0127 09:36:37.598667 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1648956-4ef8-425b-afd7-573f09da0342-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cncpv\" (UID: \"e1648956-4ef8-425b-afd7-573f09da0342\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cncpv" Jan 27 09:36:37 crc kubenswrapper[4985]: I0127 09:36:37.599844 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/e1648956-4ef8-425b-afd7-573f09da0342-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cncpv\" (UID: \"e1648956-4ef8-425b-afd7-573f09da0342\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cncpv" Jan 27 09:36:37 crc kubenswrapper[4985]: I0127 09:36:37.600017 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/e1648956-4ef8-425b-afd7-573f09da0342-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cncpv\" (UID: \"e1648956-4ef8-425b-afd7-573f09da0342\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cncpv" Jan 27 09:36:37 crc kubenswrapper[4985]: I0127 09:36:37.600647 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/e1648956-4ef8-425b-afd7-573f09da0342-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cncpv\" (UID: 
\"e1648956-4ef8-425b-afd7-573f09da0342\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cncpv" Jan 27 09:36:37 crc kubenswrapper[4985]: I0127 09:36:37.602493 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e1648956-4ef8-425b-afd7-573f09da0342-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cncpv\" (UID: \"e1648956-4ef8-425b-afd7-573f09da0342\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cncpv" Jan 27 09:36:37 crc kubenswrapper[4985]: I0127 09:36:37.614608 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pb9bm\" (UniqueName: \"kubernetes.io/projected/e1648956-4ef8-425b-afd7-573f09da0342-kube-api-access-pb9bm\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cncpv\" (UID: \"e1648956-4ef8-425b-afd7-573f09da0342\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cncpv" Jan 27 09:36:37 crc kubenswrapper[4985]: I0127 09:36:37.766640 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cncpv" Jan 27 09:36:38 crc kubenswrapper[4985]: I0127 09:36:38.748886 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cncpv"] Jan 27 09:36:39 crc kubenswrapper[4985]: I0127 09:36:39.448876 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cncpv" event={"ID":"e1648956-4ef8-425b-afd7-573f09da0342","Type":"ContainerStarted","Data":"a514df933ff30d0d0405230d35545d4dda3ddbe663e73c0ad89a24f101b110b4"} Jan 27 09:36:40 crc kubenswrapper[4985]: I0127 09:36:40.476589 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cncpv" event={"ID":"e1648956-4ef8-425b-afd7-573f09da0342","Type":"ContainerStarted","Data":"7bff8df54d23a26f988827a4a196f3f71a1b3297b9bf09c4a30163e4441a6e94"} Jan 27 09:36:40 crc kubenswrapper[4985]: I0127 09:36:40.489446 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cncpv" podStartSLOduration=2.886625627 podStartE2EDuration="3.489427003s" podCreationTimestamp="2026-01-27 09:36:37 +0000 UTC" firstStartedPulling="2026-01-27 09:36:38.762619888 +0000 UTC m=+2583.053714729" lastFinishedPulling="2026-01-27 09:36:39.365421264 +0000 UTC m=+2583.656516105" observedRunningTime="2026-01-27 09:36:40.483657205 +0000 UTC m=+2584.774752066" watchObservedRunningTime="2026-01-27 09:36:40.489427003 +0000 UTC m=+2584.780521844" Jan 27 09:36:44 crc kubenswrapper[4985]: I0127 09:36:44.452620 4985 scope.go:117] "RemoveContainer" containerID="4c19ffbb95917b101eb869fb60d90d110bbc1e093534505aa6a36ea876935d33" Jan 27 09:36:44 crc kubenswrapper[4985]: E0127 09:36:44.453281 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp9n5_openshift-machine-config-operator(c066dd2f-48d4-4f4f-935d-0e772678e610)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" Jan 27 09:36:46 crc kubenswrapper[4985]: I0127 09:36:46.750235 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-k4shj"] Jan 27 09:36:46 crc kubenswrapper[4985]: I0127 09:36:46.752983 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k4shj" Jan 27 09:36:46 crc kubenswrapper[4985]: I0127 09:36:46.774276 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-k4shj"] Jan 27 09:36:46 crc kubenswrapper[4985]: I0127 09:36:46.784378 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b73fb7ad-d563-44ee-9131-c331163e464c-catalog-content\") pod \"certified-operators-k4shj\" (UID: \"b73fb7ad-d563-44ee-9131-c331163e464c\") " pod="openshift-marketplace/certified-operators-k4shj" Jan 27 09:36:46 crc kubenswrapper[4985]: I0127 09:36:46.784705 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxhwd\" (UniqueName: \"kubernetes.io/projected/b73fb7ad-d563-44ee-9131-c331163e464c-kube-api-access-jxhwd\") pod \"certified-operators-k4shj\" (UID: \"b73fb7ad-d563-44ee-9131-c331163e464c\") " pod="openshift-marketplace/certified-operators-k4shj" Jan 27 09:36:46 crc kubenswrapper[4985]: I0127 09:36:46.784905 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b73fb7ad-d563-44ee-9131-c331163e464c-utilities\") pod \"certified-operators-k4shj\" (UID: 
\"b73fb7ad-d563-44ee-9131-c331163e464c\") " pod="openshift-marketplace/certified-operators-k4shj" Jan 27 09:36:46 crc kubenswrapper[4985]: I0127 09:36:46.886493 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxhwd\" (UniqueName: \"kubernetes.io/projected/b73fb7ad-d563-44ee-9131-c331163e464c-kube-api-access-jxhwd\") pod \"certified-operators-k4shj\" (UID: \"b73fb7ad-d563-44ee-9131-c331163e464c\") " pod="openshift-marketplace/certified-operators-k4shj" Jan 27 09:36:46 crc kubenswrapper[4985]: I0127 09:36:46.886808 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b73fb7ad-d563-44ee-9131-c331163e464c-utilities\") pod \"certified-operators-k4shj\" (UID: \"b73fb7ad-d563-44ee-9131-c331163e464c\") " pod="openshift-marketplace/certified-operators-k4shj" Jan 27 09:36:46 crc kubenswrapper[4985]: I0127 09:36:46.886838 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b73fb7ad-d563-44ee-9131-c331163e464c-catalog-content\") pod \"certified-operators-k4shj\" (UID: \"b73fb7ad-d563-44ee-9131-c331163e464c\") " pod="openshift-marketplace/certified-operators-k4shj" Jan 27 09:36:46 crc kubenswrapper[4985]: I0127 09:36:46.887327 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b73fb7ad-d563-44ee-9131-c331163e464c-catalog-content\") pod \"certified-operators-k4shj\" (UID: \"b73fb7ad-d563-44ee-9131-c331163e464c\") " pod="openshift-marketplace/certified-operators-k4shj" Jan 27 09:36:46 crc kubenswrapper[4985]: I0127 09:36:46.889767 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b73fb7ad-d563-44ee-9131-c331163e464c-utilities\") pod \"certified-operators-k4shj\" (UID: \"b73fb7ad-d563-44ee-9131-c331163e464c\") 
" pod="openshift-marketplace/certified-operators-k4shj" Jan 27 09:36:46 crc kubenswrapper[4985]: I0127 09:36:46.916375 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxhwd\" (UniqueName: \"kubernetes.io/projected/b73fb7ad-d563-44ee-9131-c331163e464c-kube-api-access-jxhwd\") pod \"certified-operators-k4shj\" (UID: \"b73fb7ad-d563-44ee-9131-c331163e464c\") " pod="openshift-marketplace/certified-operators-k4shj" Jan 27 09:36:47 crc kubenswrapper[4985]: I0127 09:36:47.097639 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k4shj" Jan 27 09:36:47 crc kubenswrapper[4985]: I0127 09:36:47.591452 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-k4shj"] Jan 27 09:36:48 crc kubenswrapper[4985]: I0127 09:36:48.541221 4985 generic.go:334] "Generic (PLEG): container finished" podID="b73fb7ad-d563-44ee-9131-c331163e464c" containerID="2dda3762b31153fa33152d0a4cc7b41f22ecb89e0337ca05a7cd417337620c9d" exitCode=0 Jan 27 09:36:48 crc kubenswrapper[4985]: I0127 09:36:48.541630 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k4shj" event={"ID":"b73fb7ad-d563-44ee-9131-c331163e464c","Type":"ContainerDied","Data":"2dda3762b31153fa33152d0a4cc7b41f22ecb89e0337ca05a7cd417337620c9d"} Jan 27 09:36:48 crc kubenswrapper[4985]: I0127 09:36:48.545032 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k4shj" event={"ID":"b73fb7ad-d563-44ee-9131-c331163e464c","Type":"ContainerStarted","Data":"5809e2b342f81de899eb8a63b7dcab7abbce2391d9637c724afddb1d5a2e706a"} Jan 27 09:36:49 crc kubenswrapper[4985]: I0127 09:36:49.554889 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k4shj" 
event={"ID":"b73fb7ad-d563-44ee-9131-c331163e464c","Type":"ContainerStarted","Data":"88528ff10785359f9396b76b90d9114d90b3e52016a99aeb228111c135d59bda"} Jan 27 09:36:50 crc kubenswrapper[4985]: I0127 09:36:50.568483 4985 generic.go:334] "Generic (PLEG): container finished" podID="b73fb7ad-d563-44ee-9131-c331163e464c" containerID="88528ff10785359f9396b76b90d9114d90b3e52016a99aeb228111c135d59bda" exitCode=0 Jan 27 09:36:50 crc kubenswrapper[4985]: I0127 09:36:50.568567 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k4shj" event={"ID":"b73fb7ad-d563-44ee-9131-c331163e464c","Type":"ContainerDied","Data":"88528ff10785359f9396b76b90d9114d90b3e52016a99aeb228111c135d59bda"} Jan 27 09:36:51 crc kubenswrapper[4985]: I0127 09:36:51.584325 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k4shj" event={"ID":"b73fb7ad-d563-44ee-9131-c331163e464c","Type":"ContainerStarted","Data":"e09c6aa0dca49fa49f625997c4f84b5355bf0319f138abf5736d5c25665a7c79"} Jan 27 09:36:51 crc kubenswrapper[4985]: I0127 09:36:51.608928 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-k4shj" podStartSLOduration=3.19681638 podStartE2EDuration="5.608907321s" podCreationTimestamp="2026-01-27 09:36:46 +0000 UTC" firstStartedPulling="2026-01-27 09:36:48.543191178 +0000 UTC m=+2592.834286019" lastFinishedPulling="2026-01-27 09:36:50.955282109 +0000 UTC m=+2595.246376960" observedRunningTime="2026-01-27 09:36:51.607700979 +0000 UTC m=+2595.898795850" watchObservedRunningTime="2026-01-27 09:36:51.608907321 +0000 UTC m=+2595.900002162" Jan 27 09:36:57 crc kubenswrapper[4985]: I0127 09:36:57.098055 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-k4shj" Jan 27 09:36:57 crc kubenswrapper[4985]: I0127 09:36:57.098479 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-k4shj" Jan 27 09:36:57 crc kubenswrapper[4985]: I0127 09:36:57.142034 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-k4shj" Jan 27 09:36:57 crc kubenswrapper[4985]: I0127 09:36:57.689464 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-k4shj" Jan 27 09:36:57 crc kubenswrapper[4985]: I0127 09:36:57.759614 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-k4shj"] Jan 27 09:36:59 crc kubenswrapper[4985]: I0127 09:36:59.451863 4985 scope.go:117] "RemoveContainer" containerID="4c19ffbb95917b101eb869fb60d90d110bbc1e093534505aa6a36ea876935d33" Jan 27 09:36:59 crc kubenswrapper[4985]: E0127 09:36:59.452146 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp9n5_openshift-machine-config-operator(c066dd2f-48d4-4f4f-935d-0e772678e610)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" Jan 27 09:36:59 crc kubenswrapper[4985]: I0127 09:36:59.648486 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-k4shj" podUID="b73fb7ad-d563-44ee-9131-c331163e464c" containerName="registry-server" containerID="cri-o://e09c6aa0dca49fa49f625997c4f84b5355bf0319f138abf5736d5c25665a7c79" gracePeriod=2 Jan 27 09:36:59 crc kubenswrapper[4985]: I0127 09:36:59.792092 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jppfd"] Jan 27 09:36:59 crc kubenswrapper[4985]: I0127 09:36:59.795046 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jppfd" Jan 27 09:36:59 crc kubenswrapper[4985]: I0127 09:36:59.805315 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jppfd"] Jan 27 09:36:59 crc kubenswrapper[4985]: I0127 09:36:59.866199 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55a90bca-629e-478e-a2a9-732f49445640-utilities\") pod \"redhat-operators-jppfd\" (UID: \"55a90bca-629e-478e-a2a9-732f49445640\") " pod="openshift-marketplace/redhat-operators-jppfd" Jan 27 09:36:59 crc kubenswrapper[4985]: I0127 09:36:59.866299 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55a90bca-629e-478e-a2a9-732f49445640-catalog-content\") pod \"redhat-operators-jppfd\" (UID: \"55a90bca-629e-478e-a2a9-732f49445640\") " pod="openshift-marketplace/redhat-operators-jppfd" Jan 27 09:36:59 crc kubenswrapper[4985]: I0127 09:36:59.866610 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ws2gq\" (UniqueName: \"kubernetes.io/projected/55a90bca-629e-478e-a2a9-732f49445640-kube-api-access-ws2gq\") pod \"redhat-operators-jppfd\" (UID: \"55a90bca-629e-478e-a2a9-732f49445640\") " pod="openshift-marketplace/redhat-operators-jppfd" Jan 27 09:36:59 crc kubenswrapper[4985]: I0127 09:36:59.970172 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ws2gq\" (UniqueName: \"kubernetes.io/projected/55a90bca-629e-478e-a2a9-732f49445640-kube-api-access-ws2gq\") pod \"redhat-operators-jppfd\" (UID: \"55a90bca-629e-478e-a2a9-732f49445640\") " pod="openshift-marketplace/redhat-operators-jppfd" Jan 27 09:36:59 crc kubenswrapper[4985]: I0127 09:36:59.970234 4985 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55a90bca-629e-478e-a2a9-732f49445640-utilities\") pod \"redhat-operators-jppfd\" (UID: \"55a90bca-629e-478e-a2a9-732f49445640\") " pod="openshift-marketplace/redhat-operators-jppfd" Jan 27 09:36:59 crc kubenswrapper[4985]: I0127 09:36:59.970268 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55a90bca-629e-478e-a2a9-732f49445640-catalog-content\") pod \"redhat-operators-jppfd\" (UID: \"55a90bca-629e-478e-a2a9-732f49445640\") " pod="openshift-marketplace/redhat-operators-jppfd" Jan 27 09:36:59 crc kubenswrapper[4985]: I0127 09:36:59.970833 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55a90bca-629e-478e-a2a9-732f49445640-catalog-content\") pod \"redhat-operators-jppfd\" (UID: \"55a90bca-629e-478e-a2a9-732f49445640\") " pod="openshift-marketplace/redhat-operators-jppfd" Jan 27 09:36:59 crc kubenswrapper[4985]: I0127 09:36:59.971050 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55a90bca-629e-478e-a2a9-732f49445640-utilities\") pod \"redhat-operators-jppfd\" (UID: \"55a90bca-629e-478e-a2a9-732f49445640\") " pod="openshift-marketplace/redhat-operators-jppfd" Jan 27 09:37:00 crc kubenswrapper[4985]: I0127 09:37:00.024798 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ws2gq\" (UniqueName: \"kubernetes.io/projected/55a90bca-629e-478e-a2a9-732f49445640-kube-api-access-ws2gq\") pod \"redhat-operators-jppfd\" (UID: \"55a90bca-629e-478e-a2a9-732f49445640\") " pod="openshift-marketplace/redhat-operators-jppfd" Jan 27 09:37:00 crc kubenswrapper[4985]: I0127 09:37:00.089654 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-k4shj" Jan 27 09:37:00 crc kubenswrapper[4985]: I0127 09:37:00.127641 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jppfd" Jan 27 09:37:00 crc kubenswrapper[4985]: I0127 09:37:00.171879 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b73fb7ad-d563-44ee-9131-c331163e464c-utilities\") pod \"b73fb7ad-d563-44ee-9131-c331163e464c\" (UID: \"b73fb7ad-d563-44ee-9131-c331163e464c\") " Jan 27 09:37:00 crc kubenswrapper[4985]: I0127 09:37:00.171928 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jxhwd\" (UniqueName: \"kubernetes.io/projected/b73fb7ad-d563-44ee-9131-c331163e464c-kube-api-access-jxhwd\") pod \"b73fb7ad-d563-44ee-9131-c331163e464c\" (UID: \"b73fb7ad-d563-44ee-9131-c331163e464c\") " Jan 27 09:37:00 crc kubenswrapper[4985]: I0127 09:37:00.172054 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b73fb7ad-d563-44ee-9131-c331163e464c-catalog-content\") pod \"b73fb7ad-d563-44ee-9131-c331163e464c\" (UID: \"b73fb7ad-d563-44ee-9131-c331163e464c\") " Jan 27 09:37:00 crc kubenswrapper[4985]: I0127 09:37:00.174867 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b73fb7ad-d563-44ee-9131-c331163e464c-utilities" (OuterVolumeSpecName: "utilities") pod "b73fb7ad-d563-44ee-9131-c331163e464c" (UID: "b73fb7ad-d563-44ee-9131-c331163e464c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:37:00 crc kubenswrapper[4985]: I0127 09:37:00.179284 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b73fb7ad-d563-44ee-9131-c331163e464c-kube-api-access-jxhwd" (OuterVolumeSpecName: "kube-api-access-jxhwd") pod "b73fb7ad-d563-44ee-9131-c331163e464c" (UID: "b73fb7ad-d563-44ee-9131-c331163e464c"). InnerVolumeSpecName "kube-api-access-jxhwd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:37:00 crc kubenswrapper[4985]: I0127 09:37:00.245302 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b73fb7ad-d563-44ee-9131-c331163e464c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b73fb7ad-d563-44ee-9131-c331163e464c" (UID: "b73fb7ad-d563-44ee-9131-c331163e464c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:37:00 crc kubenswrapper[4985]: I0127 09:37:00.274115 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b73fb7ad-d563-44ee-9131-c331163e464c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 09:37:00 crc kubenswrapper[4985]: I0127 09:37:00.274143 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b73fb7ad-d563-44ee-9131-c331163e464c-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 09:37:00 crc kubenswrapper[4985]: I0127 09:37:00.274175 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jxhwd\" (UniqueName: \"kubernetes.io/projected/b73fb7ad-d563-44ee-9131-c331163e464c-kube-api-access-jxhwd\") on node \"crc\" DevicePath \"\"" Jan 27 09:37:00 crc kubenswrapper[4985]: I0127 09:37:00.395360 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jppfd"] Jan 27 09:37:00 crc kubenswrapper[4985]: W0127 
09:37:00.408647 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod55a90bca_629e_478e_a2a9_732f49445640.slice/crio-09c3a898df04129a234f44a49cab1956066174e8d2a2eb5ab48fbff866733ceb WatchSource:0}: Error finding container 09c3a898df04129a234f44a49cab1956066174e8d2a2eb5ab48fbff866733ceb: Status 404 returned error can't find the container with id 09c3a898df04129a234f44a49cab1956066174e8d2a2eb5ab48fbff866733ceb Jan 27 09:37:00 crc kubenswrapper[4985]: I0127 09:37:00.658097 4985 generic.go:334] "Generic (PLEG): container finished" podID="55a90bca-629e-478e-a2a9-732f49445640" containerID="c86e61468ea1cfd63d1798be9e380f0044a4de694a1e8ab14498993f55592acd" exitCode=0 Jan 27 09:37:00 crc kubenswrapper[4985]: I0127 09:37:00.658152 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jppfd" event={"ID":"55a90bca-629e-478e-a2a9-732f49445640","Type":"ContainerDied","Data":"c86e61468ea1cfd63d1798be9e380f0044a4de694a1e8ab14498993f55592acd"} Jan 27 09:37:00 crc kubenswrapper[4985]: I0127 09:37:00.658208 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jppfd" event={"ID":"55a90bca-629e-478e-a2a9-732f49445640","Type":"ContainerStarted","Data":"09c3a898df04129a234f44a49cab1956066174e8d2a2eb5ab48fbff866733ceb"} Jan 27 09:37:00 crc kubenswrapper[4985]: I0127 09:37:00.662549 4985 generic.go:334] "Generic (PLEG): container finished" podID="b73fb7ad-d563-44ee-9131-c331163e464c" containerID="e09c6aa0dca49fa49f625997c4f84b5355bf0319f138abf5736d5c25665a7c79" exitCode=0 Jan 27 09:37:00 crc kubenswrapper[4985]: I0127 09:37:00.662580 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k4shj" event={"ID":"b73fb7ad-d563-44ee-9131-c331163e464c","Type":"ContainerDied","Data":"e09c6aa0dca49fa49f625997c4f84b5355bf0319f138abf5736d5c25665a7c79"} Jan 27 09:37:00 crc 
kubenswrapper[4985]: I0127 09:37:00.662598 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k4shj" event={"ID":"b73fb7ad-d563-44ee-9131-c331163e464c","Type":"ContainerDied","Data":"5809e2b342f81de899eb8a63b7dcab7abbce2391d9637c724afddb1d5a2e706a"} Jan 27 09:37:00 crc kubenswrapper[4985]: I0127 09:37:00.662614 4985 scope.go:117] "RemoveContainer" containerID="e09c6aa0dca49fa49f625997c4f84b5355bf0319f138abf5736d5c25665a7c79" Jan 27 09:37:00 crc kubenswrapper[4985]: I0127 09:37:00.662654 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k4shj" Jan 27 09:37:00 crc kubenswrapper[4985]: I0127 09:37:00.685828 4985 scope.go:117] "RemoveContainer" containerID="88528ff10785359f9396b76b90d9114d90b3e52016a99aeb228111c135d59bda" Jan 27 09:37:00 crc kubenswrapper[4985]: I0127 09:37:00.702755 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-k4shj"] Jan 27 09:37:00 crc kubenswrapper[4985]: I0127 09:37:00.724236 4985 scope.go:117] "RemoveContainer" containerID="2dda3762b31153fa33152d0a4cc7b41f22ecb89e0337ca05a7cd417337620c9d" Jan 27 09:37:00 crc kubenswrapper[4985]: I0127 09:37:00.729831 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-k4shj"] Jan 27 09:37:00 crc kubenswrapper[4985]: I0127 09:37:00.760761 4985 scope.go:117] "RemoveContainer" containerID="e09c6aa0dca49fa49f625997c4f84b5355bf0319f138abf5736d5c25665a7c79" Jan 27 09:37:00 crc kubenswrapper[4985]: E0127 09:37:00.762116 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e09c6aa0dca49fa49f625997c4f84b5355bf0319f138abf5736d5c25665a7c79\": container with ID starting with e09c6aa0dca49fa49f625997c4f84b5355bf0319f138abf5736d5c25665a7c79 not found: ID does not exist" 
containerID="e09c6aa0dca49fa49f625997c4f84b5355bf0319f138abf5736d5c25665a7c79" Jan 27 09:37:00 crc kubenswrapper[4985]: I0127 09:37:00.762158 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e09c6aa0dca49fa49f625997c4f84b5355bf0319f138abf5736d5c25665a7c79"} err="failed to get container status \"e09c6aa0dca49fa49f625997c4f84b5355bf0319f138abf5736d5c25665a7c79\": rpc error: code = NotFound desc = could not find container \"e09c6aa0dca49fa49f625997c4f84b5355bf0319f138abf5736d5c25665a7c79\": container with ID starting with e09c6aa0dca49fa49f625997c4f84b5355bf0319f138abf5736d5c25665a7c79 not found: ID does not exist" Jan 27 09:37:00 crc kubenswrapper[4985]: I0127 09:37:00.762181 4985 scope.go:117] "RemoveContainer" containerID="88528ff10785359f9396b76b90d9114d90b3e52016a99aeb228111c135d59bda" Jan 27 09:37:00 crc kubenswrapper[4985]: E0127 09:37:00.764995 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88528ff10785359f9396b76b90d9114d90b3e52016a99aeb228111c135d59bda\": container with ID starting with 88528ff10785359f9396b76b90d9114d90b3e52016a99aeb228111c135d59bda not found: ID does not exist" containerID="88528ff10785359f9396b76b90d9114d90b3e52016a99aeb228111c135d59bda" Jan 27 09:37:00 crc kubenswrapper[4985]: I0127 09:37:00.765047 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88528ff10785359f9396b76b90d9114d90b3e52016a99aeb228111c135d59bda"} err="failed to get container status \"88528ff10785359f9396b76b90d9114d90b3e52016a99aeb228111c135d59bda\": rpc error: code = NotFound desc = could not find container \"88528ff10785359f9396b76b90d9114d90b3e52016a99aeb228111c135d59bda\": container with ID starting with 88528ff10785359f9396b76b90d9114d90b3e52016a99aeb228111c135d59bda not found: ID does not exist" Jan 27 09:37:00 crc kubenswrapper[4985]: I0127 09:37:00.765070 4985 scope.go:117] 
"RemoveContainer" containerID="2dda3762b31153fa33152d0a4cc7b41f22ecb89e0337ca05a7cd417337620c9d" Jan 27 09:37:00 crc kubenswrapper[4985]: E0127 09:37:00.765596 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2dda3762b31153fa33152d0a4cc7b41f22ecb89e0337ca05a7cd417337620c9d\": container with ID starting with 2dda3762b31153fa33152d0a4cc7b41f22ecb89e0337ca05a7cd417337620c9d not found: ID does not exist" containerID="2dda3762b31153fa33152d0a4cc7b41f22ecb89e0337ca05a7cd417337620c9d" Jan 27 09:37:00 crc kubenswrapper[4985]: I0127 09:37:00.765651 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2dda3762b31153fa33152d0a4cc7b41f22ecb89e0337ca05a7cd417337620c9d"} err="failed to get container status \"2dda3762b31153fa33152d0a4cc7b41f22ecb89e0337ca05a7cd417337620c9d\": rpc error: code = NotFound desc = could not find container \"2dda3762b31153fa33152d0a4cc7b41f22ecb89e0337ca05a7cd417337620c9d\": container with ID starting with 2dda3762b31153fa33152d0a4cc7b41f22ecb89e0337ca05a7cd417337620c9d not found: ID does not exist" Jan 27 09:37:01 crc kubenswrapper[4985]: I0127 09:37:01.676236 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jppfd" event={"ID":"55a90bca-629e-478e-a2a9-732f49445640","Type":"ContainerStarted","Data":"667d64ec8e1da17eab74618b0a2903930b2ef6be7640653db29ec7866628c4e5"} Jan 27 09:37:02 crc kubenswrapper[4985]: I0127 09:37:02.473769 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b73fb7ad-d563-44ee-9131-c331163e464c" path="/var/lib/kubelet/pods/b73fb7ad-d563-44ee-9131-c331163e464c/volumes" Jan 27 09:37:02 crc kubenswrapper[4985]: I0127 09:37:02.695452 4985 generic.go:334] "Generic (PLEG): container finished" podID="55a90bca-629e-478e-a2a9-732f49445640" containerID="667d64ec8e1da17eab74618b0a2903930b2ef6be7640653db29ec7866628c4e5" exitCode=0 Jan 27 09:37:02 
crc kubenswrapper[4985]: I0127 09:37:02.695560 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jppfd" event={"ID":"55a90bca-629e-478e-a2a9-732f49445640","Type":"ContainerDied","Data":"667d64ec8e1da17eab74618b0a2903930b2ef6be7640653db29ec7866628c4e5"} Jan 27 09:37:03 crc kubenswrapper[4985]: I0127 09:37:03.708678 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jppfd" event={"ID":"55a90bca-629e-478e-a2a9-732f49445640","Type":"ContainerStarted","Data":"46df8c97caaebb8330785091d945a8d6be288633be1ca4e60e1a05c9a0a07f74"} Jan 27 09:37:03 crc kubenswrapper[4985]: I0127 09:37:03.737633 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jppfd" podStartSLOduration=2.110820734 podStartE2EDuration="4.737607668s" podCreationTimestamp="2026-01-27 09:36:59 +0000 UTC" firstStartedPulling="2026-01-27 09:37:00.661489032 +0000 UTC m=+2604.952583873" lastFinishedPulling="2026-01-27 09:37:03.288275966 +0000 UTC m=+2607.579370807" observedRunningTime="2026-01-27 09:37:03.733824174 +0000 UTC m=+2608.024919015" watchObservedRunningTime="2026-01-27 09:37:03.737607668 +0000 UTC m=+2608.028702509" Jan 27 09:37:10 crc kubenswrapper[4985]: I0127 09:37:10.129747 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jppfd" Jan 27 09:37:10 crc kubenswrapper[4985]: I0127 09:37:10.130834 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jppfd" Jan 27 09:37:10 crc kubenswrapper[4985]: I0127 09:37:10.203832 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jppfd" Jan 27 09:37:10 crc kubenswrapper[4985]: I0127 09:37:10.827892 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-operators-jppfd" Jan 27 09:37:10 crc kubenswrapper[4985]: I0127 09:37:10.881798 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jppfd"] Jan 27 09:37:12 crc kubenswrapper[4985]: I0127 09:37:12.452840 4985 scope.go:117] "RemoveContainer" containerID="4c19ffbb95917b101eb869fb60d90d110bbc1e093534505aa6a36ea876935d33" Jan 27 09:37:12 crc kubenswrapper[4985]: E0127 09:37:12.453545 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp9n5_openshift-machine-config-operator(c066dd2f-48d4-4f4f-935d-0e772678e610)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" Jan 27 09:37:12 crc kubenswrapper[4985]: I0127 09:37:12.804095 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jppfd" podUID="55a90bca-629e-478e-a2a9-732f49445640" containerName="registry-server" containerID="cri-o://46df8c97caaebb8330785091d945a8d6be288633be1ca4e60e1a05c9a0a07f74" gracePeriod=2 Jan 27 09:37:13 crc kubenswrapper[4985]: I0127 09:37:13.820440 4985 generic.go:334] "Generic (PLEG): container finished" podID="55a90bca-629e-478e-a2a9-732f49445640" containerID="46df8c97caaebb8330785091d945a8d6be288633be1ca4e60e1a05c9a0a07f74" exitCode=0 Jan 27 09:37:13 crc kubenswrapper[4985]: I0127 09:37:13.820563 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jppfd" event={"ID":"55a90bca-629e-478e-a2a9-732f49445640","Type":"ContainerDied","Data":"46df8c97caaebb8330785091d945a8d6be288633be1ca4e60e1a05c9a0a07f74"} Jan 27 09:37:14 crc kubenswrapper[4985]: I0127 09:37:14.392195 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jppfd" Jan 27 09:37:14 crc kubenswrapper[4985]: I0127 09:37:14.488906 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55a90bca-629e-478e-a2a9-732f49445640-utilities\") pod \"55a90bca-629e-478e-a2a9-732f49445640\" (UID: \"55a90bca-629e-478e-a2a9-732f49445640\") " Jan 27 09:37:14 crc kubenswrapper[4985]: I0127 09:37:14.489217 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55a90bca-629e-478e-a2a9-732f49445640-catalog-content\") pod \"55a90bca-629e-478e-a2a9-732f49445640\" (UID: \"55a90bca-629e-478e-a2a9-732f49445640\") " Jan 27 09:37:14 crc kubenswrapper[4985]: I0127 09:37:14.489344 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ws2gq\" (UniqueName: \"kubernetes.io/projected/55a90bca-629e-478e-a2a9-732f49445640-kube-api-access-ws2gq\") pod \"55a90bca-629e-478e-a2a9-732f49445640\" (UID: \"55a90bca-629e-478e-a2a9-732f49445640\") " Jan 27 09:37:14 crc kubenswrapper[4985]: I0127 09:37:14.491007 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55a90bca-629e-478e-a2a9-732f49445640-utilities" (OuterVolumeSpecName: "utilities") pod "55a90bca-629e-478e-a2a9-732f49445640" (UID: "55a90bca-629e-478e-a2a9-732f49445640"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:37:14 crc kubenswrapper[4985]: I0127 09:37:14.495470 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55a90bca-629e-478e-a2a9-732f49445640-kube-api-access-ws2gq" (OuterVolumeSpecName: "kube-api-access-ws2gq") pod "55a90bca-629e-478e-a2a9-732f49445640" (UID: "55a90bca-629e-478e-a2a9-732f49445640"). InnerVolumeSpecName "kube-api-access-ws2gq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:37:14 crc kubenswrapper[4985]: I0127 09:37:14.593000 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ws2gq\" (UniqueName: \"kubernetes.io/projected/55a90bca-629e-478e-a2a9-732f49445640-kube-api-access-ws2gq\") on node \"crc\" DevicePath \"\"" Jan 27 09:37:14 crc kubenswrapper[4985]: I0127 09:37:14.593041 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55a90bca-629e-478e-a2a9-732f49445640-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 09:37:14 crc kubenswrapper[4985]: I0127 09:37:14.641606 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55a90bca-629e-478e-a2a9-732f49445640-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "55a90bca-629e-478e-a2a9-732f49445640" (UID: "55a90bca-629e-478e-a2a9-732f49445640"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:37:14 crc kubenswrapper[4985]: I0127 09:37:14.694439 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55a90bca-629e-478e-a2a9-732f49445640-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 09:37:14 crc kubenswrapper[4985]: I0127 09:37:14.833075 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jppfd" event={"ID":"55a90bca-629e-478e-a2a9-732f49445640","Type":"ContainerDied","Data":"09c3a898df04129a234f44a49cab1956066174e8d2a2eb5ab48fbff866733ceb"} Jan 27 09:37:14 crc kubenswrapper[4985]: I0127 09:37:14.833536 4985 scope.go:117] "RemoveContainer" containerID="46df8c97caaebb8330785091d945a8d6be288633be1ca4e60e1a05c9a0a07f74" Jan 27 09:37:14 crc kubenswrapper[4985]: I0127 09:37:14.833149 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jppfd" Jan 27 09:37:14 crc kubenswrapper[4985]: I0127 09:37:14.879866 4985 scope.go:117] "RemoveContainer" containerID="667d64ec8e1da17eab74618b0a2903930b2ef6be7640653db29ec7866628c4e5" Jan 27 09:37:14 crc kubenswrapper[4985]: I0127 09:37:14.903637 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jppfd"] Jan 27 09:37:14 crc kubenswrapper[4985]: I0127 09:37:14.918849 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jppfd"] Jan 27 09:37:14 crc kubenswrapper[4985]: I0127 09:37:14.929340 4985 scope.go:117] "RemoveContainer" containerID="c86e61468ea1cfd63d1798be9e380f0044a4de694a1e8ab14498993f55592acd" Jan 27 09:37:16 crc kubenswrapper[4985]: I0127 09:37:16.470868 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55a90bca-629e-478e-a2a9-732f49445640" path="/var/lib/kubelet/pods/55a90bca-629e-478e-a2a9-732f49445640/volumes" Jan 27 09:37:27 crc kubenswrapper[4985]: I0127 09:37:27.452756 4985 scope.go:117] "RemoveContainer" containerID="4c19ffbb95917b101eb869fb60d90d110bbc1e093534505aa6a36ea876935d33" Jan 27 09:37:27 crc kubenswrapper[4985]: E0127 09:37:27.453984 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp9n5_openshift-machine-config-operator(c066dd2f-48d4-4f4f-935d-0e772678e610)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" Jan 27 09:37:38 crc kubenswrapper[4985]: I0127 09:37:38.452914 4985 scope.go:117] "RemoveContainer" containerID="4c19ffbb95917b101eb869fb60d90d110bbc1e093534505aa6a36ea876935d33" Jan 27 09:37:38 crc kubenswrapper[4985]: E0127 09:37:38.454007 4985 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp9n5_openshift-machine-config-operator(c066dd2f-48d4-4f4f-935d-0e772678e610)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" Jan 27 09:37:50 crc kubenswrapper[4985]: I0127 09:37:50.452728 4985 scope.go:117] "RemoveContainer" containerID="4c19ffbb95917b101eb869fb60d90d110bbc1e093534505aa6a36ea876935d33" Jan 27 09:37:50 crc kubenswrapper[4985]: E0127 09:37:50.454042 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp9n5_openshift-machine-config-operator(c066dd2f-48d4-4f4f-935d-0e772678e610)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" Jan 27 09:38:02 crc kubenswrapper[4985]: I0127 09:38:02.452260 4985 scope.go:117] "RemoveContainer" containerID="4c19ffbb95917b101eb869fb60d90d110bbc1e093534505aa6a36ea876935d33" Jan 27 09:38:02 crc kubenswrapper[4985]: E0127 09:38:02.453131 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp9n5_openshift-machine-config-operator(c066dd2f-48d4-4f4f-935d-0e772678e610)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" Jan 27 09:38:17 crc kubenswrapper[4985]: I0127 09:38:17.452089 4985 scope.go:117] "RemoveContainer" containerID="4c19ffbb95917b101eb869fb60d90d110bbc1e093534505aa6a36ea876935d33" Jan 27 09:38:17 crc kubenswrapper[4985]: E0127 09:38:17.452777 4985 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp9n5_openshift-machine-config-operator(c066dd2f-48d4-4f4f-935d-0e772678e610)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" Jan 27 09:38:29 crc kubenswrapper[4985]: I0127 09:38:29.452033 4985 scope.go:117] "RemoveContainer" containerID="4c19ffbb95917b101eb869fb60d90d110bbc1e093534505aa6a36ea876935d33" Jan 27 09:38:29 crc kubenswrapper[4985]: E0127 09:38:29.453086 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp9n5_openshift-machine-config-operator(c066dd2f-48d4-4f4f-935d-0e772678e610)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" Jan 27 09:38:43 crc kubenswrapper[4985]: I0127 09:38:43.452317 4985 scope.go:117] "RemoveContainer" containerID="4c19ffbb95917b101eb869fb60d90d110bbc1e093534505aa6a36ea876935d33" Jan 27 09:38:44 crc kubenswrapper[4985]: I0127 09:38:44.193784 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" event={"ID":"c066dd2f-48d4-4f4f-935d-0e772678e610","Type":"ContainerStarted","Data":"5801812ba7b1ddbd191c16674ba87e9d6a4ebe89965a9b3900525a37925380ca"} Jan 27 09:39:05 crc kubenswrapper[4985]: I0127 09:39:05.743180 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-h2czw"] Jan 27 09:39:05 crc kubenswrapper[4985]: E0127 09:39:05.744985 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b73fb7ad-d563-44ee-9131-c331163e464c" containerName="registry-server" Jan 27 09:39:05 crc 
kubenswrapper[4985]: I0127 09:39:05.745006 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="b73fb7ad-d563-44ee-9131-c331163e464c" containerName="registry-server" Jan 27 09:39:05 crc kubenswrapper[4985]: E0127 09:39:05.745019 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b73fb7ad-d563-44ee-9131-c331163e464c" containerName="extract-content" Jan 27 09:39:05 crc kubenswrapper[4985]: I0127 09:39:05.745028 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="b73fb7ad-d563-44ee-9131-c331163e464c" containerName="extract-content" Jan 27 09:39:05 crc kubenswrapper[4985]: E0127 09:39:05.745103 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55a90bca-629e-478e-a2a9-732f49445640" containerName="registry-server" Jan 27 09:39:05 crc kubenswrapper[4985]: I0127 09:39:05.745115 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="55a90bca-629e-478e-a2a9-732f49445640" containerName="registry-server" Jan 27 09:39:05 crc kubenswrapper[4985]: E0127 09:39:05.745134 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55a90bca-629e-478e-a2a9-732f49445640" containerName="extract-content" Jan 27 09:39:05 crc kubenswrapper[4985]: I0127 09:39:05.745143 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="55a90bca-629e-478e-a2a9-732f49445640" containerName="extract-content" Jan 27 09:39:05 crc kubenswrapper[4985]: E0127 09:39:05.745157 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b73fb7ad-d563-44ee-9131-c331163e464c" containerName="extract-utilities" Jan 27 09:39:05 crc kubenswrapper[4985]: I0127 09:39:05.745166 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="b73fb7ad-d563-44ee-9131-c331163e464c" containerName="extract-utilities" Jan 27 09:39:05 crc kubenswrapper[4985]: E0127 09:39:05.745191 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55a90bca-629e-478e-a2a9-732f49445640" containerName="extract-utilities" Jan 27 09:39:05 crc 
kubenswrapper[4985]: I0127 09:39:05.745199 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="55a90bca-629e-478e-a2a9-732f49445640" containerName="extract-utilities" Jan 27 09:39:05 crc kubenswrapper[4985]: I0127 09:39:05.745922 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="55a90bca-629e-478e-a2a9-732f49445640" containerName="registry-server" Jan 27 09:39:05 crc kubenswrapper[4985]: I0127 09:39:05.745941 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="b73fb7ad-d563-44ee-9131-c331163e464c" containerName="registry-server" Jan 27 09:39:05 crc kubenswrapper[4985]: I0127 09:39:05.750480 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h2czw" Jan 27 09:39:05 crc kubenswrapper[4985]: I0127 09:39:05.759933 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-h2czw"] Jan 27 09:39:05 crc kubenswrapper[4985]: I0127 09:39:05.779425 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfbzx\" (UniqueName: \"kubernetes.io/projected/9fd51127-2829-4c68-aa59-12281188a245-kube-api-access-sfbzx\") pod \"redhat-marketplace-h2czw\" (UID: \"9fd51127-2829-4c68-aa59-12281188a245\") " pod="openshift-marketplace/redhat-marketplace-h2czw" Jan 27 09:39:05 crc kubenswrapper[4985]: I0127 09:39:05.779487 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9fd51127-2829-4c68-aa59-12281188a245-catalog-content\") pod \"redhat-marketplace-h2czw\" (UID: \"9fd51127-2829-4c68-aa59-12281188a245\") " pod="openshift-marketplace/redhat-marketplace-h2czw" Jan 27 09:39:05 crc kubenswrapper[4985]: I0127 09:39:05.779505 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/9fd51127-2829-4c68-aa59-12281188a245-utilities\") pod \"redhat-marketplace-h2czw\" (UID: \"9fd51127-2829-4c68-aa59-12281188a245\") " pod="openshift-marketplace/redhat-marketplace-h2czw" Jan 27 09:39:05 crc kubenswrapper[4985]: I0127 09:39:05.880483 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfbzx\" (UniqueName: \"kubernetes.io/projected/9fd51127-2829-4c68-aa59-12281188a245-kube-api-access-sfbzx\") pod \"redhat-marketplace-h2czw\" (UID: \"9fd51127-2829-4c68-aa59-12281188a245\") " pod="openshift-marketplace/redhat-marketplace-h2czw" Jan 27 09:39:05 crc kubenswrapper[4985]: I0127 09:39:05.880561 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9fd51127-2829-4c68-aa59-12281188a245-catalog-content\") pod \"redhat-marketplace-h2czw\" (UID: \"9fd51127-2829-4c68-aa59-12281188a245\") " pod="openshift-marketplace/redhat-marketplace-h2czw" Jan 27 09:39:05 crc kubenswrapper[4985]: I0127 09:39:05.880582 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9fd51127-2829-4c68-aa59-12281188a245-utilities\") pod \"redhat-marketplace-h2czw\" (UID: \"9fd51127-2829-4c68-aa59-12281188a245\") " pod="openshift-marketplace/redhat-marketplace-h2czw" Jan 27 09:39:05 crc kubenswrapper[4985]: I0127 09:39:05.881156 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9fd51127-2829-4c68-aa59-12281188a245-utilities\") pod \"redhat-marketplace-h2czw\" (UID: \"9fd51127-2829-4c68-aa59-12281188a245\") " pod="openshift-marketplace/redhat-marketplace-h2czw" Jan 27 09:39:05 crc kubenswrapper[4985]: I0127 09:39:05.881217 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/9fd51127-2829-4c68-aa59-12281188a245-catalog-content\") pod \"redhat-marketplace-h2czw\" (UID: \"9fd51127-2829-4c68-aa59-12281188a245\") " pod="openshift-marketplace/redhat-marketplace-h2czw" Jan 27 09:39:05 crc kubenswrapper[4985]: I0127 09:39:05.906146 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfbzx\" (UniqueName: \"kubernetes.io/projected/9fd51127-2829-4c68-aa59-12281188a245-kube-api-access-sfbzx\") pod \"redhat-marketplace-h2czw\" (UID: \"9fd51127-2829-4c68-aa59-12281188a245\") " pod="openshift-marketplace/redhat-marketplace-h2czw" Jan 27 09:39:06 crc kubenswrapper[4985]: I0127 09:39:06.090867 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h2czw" Jan 27 09:39:06 crc kubenswrapper[4985]: I0127 09:39:06.655872 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-h2czw"] Jan 27 09:39:07 crc kubenswrapper[4985]: I0127 09:39:07.414768 4985 generic.go:334] "Generic (PLEG): container finished" podID="9fd51127-2829-4c68-aa59-12281188a245" containerID="d991376bdcce2fa6111dc2a3ed3586acc1867e757bbfcfa0cc34b5741cef767b" exitCode=0 Jan 27 09:39:07 crc kubenswrapper[4985]: I0127 09:39:07.414885 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h2czw" event={"ID":"9fd51127-2829-4c68-aa59-12281188a245","Type":"ContainerDied","Data":"d991376bdcce2fa6111dc2a3ed3586acc1867e757bbfcfa0cc34b5741cef767b"} Jan 27 09:39:07 crc kubenswrapper[4985]: I0127 09:39:07.415111 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h2czw" event={"ID":"9fd51127-2829-4c68-aa59-12281188a245","Type":"ContainerStarted","Data":"07ab99a68f808d9612760faae98b50ab13ebd69108a26db60b50af5fedd1b0ce"} Jan 27 09:39:07 crc kubenswrapper[4985]: I0127 09:39:07.418503 4985 provider.go:102] Refreshing cache for provider: 
*credentialprovider.defaultDockerConfigProvider Jan 27 09:39:08 crc kubenswrapper[4985]: I0127 09:39:08.430424 4985 generic.go:334] "Generic (PLEG): container finished" podID="9fd51127-2829-4c68-aa59-12281188a245" containerID="b1e93fe010668fa921225c58b184facdbdc723a2467e419f238f136e174cee00" exitCode=0 Jan 27 09:39:08 crc kubenswrapper[4985]: I0127 09:39:08.430559 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h2czw" event={"ID":"9fd51127-2829-4c68-aa59-12281188a245","Type":"ContainerDied","Data":"b1e93fe010668fa921225c58b184facdbdc723a2467e419f238f136e174cee00"} Jan 27 09:39:09 crc kubenswrapper[4985]: I0127 09:39:09.443694 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h2czw" event={"ID":"9fd51127-2829-4c68-aa59-12281188a245","Type":"ContainerStarted","Data":"c639bbcc21de1bb207e76808b2cf4bb3780dbd7ddefd353d3df61d4cb0a76d85"} Jan 27 09:39:09 crc kubenswrapper[4985]: I0127 09:39:09.479276 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-h2czw" podStartSLOduration=3.066058341 podStartE2EDuration="4.479248755s" podCreationTimestamp="2026-01-27 09:39:05 +0000 UTC" firstStartedPulling="2026-01-27 09:39:07.418269298 +0000 UTC m=+2731.709364139" lastFinishedPulling="2026-01-27 09:39:08.831459672 +0000 UTC m=+2733.122554553" observedRunningTime="2026-01-27 09:39:09.469958142 +0000 UTC m=+2733.761053003" watchObservedRunningTime="2026-01-27 09:39:09.479248755 +0000 UTC m=+2733.770343596" Jan 27 09:39:16 crc kubenswrapper[4985]: I0127 09:39:16.091702 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-h2czw" Jan 27 09:39:16 crc kubenswrapper[4985]: I0127 09:39:16.094685 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-h2czw" Jan 27 09:39:16 crc kubenswrapper[4985]: 
I0127 09:39:16.149696 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-h2czw" Jan 27 09:39:16 crc kubenswrapper[4985]: I0127 09:39:16.579375 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-h2czw" Jan 27 09:39:16 crc kubenswrapper[4985]: I0127 09:39:16.647194 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-h2czw"] Jan 27 09:39:18 crc kubenswrapper[4985]: I0127 09:39:18.533229 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-h2czw" podUID="9fd51127-2829-4c68-aa59-12281188a245" containerName="registry-server" containerID="cri-o://c639bbcc21de1bb207e76808b2cf4bb3780dbd7ddefd353d3df61d4cb0a76d85" gracePeriod=2 Jan 27 09:39:19 crc kubenswrapper[4985]: I0127 09:39:19.092136 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h2czw" Jan 27 09:39:19 crc kubenswrapper[4985]: I0127 09:39:19.184921 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9fd51127-2829-4c68-aa59-12281188a245-catalog-content\") pod \"9fd51127-2829-4c68-aa59-12281188a245\" (UID: \"9fd51127-2829-4c68-aa59-12281188a245\") " Jan 27 09:39:19 crc kubenswrapper[4985]: I0127 09:39:19.185064 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sfbzx\" (UniqueName: \"kubernetes.io/projected/9fd51127-2829-4c68-aa59-12281188a245-kube-api-access-sfbzx\") pod \"9fd51127-2829-4c68-aa59-12281188a245\" (UID: \"9fd51127-2829-4c68-aa59-12281188a245\") " Jan 27 09:39:19 crc kubenswrapper[4985]: I0127 09:39:19.185390 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/9fd51127-2829-4c68-aa59-12281188a245-utilities\") pod \"9fd51127-2829-4c68-aa59-12281188a245\" (UID: \"9fd51127-2829-4c68-aa59-12281188a245\") " Jan 27 09:39:19 crc kubenswrapper[4985]: I0127 09:39:19.186579 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9fd51127-2829-4c68-aa59-12281188a245-utilities" (OuterVolumeSpecName: "utilities") pod "9fd51127-2829-4c68-aa59-12281188a245" (UID: "9fd51127-2829-4c68-aa59-12281188a245"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:39:19 crc kubenswrapper[4985]: I0127 09:39:19.194666 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fd51127-2829-4c68-aa59-12281188a245-kube-api-access-sfbzx" (OuterVolumeSpecName: "kube-api-access-sfbzx") pod "9fd51127-2829-4c68-aa59-12281188a245" (UID: "9fd51127-2829-4c68-aa59-12281188a245"). InnerVolumeSpecName "kube-api-access-sfbzx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:39:19 crc kubenswrapper[4985]: I0127 09:39:19.235472 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9fd51127-2829-4c68-aa59-12281188a245-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9fd51127-2829-4c68-aa59-12281188a245" (UID: "9fd51127-2829-4c68-aa59-12281188a245"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:39:19 crc kubenswrapper[4985]: I0127 09:39:19.287484 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sfbzx\" (UniqueName: \"kubernetes.io/projected/9fd51127-2829-4c68-aa59-12281188a245-kube-api-access-sfbzx\") on node \"crc\" DevicePath \"\"" Jan 27 09:39:19 crc kubenswrapper[4985]: I0127 09:39:19.287532 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9fd51127-2829-4c68-aa59-12281188a245-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 09:39:19 crc kubenswrapper[4985]: I0127 09:39:19.287541 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9fd51127-2829-4c68-aa59-12281188a245-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 09:39:19 crc kubenswrapper[4985]: I0127 09:39:19.544216 4985 generic.go:334] "Generic (PLEG): container finished" podID="9fd51127-2829-4c68-aa59-12281188a245" containerID="c639bbcc21de1bb207e76808b2cf4bb3780dbd7ddefd353d3df61d4cb0a76d85" exitCode=0 Jan 27 09:39:19 crc kubenswrapper[4985]: I0127 09:39:19.544278 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h2czw" event={"ID":"9fd51127-2829-4c68-aa59-12281188a245","Type":"ContainerDied","Data":"c639bbcc21de1bb207e76808b2cf4bb3780dbd7ddefd353d3df61d4cb0a76d85"} Jan 27 09:39:19 crc kubenswrapper[4985]: I0127 09:39:19.544292 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h2czw" Jan 27 09:39:19 crc kubenswrapper[4985]: I0127 09:39:19.544319 4985 scope.go:117] "RemoveContainer" containerID="c639bbcc21de1bb207e76808b2cf4bb3780dbd7ddefd353d3df61d4cb0a76d85" Jan 27 09:39:19 crc kubenswrapper[4985]: I0127 09:39:19.544309 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h2czw" event={"ID":"9fd51127-2829-4c68-aa59-12281188a245","Type":"ContainerDied","Data":"07ab99a68f808d9612760faae98b50ab13ebd69108a26db60b50af5fedd1b0ce"} Jan 27 09:39:19 crc kubenswrapper[4985]: I0127 09:39:19.578461 4985 scope.go:117] "RemoveContainer" containerID="b1e93fe010668fa921225c58b184facdbdc723a2467e419f238f136e174cee00" Jan 27 09:39:19 crc kubenswrapper[4985]: I0127 09:39:19.592269 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-h2czw"] Jan 27 09:39:19 crc kubenswrapper[4985]: I0127 09:39:19.604062 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-h2czw"] Jan 27 09:39:19 crc kubenswrapper[4985]: I0127 09:39:19.616204 4985 scope.go:117] "RemoveContainer" containerID="d991376bdcce2fa6111dc2a3ed3586acc1867e757bbfcfa0cc34b5741cef767b" Jan 27 09:39:19 crc kubenswrapper[4985]: I0127 09:39:19.684294 4985 scope.go:117] "RemoveContainer" containerID="c639bbcc21de1bb207e76808b2cf4bb3780dbd7ddefd353d3df61d4cb0a76d85" Jan 27 09:39:19 crc kubenswrapper[4985]: E0127 09:39:19.684929 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c639bbcc21de1bb207e76808b2cf4bb3780dbd7ddefd353d3df61d4cb0a76d85\": container with ID starting with c639bbcc21de1bb207e76808b2cf4bb3780dbd7ddefd353d3df61d4cb0a76d85 not found: ID does not exist" containerID="c639bbcc21de1bb207e76808b2cf4bb3780dbd7ddefd353d3df61d4cb0a76d85" Jan 27 09:39:19 crc kubenswrapper[4985]: I0127 09:39:19.684986 4985 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c639bbcc21de1bb207e76808b2cf4bb3780dbd7ddefd353d3df61d4cb0a76d85"} err="failed to get container status \"c639bbcc21de1bb207e76808b2cf4bb3780dbd7ddefd353d3df61d4cb0a76d85\": rpc error: code = NotFound desc = could not find container \"c639bbcc21de1bb207e76808b2cf4bb3780dbd7ddefd353d3df61d4cb0a76d85\": container with ID starting with c639bbcc21de1bb207e76808b2cf4bb3780dbd7ddefd353d3df61d4cb0a76d85 not found: ID does not exist" Jan 27 09:39:19 crc kubenswrapper[4985]: I0127 09:39:19.685020 4985 scope.go:117] "RemoveContainer" containerID="b1e93fe010668fa921225c58b184facdbdc723a2467e419f238f136e174cee00" Jan 27 09:39:19 crc kubenswrapper[4985]: E0127 09:39:19.685291 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1e93fe010668fa921225c58b184facdbdc723a2467e419f238f136e174cee00\": container with ID starting with b1e93fe010668fa921225c58b184facdbdc723a2467e419f238f136e174cee00 not found: ID does not exist" containerID="b1e93fe010668fa921225c58b184facdbdc723a2467e419f238f136e174cee00" Jan 27 09:39:19 crc kubenswrapper[4985]: I0127 09:39:19.685339 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1e93fe010668fa921225c58b184facdbdc723a2467e419f238f136e174cee00"} err="failed to get container status \"b1e93fe010668fa921225c58b184facdbdc723a2467e419f238f136e174cee00\": rpc error: code = NotFound desc = could not find container \"b1e93fe010668fa921225c58b184facdbdc723a2467e419f238f136e174cee00\": container with ID starting with b1e93fe010668fa921225c58b184facdbdc723a2467e419f238f136e174cee00 not found: ID does not exist" Jan 27 09:39:19 crc kubenswrapper[4985]: I0127 09:39:19.685359 4985 scope.go:117] "RemoveContainer" containerID="d991376bdcce2fa6111dc2a3ed3586acc1867e757bbfcfa0cc34b5741cef767b" Jan 27 09:39:19 crc kubenswrapper[4985]: E0127 
09:39:19.685731 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d991376bdcce2fa6111dc2a3ed3586acc1867e757bbfcfa0cc34b5741cef767b\": container with ID starting with d991376bdcce2fa6111dc2a3ed3586acc1867e757bbfcfa0cc34b5741cef767b not found: ID does not exist" containerID="d991376bdcce2fa6111dc2a3ed3586acc1867e757bbfcfa0cc34b5741cef767b" Jan 27 09:39:19 crc kubenswrapper[4985]: I0127 09:39:19.685754 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d991376bdcce2fa6111dc2a3ed3586acc1867e757bbfcfa0cc34b5741cef767b"} err="failed to get container status \"d991376bdcce2fa6111dc2a3ed3586acc1867e757bbfcfa0cc34b5741cef767b\": rpc error: code = NotFound desc = could not find container \"d991376bdcce2fa6111dc2a3ed3586acc1867e757bbfcfa0cc34b5741cef767b\": container with ID starting with d991376bdcce2fa6111dc2a3ed3586acc1867e757bbfcfa0cc34b5741cef767b not found: ID does not exist" Jan 27 09:39:20 crc kubenswrapper[4985]: I0127 09:39:20.470632 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fd51127-2829-4c68-aa59-12281188a245" path="/var/lib/kubelet/pods/9fd51127-2829-4c68-aa59-12281188a245/volumes" Jan 27 09:39:36 crc kubenswrapper[4985]: I0127 09:39:36.734103 4985 generic.go:334] "Generic (PLEG): container finished" podID="e1648956-4ef8-425b-afd7-573f09da0342" containerID="7bff8df54d23a26f988827a4a196f3f71a1b3297b9bf09c4a30163e4441a6e94" exitCode=0 Jan 27 09:39:36 crc kubenswrapper[4985]: I0127 09:39:36.734186 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cncpv" event={"ID":"e1648956-4ef8-425b-afd7-573f09da0342","Type":"ContainerDied","Data":"7bff8df54d23a26f988827a4a196f3f71a1b3297b9bf09c4a30163e4441a6e94"} Jan 27 09:39:38 crc kubenswrapper[4985]: I0127 09:39:38.368099 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cncpv" Jan 27 09:39:38 crc kubenswrapper[4985]: I0127 09:39:38.507249 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e1648956-4ef8-425b-afd7-573f09da0342-ssh-key-openstack-edpm-ipam\") pod \"e1648956-4ef8-425b-afd7-573f09da0342\" (UID: \"e1648956-4ef8-425b-afd7-573f09da0342\") " Jan 27 09:39:38 crc kubenswrapper[4985]: I0127 09:39:38.507858 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e1648956-4ef8-425b-afd7-573f09da0342-inventory\") pod \"e1648956-4ef8-425b-afd7-573f09da0342\" (UID: \"e1648956-4ef8-425b-afd7-573f09da0342\") " Jan 27 09:39:38 crc kubenswrapper[4985]: I0127 09:39:38.507930 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pb9bm\" (UniqueName: \"kubernetes.io/projected/e1648956-4ef8-425b-afd7-573f09da0342-kube-api-access-pb9bm\") pod \"e1648956-4ef8-425b-afd7-573f09da0342\" (UID: \"e1648956-4ef8-425b-afd7-573f09da0342\") " Jan 27 09:39:38 crc kubenswrapper[4985]: I0127 09:39:38.507971 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/e1648956-4ef8-425b-afd7-573f09da0342-ceilometer-compute-config-data-1\") pod \"e1648956-4ef8-425b-afd7-573f09da0342\" (UID: \"e1648956-4ef8-425b-afd7-573f09da0342\") " Jan 27 09:39:38 crc kubenswrapper[4985]: I0127 09:39:38.508011 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1648956-4ef8-425b-afd7-573f09da0342-telemetry-combined-ca-bundle\") pod \"e1648956-4ef8-425b-afd7-573f09da0342\" (UID: \"e1648956-4ef8-425b-afd7-573f09da0342\") " Jan 27 09:39:38 crc kubenswrapper[4985]: I0127 
09:39:38.508103 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/e1648956-4ef8-425b-afd7-573f09da0342-ceilometer-compute-config-data-0\") pod \"e1648956-4ef8-425b-afd7-573f09da0342\" (UID: \"e1648956-4ef8-425b-afd7-573f09da0342\") " Jan 27 09:39:38 crc kubenswrapper[4985]: I0127 09:39:38.508756 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/e1648956-4ef8-425b-afd7-573f09da0342-ceilometer-compute-config-data-2\") pod \"e1648956-4ef8-425b-afd7-573f09da0342\" (UID: \"e1648956-4ef8-425b-afd7-573f09da0342\") " Jan 27 09:39:38 crc kubenswrapper[4985]: I0127 09:39:38.517996 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1648956-4ef8-425b-afd7-573f09da0342-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "e1648956-4ef8-425b-afd7-573f09da0342" (UID: "e1648956-4ef8-425b-afd7-573f09da0342"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:39:38 crc kubenswrapper[4985]: I0127 09:39:38.518100 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1648956-4ef8-425b-afd7-573f09da0342-kube-api-access-pb9bm" (OuterVolumeSpecName: "kube-api-access-pb9bm") pod "e1648956-4ef8-425b-afd7-573f09da0342" (UID: "e1648956-4ef8-425b-afd7-573f09da0342"). InnerVolumeSpecName "kube-api-access-pb9bm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:39:38 crc kubenswrapper[4985]: I0127 09:39:38.560977 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1648956-4ef8-425b-afd7-573f09da0342-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e1648956-4ef8-425b-afd7-573f09da0342" (UID: "e1648956-4ef8-425b-afd7-573f09da0342"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:39:38 crc kubenswrapper[4985]: I0127 09:39:38.562265 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1648956-4ef8-425b-afd7-573f09da0342-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "e1648956-4ef8-425b-afd7-573f09da0342" (UID: "e1648956-4ef8-425b-afd7-573f09da0342"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:39:38 crc kubenswrapper[4985]: I0127 09:39:38.565259 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1648956-4ef8-425b-afd7-573f09da0342-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "e1648956-4ef8-425b-afd7-573f09da0342" (UID: "e1648956-4ef8-425b-afd7-573f09da0342"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:39:38 crc kubenswrapper[4985]: I0127 09:39:38.576787 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1648956-4ef8-425b-afd7-573f09da0342-inventory" (OuterVolumeSpecName: "inventory") pod "e1648956-4ef8-425b-afd7-573f09da0342" (UID: "e1648956-4ef8-425b-afd7-573f09da0342"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:39:38 crc kubenswrapper[4985]: I0127 09:39:38.579237 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1648956-4ef8-425b-afd7-573f09da0342-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "e1648956-4ef8-425b-afd7-573f09da0342" (UID: "e1648956-4ef8-425b-afd7-573f09da0342"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:39:38 crc kubenswrapper[4985]: I0127 09:39:38.611201 4985 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e1648956-4ef8-425b-afd7-573f09da0342-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 09:39:38 crc kubenswrapper[4985]: I0127 09:39:38.611260 4985 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e1648956-4ef8-425b-afd7-573f09da0342-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 09:39:38 crc kubenswrapper[4985]: I0127 09:39:38.611270 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pb9bm\" (UniqueName: \"kubernetes.io/projected/e1648956-4ef8-425b-afd7-573f09da0342-kube-api-access-pb9bm\") on node \"crc\" DevicePath \"\"" Jan 27 09:39:38 crc kubenswrapper[4985]: I0127 09:39:38.611281 4985 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/e1648956-4ef8-425b-afd7-573f09da0342-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 27 09:39:38 crc kubenswrapper[4985]: I0127 09:39:38.611291 4985 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1648956-4ef8-425b-afd7-573f09da0342-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:39:38 crc 
kubenswrapper[4985]: I0127 09:39:38.611301 4985 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/e1648956-4ef8-425b-afd7-573f09da0342-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 27 09:39:38 crc kubenswrapper[4985]: I0127 09:39:38.611310 4985 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/e1648956-4ef8-425b-afd7-573f09da0342-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Jan 27 09:39:38 crc kubenswrapper[4985]: I0127 09:39:38.761020 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cncpv" event={"ID":"e1648956-4ef8-425b-afd7-573f09da0342","Type":"ContainerDied","Data":"a514df933ff30d0d0405230d35545d4dda3ddbe663e73c0ad89a24f101b110b4"} Jan 27 09:39:38 crc kubenswrapper[4985]: I0127 09:39:38.761079 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a514df933ff30d0d0405230d35545d4dda3ddbe663e73c0ad89a24f101b110b4" Jan 27 09:39:38 crc kubenswrapper[4985]: I0127 09:39:38.761159 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cncpv" Jan 27 09:40:25 crc kubenswrapper[4985]: I0127 09:40:25.451429 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Jan 27 09:40:25 crc kubenswrapper[4985]: E0127 09:40:25.453127 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fd51127-2829-4c68-aa59-12281188a245" containerName="extract-content" Jan 27 09:40:25 crc kubenswrapper[4985]: I0127 09:40:25.453152 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fd51127-2829-4c68-aa59-12281188a245" containerName="extract-content" Jan 27 09:40:25 crc kubenswrapper[4985]: E0127 09:40:25.453179 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fd51127-2829-4c68-aa59-12281188a245" containerName="extract-utilities" Jan 27 09:40:25 crc kubenswrapper[4985]: I0127 09:40:25.453191 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fd51127-2829-4c68-aa59-12281188a245" containerName="extract-utilities" Jan 27 09:40:25 crc kubenswrapper[4985]: E0127 09:40:25.453227 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fd51127-2829-4c68-aa59-12281188a245" containerName="registry-server" Jan 27 09:40:25 crc kubenswrapper[4985]: I0127 09:40:25.453239 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fd51127-2829-4c68-aa59-12281188a245" containerName="registry-server" Jan 27 09:40:25 crc kubenswrapper[4985]: E0127 09:40:25.453254 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1648956-4ef8-425b-afd7-573f09da0342" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 27 09:40:25 crc kubenswrapper[4985]: I0127 09:40:25.453267 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1648956-4ef8-425b-afd7-573f09da0342" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 27 09:40:25 crc kubenswrapper[4985]: I0127 09:40:25.453623 4985 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="e1648956-4ef8-425b-afd7-573f09da0342" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 27 09:40:25 crc kubenswrapper[4985]: I0127 09:40:25.453653 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fd51127-2829-4c68-aa59-12281188a245" containerName="registry-server" Jan 27 09:40:25 crc kubenswrapper[4985]: I0127 09:40:25.454621 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 27 09:40:25 crc kubenswrapper[4985]: I0127 09:40:25.457401 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 27 09:40:25 crc kubenswrapper[4985]: I0127 09:40:25.457459 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-68xp6" Jan 27 09:40:25 crc kubenswrapper[4985]: I0127 09:40:25.457729 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Jan 27 09:40:25 crc kubenswrapper[4985]: I0127 09:40:25.457761 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Jan 27 09:40:25 crc kubenswrapper[4985]: I0127 09:40:25.467723 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 27 09:40:25 crc kubenswrapper[4985]: I0127 09:40:25.557685 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b1f74192-c081-4846-b517-32d8d4c8245f-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"b1f74192-c081-4846-b517-32d8d4c8245f\") " pod="openstack/tempest-tests-tempest" Jan 27 09:40:25 crc kubenswrapper[4985]: I0127 09:40:25.557769 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" 
(UniqueName: \"kubernetes.io/empty-dir/b1f74192-c081-4846-b517-32d8d4c8245f-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"b1f74192-c081-4846-b517-32d8d4c8245f\") " pod="openstack/tempest-tests-tempest" Jan 27 09:40:25 crc kubenswrapper[4985]: I0127 09:40:25.557797 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest\" (UID: \"b1f74192-c081-4846-b517-32d8d4c8245f\") " pod="openstack/tempest-tests-tempest" Jan 27 09:40:25 crc kubenswrapper[4985]: I0127 09:40:25.557850 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b1f74192-c081-4846-b517-32d8d4c8245f-config-data\") pod \"tempest-tests-tempest\" (UID: \"b1f74192-c081-4846-b517-32d8d4c8245f\") " pod="openstack/tempest-tests-tempest" Jan 27 09:40:25 crc kubenswrapper[4985]: I0127 09:40:25.557974 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b1f74192-c081-4846-b517-32d8d4c8245f-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"b1f74192-c081-4846-b517-32d8d4c8245f\") " pod="openstack/tempest-tests-tempest" Jan 27 09:40:25 crc kubenswrapper[4985]: I0127 09:40:25.558000 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b1f74192-c081-4846-b517-32d8d4c8245f-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"b1f74192-c081-4846-b517-32d8d4c8245f\") " pod="openstack/tempest-tests-tempest" Jan 27 09:40:25 crc kubenswrapper[4985]: I0127 09:40:25.558078 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/secret/b1f74192-c081-4846-b517-32d8d4c8245f-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"b1f74192-c081-4846-b517-32d8d4c8245f\") " pod="openstack/tempest-tests-tempest" Jan 27 09:40:25 crc kubenswrapper[4985]: I0127 09:40:25.558146 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/b1f74192-c081-4846-b517-32d8d4c8245f-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"b1f74192-c081-4846-b517-32d8d4c8245f\") " pod="openstack/tempest-tests-tempest" Jan 27 09:40:25 crc kubenswrapper[4985]: I0127 09:40:25.558227 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mk27r\" (UniqueName: \"kubernetes.io/projected/b1f74192-c081-4846-b517-32d8d4c8245f-kube-api-access-mk27r\") pod \"tempest-tests-tempest\" (UID: \"b1f74192-c081-4846-b517-32d8d4c8245f\") " pod="openstack/tempest-tests-tempest" Jan 27 09:40:25 crc kubenswrapper[4985]: I0127 09:40:25.659900 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b1f74192-c081-4846-b517-32d8d4c8245f-config-data\") pod \"tempest-tests-tempest\" (UID: \"b1f74192-c081-4846-b517-32d8d4c8245f\") " pod="openstack/tempest-tests-tempest" Jan 27 09:40:25 crc kubenswrapper[4985]: I0127 09:40:25.660001 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b1f74192-c081-4846-b517-32d8d4c8245f-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"b1f74192-c081-4846-b517-32d8d4c8245f\") " pod="openstack/tempest-tests-tempest" Jan 27 09:40:25 crc kubenswrapper[4985]: I0127 09:40:25.660037 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: 
\"kubernetes.io/secret/b1f74192-c081-4846-b517-32d8d4c8245f-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"b1f74192-c081-4846-b517-32d8d4c8245f\") " pod="openstack/tempest-tests-tempest" Jan 27 09:40:25 crc kubenswrapper[4985]: I0127 09:40:25.660064 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/b1f74192-c081-4846-b517-32d8d4c8245f-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"b1f74192-c081-4846-b517-32d8d4c8245f\") " pod="openstack/tempest-tests-tempest" Jan 27 09:40:25 crc kubenswrapper[4985]: I0127 09:40:25.660103 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/b1f74192-c081-4846-b517-32d8d4c8245f-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"b1f74192-c081-4846-b517-32d8d4c8245f\") " pod="openstack/tempest-tests-tempest" Jan 27 09:40:25 crc kubenswrapper[4985]: I0127 09:40:25.660152 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mk27r\" (UniqueName: \"kubernetes.io/projected/b1f74192-c081-4846-b517-32d8d4c8245f-kube-api-access-mk27r\") pod \"tempest-tests-tempest\" (UID: \"b1f74192-c081-4846-b517-32d8d4c8245f\") " pod="openstack/tempest-tests-tempest" Jan 27 09:40:25 crc kubenswrapper[4985]: I0127 09:40:25.660184 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b1f74192-c081-4846-b517-32d8d4c8245f-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"b1f74192-c081-4846-b517-32d8d4c8245f\") " pod="openstack/tempest-tests-tempest" Jan 27 09:40:25 crc kubenswrapper[4985]: I0127 09:40:25.660233 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: 
\"kubernetes.io/empty-dir/b1f74192-c081-4846-b517-32d8d4c8245f-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"b1f74192-c081-4846-b517-32d8d4c8245f\") " pod="openstack/tempest-tests-tempest" Jan 27 09:40:25 crc kubenswrapper[4985]: I0127 09:40:25.660260 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest\" (UID: \"b1f74192-c081-4846-b517-32d8d4c8245f\") " pod="openstack/tempest-tests-tempest" Jan 27 09:40:25 crc kubenswrapper[4985]: I0127 09:40:25.660776 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/b1f74192-c081-4846-b517-32d8d4c8245f-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"b1f74192-c081-4846-b517-32d8d4c8245f\") " pod="openstack/tempest-tests-tempest" Jan 27 09:40:25 crc kubenswrapper[4985]: I0127 09:40:25.660875 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/b1f74192-c081-4846-b517-32d8d4c8245f-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"b1f74192-c081-4846-b517-32d8d4c8245f\") " pod="openstack/tempest-tests-tempest" Jan 27 09:40:25 crc kubenswrapper[4985]: I0127 09:40:25.660951 4985 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest\" (UID: \"b1f74192-c081-4846-b517-32d8d4c8245f\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/tempest-tests-tempest" Jan 27 09:40:25 crc kubenswrapper[4985]: I0127 09:40:25.661644 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/b1f74192-c081-4846-b517-32d8d4c8245f-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"b1f74192-c081-4846-b517-32d8d4c8245f\") " pod="openstack/tempest-tests-tempest" Jan 27 09:40:25 crc kubenswrapper[4985]: I0127 09:40:25.661720 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b1f74192-c081-4846-b517-32d8d4c8245f-config-data\") pod \"tempest-tests-tempest\" (UID: \"b1f74192-c081-4846-b517-32d8d4c8245f\") " pod="openstack/tempest-tests-tempest" Jan 27 09:40:25 crc kubenswrapper[4985]: I0127 09:40:25.672303 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b1f74192-c081-4846-b517-32d8d4c8245f-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"b1f74192-c081-4846-b517-32d8d4c8245f\") " pod="openstack/tempest-tests-tempest" Jan 27 09:40:25 crc kubenswrapper[4985]: I0127 09:40:25.672635 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/b1f74192-c081-4846-b517-32d8d4c8245f-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"b1f74192-c081-4846-b517-32d8d4c8245f\") " pod="openstack/tempest-tests-tempest" Jan 27 09:40:25 crc kubenswrapper[4985]: I0127 09:40:25.673404 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b1f74192-c081-4846-b517-32d8d4c8245f-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"b1f74192-c081-4846-b517-32d8d4c8245f\") " pod="openstack/tempest-tests-tempest" Jan 27 09:40:25 crc kubenswrapper[4985]: I0127 09:40:25.681533 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mk27r\" (UniqueName: \"kubernetes.io/projected/b1f74192-c081-4846-b517-32d8d4c8245f-kube-api-access-mk27r\") pod \"tempest-tests-tempest\" (UID: \"b1f74192-c081-4846-b517-32d8d4c8245f\") " 
pod="openstack/tempest-tests-tempest" Jan 27 09:40:25 crc kubenswrapper[4985]: I0127 09:40:25.693953 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest\" (UID: \"b1f74192-c081-4846-b517-32d8d4c8245f\") " pod="openstack/tempest-tests-tempest" Jan 27 09:40:25 crc kubenswrapper[4985]: I0127 09:40:25.814583 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 27 09:40:26 crc kubenswrapper[4985]: I0127 09:40:26.421601 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 27 09:40:27 crc kubenswrapper[4985]: I0127 09:40:27.224601 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"b1f74192-c081-4846-b517-32d8d4c8245f","Type":"ContainerStarted","Data":"da87333a28c3d5bc5f35a8eeb7467025b584690ea22f21dea6d576a5d6f85cfa"} Jan 27 09:40:53 crc kubenswrapper[4985]: E0127 09:40:53.870065 4985 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Jan 27 09:40:53 crc kubenswrapper[4985]: E0127 09:40:53.871355 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mk27r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:n
il,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(b1f74192-c081-4846-b517-32d8d4c8245f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 09:40:53 crc kubenswrapper[4985]: E0127 09:40:53.872710 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="b1f74192-c081-4846-b517-32d8d4c8245f" Jan 27 09:40:54 crc kubenswrapper[4985]: E0127 09:40:54.503381 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="b1f74192-c081-4846-b517-32d8d4c8245f" Jan 27 09:41:08 crc 
kubenswrapper[4985]: I0127 09:41:08.631242 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-l9lp6"] Jan 27 09:41:08 crc kubenswrapper[4985]: I0127 09:41:08.634030 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l9lp6" Jan 27 09:41:08 crc kubenswrapper[4985]: I0127 09:41:08.647760 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-l9lp6"] Jan 27 09:41:08 crc kubenswrapper[4985]: I0127 09:41:08.813008 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08971f8d-a467-4917-b6e6-835dec8eaac6-utilities\") pod \"community-operators-l9lp6\" (UID: \"08971f8d-a467-4917-b6e6-835dec8eaac6\") " pod="openshift-marketplace/community-operators-l9lp6" Jan 27 09:41:08 crc kubenswrapper[4985]: I0127 09:41:08.813060 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08971f8d-a467-4917-b6e6-835dec8eaac6-catalog-content\") pod \"community-operators-l9lp6\" (UID: \"08971f8d-a467-4917-b6e6-835dec8eaac6\") " pod="openshift-marketplace/community-operators-l9lp6" Jan 27 09:41:08 crc kubenswrapper[4985]: I0127 09:41:08.813209 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vmsp\" (UniqueName: \"kubernetes.io/projected/08971f8d-a467-4917-b6e6-835dec8eaac6-kube-api-access-4vmsp\") pod \"community-operators-l9lp6\" (UID: \"08971f8d-a467-4917-b6e6-835dec8eaac6\") " pod="openshift-marketplace/community-operators-l9lp6" Jan 27 09:41:08 crc kubenswrapper[4985]: I0127 09:41:08.915208 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vmsp\" (UniqueName: 
\"kubernetes.io/projected/08971f8d-a467-4917-b6e6-835dec8eaac6-kube-api-access-4vmsp\") pod \"community-operators-l9lp6\" (UID: \"08971f8d-a467-4917-b6e6-835dec8eaac6\") " pod="openshift-marketplace/community-operators-l9lp6" Jan 27 09:41:08 crc kubenswrapper[4985]: I0127 09:41:08.915620 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08971f8d-a467-4917-b6e6-835dec8eaac6-utilities\") pod \"community-operators-l9lp6\" (UID: \"08971f8d-a467-4917-b6e6-835dec8eaac6\") " pod="openshift-marketplace/community-operators-l9lp6" Jan 27 09:41:08 crc kubenswrapper[4985]: I0127 09:41:08.915744 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08971f8d-a467-4917-b6e6-835dec8eaac6-catalog-content\") pod \"community-operators-l9lp6\" (UID: \"08971f8d-a467-4917-b6e6-835dec8eaac6\") " pod="openshift-marketplace/community-operators-l9lp6" Jan 27 09:41:08 crc kubenswrapper[4985]: I0127 09:41:08.916215 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08971f8d-a467-4917-b6e6-835dec8eaac6-utilities\") pod \"community-operators-l9lp6\" (UID: \"08971f8d-a467-4917-b6e6-835dec8eaac6\") " pod="openshift-marketplace/community-operators-l9lp6" Jan 27 09:41:08 crc kubenswrapper[4985]: I0127 09:41:08.916290 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08971f8d-a467-4917-b6e6-835dec8eaac6-catalog-content\") pod \"community-operators-l9lp6\" (UID: \"08971f8d-a467-4917-b6e6-835dec8eaac6\") " pod="openshift-marketplace/community-operators-l9lp6" Jan 27 09:41:08 crc kubenswrapper[4985]: I0127 09:41:08.931537 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 27 09:41:08 crc kubenswrapper[4985]: 
I0127 09:41:08.945332 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vmsp\" (UniqueName: \"kubernetes.io/projected/08971f8d-a467-4917-b6e6-835dec8eaac6-kube-api-access-4vmsp\") pod \"community-operators-l9lp6\" (UID: \"08971f8d-a467-4917-b6e6-835dec8eaac6\") " pod="openshift-marketplace/community-operators-l9lp6" Jan 27 09:41:09 crc kubenswrapper[4985]: I0127 09:41:09.003662 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l9lp6" Jan 27 09:41:09 crc kubenswrapper[4985]: I0127 09:41:09.584977 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-l9lp6"] Jan 27 09:41:09 crc kubenswrapper[4985]: I0127 09:41:09.698710 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l9lp6" event={"ID":"08971f8d-a467-4917-b6e6-835dec8eaac6","Type":"ContainerStarted","Data":"d205505e7658ef33ba267903f741c4725e0295a00289430a79fdfc7f5134709a"} Jan 27 09:41:10 crc kubenswrapper[4985]: I0127 09:41:10.708349 4985 generic.go:334] "Generic (PLEG): container finished" podID="08971f8d-a467-4917-b6e6-835dec8eaac6" containerID="72f275352b152387dac32999604b392da7b2b931d5f2db3753034af8025ff8f2" exitCode=0 Jan 27 09:41:10 crc kubenswrapper[4985]: I0127 09:41:10.708549 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l9lp6" event={"ID":"08971f8d-a467-4917-b6e6-835dec8eaac6","Type":"ContainerDied","Data":"72f275352b152387dac32999604b392da7b2b931d5f2db3753034af8025ff8f2"} Jan 27 09:41:10 crc kubenswrapper[4985]: I0127 09:41:10.720141 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"b1f74192-c081-4846-b517-32d8d4c8245f","Type":"ContainerStarted","Data":"0f246927f1bf4d1434bba056b1bd39bc0463292d55f915c0691bdcbd757b4591"} Jan 27 09:41:10 crc kubenswrapper[4985]: I0127 09:41:10.768463 
4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.2742469419999995 podStartE2EDuration="46.768445463s" podCreationTimestamp="2026-01-27 09:40:24 +0000 UTC" firstStartedPulling="2026-01-27 09:40:26.434179359 +0000 UTC m=+2810.725274200" lastFinishedPulling="2026-01-27 09:41:08.92837787 +0000 UTC m=+2853.219472721" observedRunningTime="2026-01-27 09:41:10.761673888 +0000 UTC m=+2855.052768819" watchObservedRunningTime="2026-01-27 09:41:10.768445463 +0000 UTC m=+2855.059540304" Jan 27 09:41:11 crc kubenswrapper[4985]: I0127 09:41:11.731882 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l9lp6" event={"ID":"08971f8d-a467-4917-b6e6-835dec8eaac6","Type":"ContainerStarted","Data":"bf6bae375c6fc240ecb72a1ca574024d61789c9d1e7166c2ae6c56016a119190"} Jan 27 09:41:11 crc kubenswrapper[4985]: I0127 09:41:11.827970 4985 patch_prober.go:28] interesting pod/machine-config-daemon-lp9n5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 09:41:11 crc kubenswrapper[4985]: I0127 09:41:11.828031 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 09:41:12 crc kubenswrapper[4985]: I0127 09:41:12.750601 4985 generic.go:334] "Generic (PLEG): container finished" podID="08971f8d-a467-4917-b6e6-835dec8eaac6" containerID="bf6bae375c6fc240ecb72a1ca574024d61789c9d1e7166c2ae6c56016a119190" exitCode=0 Jan 27 09:41:12 crc kubenswrapper[4985]: I0127 09:41:12.750729 4985 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-l9lp6" event={"ID":"08971f8d-a467-4917-b6e6-835dec8eaac6","Type":"ContainerDied","Data":"bf6bae375c6fc240ecb72a1ca574024d61789c9d1e7166c2ae6c56016a119190"} Jan 27 09:41:13 crc kubenswrapper[4985]: I0127 09:41:13.766156 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l9lp6" event={"ID":"08971f8d-a467-4917-b6e6-835dec8eaac6","Type":"ContainerStarted","Data":"c200b41989a54d99a81b3a9b979c442dfc4d816be3e7a4062a15fb9ebf3f1981"} Jan 27 09:41:13 crc kubenswrapper[4985]: I0127 09:41:13.798418 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-l9lp6" podStartSLOduration=3.370670254 podStartE2EDuration="5.798390702s" podCreationTimestamp="2026-01-27 09:41:08 +0000 UTC" firstStartedPulling="2026-01-27 09:41:10.717222696 +0000 UTC m=+2855.008317577" lastFinishedPulling="2026-01-27 09:41:13.144943174 +0000 UTC m=+2857.436038025" observedRunningTime="2026-01-27 09:41:13.797189909 +0000 UTC m=+2858.088284760" watchObservedRunningTime="2026-01-27 09:41:13.798390702 +0000 UTC m=+2858.089485573" Jan 27 09:41:14 crc kubenswrapper[4985]: I0127 09:41:14.782022 4985 generic.go:334] "Generic (PLEG): container finished" podID="b1f74192-c081-4846-b517-32d8d4c8245f" containerID="0f246927f1bf4d1434bba056b1bd39bc0463292d55f915c0691bdcbd757b4591" exitCode=123 Jan 27 09:41:14 crc kubenswrapper[4985]: I0127 09:41:14.782098 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"b1f74192-c081-4846-b517-32d8d4c8245f","Type":"ContainerDied","Data":"0f246927f1bf4d1434bba056b1bd39bc0463292d55f915c0691bdcbd757b4591"} Jan 27 09:41:16 crc kubenswrapper[4985]: I0127 09:41:16.260295 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 27 09:41:16 crc kubenswrapper[4985]: I0127 09:41:16.396121 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/b1f74192-c081-4846-b517-32d8d4c8245f-test-operator-ephemeral-temporary\") pod \"b1f74192-c081-4846-b517-32d8d4c8245f\" (UID: \"b1f74192-c081-4846-b517-32d8d4c8245f\") " Jan 27 09:41:16 crc kubenswrapper[4985]: I0127 09:41:16.396196 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b1f74192-c081-4846-b517-32d8d4c8245f-openstack-config-secret\") pod \"b1f74192-c081-4846-b517-32d8d4c8245f\" (UID: \"b1f74192-c081-4846-b517-32d8d4c8245f\") " Jan 27 09:41:16 crc kubenswrapper[4985]: I0127 09:41:16.396260 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b1f74192-c081-4846-b517-32d8d4c8245f-openstack-config\") pod \"b1f74192-c081-4846-b517-32d8d4c8245f\" (UID: \"b1f74192-c081-4846-b517-32d8d4c8245f\") " Jan 27 09:41:16 crc kubenswrapper[4985]: I0127 09:41:16.396286 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b1f74192-c081-4846-b517-32d8d4c8245f-ssh-key\") pod \"b1f74192-c081-4846-b517-32d8d4c8245f\" (UID: \"b1f74192-c081-4846-b517-32d8d4c8245f\") " Jan 27 09:41:16 crc kubenswrapper[4985]: I0127 09:41:16.396598 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/b1f74192-c081-4846-b517-32d8d4c8245f-ca-certs\") pod \"b1f74192-c081-4846-b517-32d8d4c8245f\" (UID: \"b1f74192-c081-4846-b517-32d8d4c8245f\") " Jan 27 09:41:16 crc kubenswrapper[4985]: I0127 09:41:16.396641 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b1f74192-c081-4846-b517-32d8d4c8245f-config-data\") pod \"b1f74192-c081-4846-b517-32d8d4c8245f\" (UID: \"b1f74192-c081-4846-b517-32d8d4c8245f\") " Jan 27 09:41:16 crc kubenswrapper[4985]: I0127 09:41:16.396655 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1f74192-c081-4846-b517-32d8d4c8245f-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "b1f74192-c081-4846-b517-32d8d4c8245f" (UID: "b1f74192-c081-4846-b517-32d8d4c8245f"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:41:16 crc kubenswrapper[4985]: I0127 09:41:16.396679 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/b1f74192-c081-4846-b517-32d8d4c8245f-test-operator-ephemeral-workdir\") pod \"b1f74192-c081-4846-b517-32d8d4c8245f\" (UID: \"b1f74192-c081-4846-b517-32d8d4c8245f\") " Jan 27 09:41:16 crc kubenswrapper[4985]: I0127 09:41:16.396735 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mk27r\" (UniqueName: \"kubernetes.io/projected/b1f74192-c081-4846-b517-32d8d4c8245f-kube-api-access-mk27r\") pod \"b1f74192-c081-4846-b517-32d8d4c8245f\" (UID: \"b1f74192-c081-4846-b517-32d8d4c8245f\") " Jan 27 09:41:16 crc kubenswrapper[4985]: I0127 09:41:16.396807 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"b1f74192-c081-4846-b517-32d8d4c8245f\" (UID: \"b1f74192-c081-4846-b517-32d8d4c8245f\") " Jan 27 09:41:16 crc kubenswrapper[4985]: I0127 09:41:16.397496 4985 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: 
\"kubernetes.io/empty-dir/b1f74192-c081-4846-b517-32d8d4c8245f-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Jan 27 09:41:16 crc kubenswrapper[4985]: I0127 09:41:16.398181 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1f74192-c081-4846-b517-32d8d4c8245f-config-data" (OuterVolumeSpecName: "config-data") pod "b1f74192-c081-4846-b517-32d8d4c8245f" (UID: "b1f74192-c081-4846-b517-32d8d4c8245f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:41:16 crc kubenswrapper[4985]: I0127 09:41:16.398429 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1f74192-c081-4846-b517-32d8d4c8245f-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "b1f74192-c081-4846-b517-32d8d4c8245f" (UID: "b1f74192-c081-4846-b517-32d8d4c8245f"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:41:16 crc kubenswrapper[4985]: I0127 09:41:16.401892 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "test-operator-logs") pod "b1f74192-c081-4846-b517-32d8d4c8245f" (UID: "b1f74192-c081-4846-b517-32d8d4c8245f"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 27 09:41:16 crc kubenswrapper[4985]: I0127 09:41:16.402035 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1f74192-c081-4846-b517-32d8d4c8245f-kube-api-access-mk27r" (OuterVolumeSpecName: "kube-api-access-mk27r") pod "b1f74192-c081-4846-b517-32d8d4c8245f" (UID: "b1f74192-c081-4846-b517-32d8d4c8245f"). InnerVolumeSpecName "kube-api-access-mk27r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:41:16 crc kubenswrapper[4985]: I0127 09:41:16.422445 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1f74192-c081-4846-b517-32d8d4c8245f-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "b1f74192-c081-4846-b517-32d8d4c8245f" (UID: "b1f74192-c081-4846-b517-32d8d4c8245f"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:41:16 crc kubenswrapper[4985]: I0127 09:41:16.429376 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1f74192-c081-4846-b517-32d8d4c8245f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b1f74192-c081-4846-b517-32d8d4c8245f" (UID: "b1f74192-c081-4846-b517-32d8d4c8245f"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:41:16 crc kubenswrapper[4985]: I0127 09:41:16.429743 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1f74192-c081-4846-b517-32d8d4c8245f-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "b1f74192-c081-4846-b517-32d8d4c8245f" (UID: "b1f74192-c081-4846-b517-32d8d4c8245f"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:41:16 crc kubenswrapper[4985]: I0127 09:41:16.457855 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1f74192-c081-4846-b517-32d8d4c8245f-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "b1f74192-c081-4846-b517-32d8d4c8245f" (UID: "b1f74192-c081-4846-b517-32d8d4c8245f"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:41:16 crc kubenswrapper[4985]: I0127 09:41:16.499548 4985 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b1f74192-c081-4846-b517-32d8d4c8245f-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 27 09:41:16 crc kubenswrapper[4985]: I0127 09:41:16.499585 4985 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b1f74192-c081-4846-b517-32d8d4c8245f-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 27 09:41:16 crc kubenswrapper[4985]: I0127 09:41:16.499598 4985 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b1f74192-c081-4846-b517-32d8d4c8245f-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 27 09:41:16 crc kubenswrapper[4985]: I0127 09:41:16.499611 4985 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/b1f74192-c081-4846-b517-32d8d4c8245f-ca-certs\") on node \"crc\" DevicePath \"\"" Jan 27 09:41:16 crc kubenswrapper[4985]: I0127 09:41:16.499623 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b1f74192-c081-4846-b517-32d8d4c8245f-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 09:41:16 crc kubenswrapper[4985]: I0127 09:41:16.499635 4985 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/b1f74192-c081-4846-b517-32d8d4c8245f-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Jan 27 09:41:16 crc kubenswrapper[4985]: I0127 09:41:16.499649 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mk27r\" (UniqueName: \"kubernetes.io/projected/b1f74192-c081-4846-b517-32d8d4c8245f-kube-api-access-mk27r\") on node \"crc\" DevicePath \"\"" Jan 27 09:41:16 crc 
kubenswrapper[4985]: I0127 09:41:16.499688 4985 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Jan 27 09:41:16 crc kubenswrapper[4985]: I0127 09:41:16.529688 4985 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Jan 27 09:41:16 crc kubenswrapper[4985]: I0127 09:41:16.600853 4985 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Jan 27 09:41:16 crc kubenswrapper[4985]: I0127 09:41:16.805359 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"b1f74192-c081-4846-b517-32d8d4c8245f","Type":"ContainerDied","Data":"da87333a28c3d5bc5f35a8eeb7467025b584690ea22f21dea6d576a5d6f85cfa"} Jan 27 09:41:16 crc kubenswrapper[4985]: I0127 09:41:16.805405 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da87333a28c3d5bc5f35a8eeb7467025b584690ea22f21dea6d576a5d6f85cfa" Jan 27 09:41:16 crc kubenswrapper[4985]: I0127 09:41:16.805414 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 27 09:41:19 crc kubenswrapper[4985]: I0127 09:41:19.004407 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-l9lp6" Jan 27 09:41:19 crc kubenswrapper[4985]: I0127 09:41:19.004921 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-l9lp6" Jan 27 09:41:19 crc kubenswrapper[4985]: I0127 09:41:19.083161 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-l9lp6" Jan 27 09:41:19 crc kubenswrapper[4985]: I0127 09:41:19.922785 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-l9lp6" Jan 27 09:41:21 crc kubenswrapper[4985]: I0127 09:41:21.005363 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-l9lp6"] Jan 27 09:41:21 crc kubenswrapper[4985]: I0127 09:41:21.243068 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 27 09:41:21 crc kubenswrapper[4985]: E0127 09:41:21.243661 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1f74192-c081-4846-b517-32d8d4c8245f" containerName="tempest-tests-tempest-tests-runner" Jan 27 09:41:21 crc kubenswrapper[4985]: I0127 09:41:21.243686 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1f74192-c081-4846-b517-32d8d4c8245f" containerName="tempest-tests-tempest-tests-runner" Jan 27 09:41:21 crc kubenswrapper[4985]: I0127 09:41:21.243945 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1f74192-c081-4846-b517-32d8d4c8245f" containerName="tempest-tests-tempest-tests-runner" Jan 27 09:41:21 crc kubenswrapper[4985]: I0127 09:41:21.244740 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 27 09:41:21 crc kubenswrapper[4985]: I0127 09:41:21.247403 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-68xp6" Jan 27 09:41:21 crc kubenswrapper[4985]: I0127 09:41:21.270214 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 27 09:41:21 crc kubenswrapper[4985]: I0127 09:41:21.407317 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"a91bdac9-513f-4cb9-b009-1b2bb6253902\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 27 09:41:21 crc kubenswrapper[4985]: I0127 09:41:21.407410 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khpjp\" (UniqueName: \"kubernetes.io/projected/a91bdac9-513f-4cb9-b009-1b2bb6253902-kube-api-access-khpjp\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"a91bdac9-513f-4cb9-b009-1b2bb6253902\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 27 09:41:21 crc kubenswrapper[4985]: I0127 09:41:21.510337 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"a91bdac9-513f-4cb9-b009-1b2bb6253902\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 27 09:41:21 crc kubenswrapper[4985]: I0127 09:41:21.510417 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khpjp\" (UniqueName: 
\"kubernetes.io/projected/a91bdac9-513f-4cb9-b009-1b2bb6253902-kube-api-access-khpjp\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"a91bdac9-513f-4cb9-b009-1b2bb6253902\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 27 09:41:21 crc kubenswrapper[4985]: I0127 09:41:21.510837 4985 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"a91bdac9-513f-4cb9-b009-1b2bb6253902\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 27 09:41:21 crc kubenswrapper[4985]: I0127 09:41:21.541477 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khpjp\" (UniqueName: \"kubernetes.io/projected/a91bdac9-513f-4cb9-b009-1b2bb6253902-kube-api-access-khpjp\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"a91bdac9-513f-4cb9-b009-1b2bb6253902\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 27 09:41:21 crc kubenswrapper[4985]: I0127 09:41:21.541673 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"a91bdac9-513f-4cb9-b009-1b2bb6253902\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 27 09:41:21 crc kubenswrapper[4985]: I0127 09:41:21.579010 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 27 09:41:21 crc kubenswrapper[4985]: I0127 09:41:21.858902 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-l9lp6" podUID="08971f8d-a467-4917-b6e6-835dec8eaac6" containerName="registry-server" containerID="cri-o://c200b41989a54d99a81b3a9b979c442dfc4d816be3e7a4062a15fb9ebf3f1981" gracePeriod=2 Jan 27 09:41:22 crc kubenswrapper[4985]: I0127 09:41:22.062747 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 27 09:41:22 crc kubenswrapper[4985]: I0127 09:41:22.272925 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l9lp6" Jan 27 09:41:22 crc kubenswrapper[4985]: I0127 09:41:22.430583 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08971f8d-a467-4917-b6e6-835dec8eaac6-catalog-content\") pod \"08971f8d-a467-4917-b6e6-835dec8eaac6\" (UID: \"08971f8d-a467-4917-b6e6-835dec8eaac6\") " Jan 27 09:41:22 crc kubenswrapper[4985]: I0127 09:41:22.430731 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vmsp\" (UniqueName: \"kubernetes.io/projected/08971f8d-a467-4917-b6e6-835dec8eaac6-kube-api-access-4vmsp\") pod \"08971f8d-a467-4917-b6e6-835dec8eaac6\" (UID: \"08971f8d-a467-4917-b6e6-835dec8eaac6\") " Jan 27 09:41:22 crc kubenswrapper[4985]: I0127 09:41:22.430780 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08971f8d-a467-4917-b6e6-835dec8eaac6-utilities\") pod \"08971f8d-a467-4917-b6e6-835dec8eaac6\" (UID: \"08971f8d-a467-4917-b6e6-835dec8eaac6\") " Jan 27 09:41:22 crc kubenswrapper[4985]: I0127 09:41:22.432346 4985 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08971f8d-a467-4917-b6e6-835dec8eaac6-utilities" (OuterVolumeSpecName: "utilities") pod "08971f8d-a467-4917-b6e6-835dec8eaac6" (UID: "08971f8d-a467-4917-b6e6-835dec8eaac6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:41:22 crc kubenswrapper[4985]: I0127 09:41:22.437822 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08971f8d-a467-4917-b6e6-835dec8eaac6-kube-api-access-4vmsp" (OuterVolumeSpecName: "kube-api-access-4vmsp") pod "08971f8d-a467-4917-b6e6-835dec8eaac6" (UID: "08971f8d-a467-4917-b6e6-835dec8eaac6"). InnerVolumeSpecName "kube-api-access-4vmsp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:41:22 crc kubenswrapper[4985]: I0127 09:41:22.479710 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08971f8d-a467-4917-b6e6-835dec8eaac6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "08971f8d-a467-4917-b6e6-835dec8eaac6" (UID: "08971f8d-a467-4917-b6e6-835dec8eaac6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:41:22 crc kubenswrapper[4985]: I0127 09:41:22.532821 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4vmsp\" (UniqueName: \"kubernetes.io/projected/08971f8d-a467-4917-b6e6-835dec8eaac6-kube-api-access-4vmsp\") on node \"crc\" DevicePath \"\"" Jan 27 09:41:22 crc kubenswrapper[4985]: I0127 09:41:22.532848 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08971f8d-a467-4917-b6e6-835dec8eaac6-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 09:41:22 crc kubenswrapper[4985]: I0127 09:41:22.532859 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08971f8d-a467-4917-b6e6-835dec8eaac6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 09:41:22 crc kubenswrapper[4985]: I0127 09:41:22.870578 4985 generic.go:334] "Generic (PLEG): container finished" podID="08971f8d-a467-4917-b6e6-835dec8eaac6" containerID="c200b41989a54d99a81b3a9b979c442dfc4d816be3e7a4062a15fb9ebf3f1981" exitCode=0 Jan 27 09:41:22 crc kubenswrapper[4985]: I0127 09:41:22.870644 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-l9lp6" Jan 27 09:41:22 crc kubenswrapper[4985]: I0127 09:41:22.870665 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l9lp6" event={"ID":"08971f8d-a467-4917-b6e6-835dec8eaac6","Type":"ContainerDied","Data":"c200b41989a54d99a81b3a9b979c442dfc4d816be3e7a4062a15fb9ebf3f1981"} Jan 27 09:41:22 crc kubenswrapper[4985]: I0127 09:41:22.871068 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l9lp6" event={"ID":"08971f8d-a467-4917-b6e6-835dec8eaac6","Type":"ContainerDied","Data":"d205505e7658ef33ba267903f741c4725e0295a00289430a79fdfc7f5134709a"} Jan 27 09:41:22 crc kubenswrapper[4985]: I0127 09:41:22.871072 4985 scope.go:117] "RemoveContainer" containerID="c200b41989a54d99a81b3a9b979c442dfc4d816be3e7a4062a15fb9ebf3f1981" Jan 27 09:41:22 crc kubenswrapper[4985]: I0127 09:41:22.872192 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"a91bdac9-513f-4cb9-b009-1b2bb6253902","Type":"ContainerStarted","Data":"49a9e4d8c1f5713dfcd0f8fee98b7029ed4b4cc36b3ea5772a88227959ba9835"} Jan 27 09:41:22 crc kubenswrapper[4985]: I0127 09:41:22.911936 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-l9lp6"] Jan 27 09:41:22 crc kubenswrapper[4985]: I0127 09:41:22.919054 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-l9lp6"] Jan 27 09:41:23 crc kubenswrapper[4985]: I0127 09:41:23.015261 4985 scope.go:117] "RemoveContainer" containerID="bf6bae375c6fc240ecb72a1ca574024d61789c9d1e7166c2ae6c56016a119190" Jan 27 09:41:23 crc kubenswrapper[4985]: I0127 09:41:23.052496 4985 scope.go:117] "RemoveContainer" containerID="72f275352b152387dac32999604b392da7b2b931d5f2db3753034af8025ff8f2" Jan 27 09:41:23 crc kubenswrapper[4985]: I0127 
09:41:23.087207 4985 scope.go:117] "RemoveContainer" containerID="c200b41989a54d99a81b3a9b979c442dfc4d816be3e7a4062a15fb9ebf3f1981" Jan 27 09:41:23 crc kubenswrapper[4985]: E0127 09:41:23.087684 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c200b41989a54d99a81b3a9b979c442dfc4d816be3e7a4062a15fb9ebf3f1981\": container with ID starting with c200b41989a54d99a81b3a9b979c442dfc4d816be3e7a4062a15fb9ebf3f1981 not found: ID does not exist" containerID="c200b41989a54d99a81b3a9b979c442dfc4d816be3e7a4062a15fb9ebf3f1981" Jan 27 09:41:23 crc kubenswrapper[4985]: I0127 09:41:23.087736 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c200b41989a54d99a81b3a9b979c442dfc4d816be3e7a4062a15fb9ebf3f1981"} err="failed to get container status \"c200b41989a54d99a81b3a9b979c442dfc4d816be3e7a4062a15fb9ebf3f1981\": rpc error: code = NotFound desc = could not find container \"c200b41989a54d99a81b3a9b979c442dfc4d816be3e7a4062a15fb9ebf3f1981\": container with ID starting with c200b41989a54d99a81b3a9b979c442dfc4d816be3e7a4062a15fb9ebf3f1981 not found: ID does not exist" Jan 27 09:41:23 crc kubenswrapper[4985]: I0127 09:41:23.087770 4985 scope.go:117] "RemoveContainer" containerID="bf6bae375c6fc240ecb72a1ca574024d61789c9d1e7166c2ae6c56016a119190" Jan 27 09:41:23 crc kubenswrapper[4985]: E0127 09:41:23.088109 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf6bae375c6fc240ecb72a1ca574024d61789c9d1e7166c2ae6c56016a119190\": container with ID starting with bf6bae375c6fc240ecb72a1ca574024d61789c9d1e7166c2ae6c56016a119190 not found: ID does not exist" containerID="bf6bae375c6fc240ecb72a1ca574024d61789c9d1e7166c2ae6c56016a119190" Jan 27 09:41:23 crc kubenswrapper[4985]: I0127 09:41:23.088140 4985 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"bf6bae375c6fc240ecb72a1ca574024d61789c9d1e7166c2ae6c56016a119190"} err="failed to get container status \"bf6bae375c6fc240ecb72a1ca574024d61789c9d1e7166c2ae6c56016a119190\": rpc error: code = NotFound desc = could not find container \"bf6bae375c6fc240ecb72a1ca574024d61789c9d1e7166c2ae6c56016a119190\": container with ID starting with bf6bae375c6fc240ecb72a1ca574024d61789c9d1e7166c2ae6c56016a119190 not found: ID does not exist" Jan 27 09:41:23 crc kubenswrapper[4985]: I0127 09:41:23.088162 4985 scope.go:117] "RemoveContainer" containerID="72f275352b152387dac32999604b392da7b2b931d5f2db3753034af8025ff8f2" Jan 27 09:41:23 crc kubenswrapper[4985]: E0127 09:41:23.088337 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72f275352b152387dac32999604b392da7b2b931d5f2db3753034af8025ff8f2\": container with ID starting with 72f275352b152387dac32999604b392da7b2b931d5f2db3753034af8025ff8f2 not found: ID does not exist" containerID="72f275352b152387dac32999604b392da7b2b931d5f2db3753034af8025ff8f2" Jan 27 09:41:23 crc kubenswrapper[4985]: I0127 09:41:23.088357 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72f275352b152387dac32999604b392da7b2b931d5f2db3753034af8025ff8f2"} err="failed to get container status \"72f275352b152387dac32999604b392da7b2b931d5f2db3753034af8025ff8f2\": rpc error: code = NotFound desc = could not find container \"72f275352b152387dac32999604b392da7b2b931d5f2db3753034af8025ff8f2\": container with ID starting with 72f275352b152387dac32999604b392da7b2b931d5f2db3753034af8025ff8f2 not found: ID does not exist" Jan 27 09:41:23 crc kubenswrapper[4985]: I0127 09:41:23.893094 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" 
event={"ID":"a91bdac9-513f-4cb9-b009-1b2bb6253902","Type":"ContainerStarted","Data":"473e3e0485bdb586214bcafaa22b0425b2ba0902eababd66038748f0abd2f45f"} Jan 27 09:41:23 crc kubenswrapper[4985]: I0127 09:41:23.926470 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=1.959588342 podStartE2EDuration="2.926439376s" podCreationTimestamp="2026-01-27 09:41:21 +0000 UTC" firstStartedPulling="2026-01-27 09:41:22.085956111 +0000 UTC m=+2866.377050962" lastFinishedPulling="2026-01-27 09:41:23.052807155 +0000 UTC m=+2867.343901996" observedRunningTime="2026-01-27 09:41:23.913660198 +0000 UTC m=+2868.204755079" watchObservedRunningTime="2026-01-27 09:41:23.926439376 +0000 UTC m=+2868.217534257" Jan 27 09:41:24 crc kubenswrapper[4985]: I0127 09:41:24.471832 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08971f8d-a467-4917-b6e6-835dec8eaac6" path="/var/lib/kubelet/pods/08971f8d-a467-4917-b6e6-835dec8eaac6/volumes" Jan 27 09:41:41 crc kubenswrapper[4985]: I0127 09:41:41.828486 4985 patch_prober.go:28] interesting pod/machine-config-daemon-lp9n5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 09:41:41 crc kubenswrapper[4985]: I0127 09:41:41.829032 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 09:41:51 crc kubenswrapper[4985]: I0127 09:41:51.925283 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-5fxzf/must-gather-krks8"] Jan 27 09:41:51 crc 
kubenswrapper[4985]: E0127 09:41:51.926601 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08971f8d-a467-4917-b6e6-835dec8eaac6" containerName="extract-utilities" Jan 27 09:41:51 crc kubenswrapper[4985]: I0127 09:41:51.926627 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="08971f8d-a467-4917-b6e6-835dec8eaac6" containerName="extract-utilities" Jan 27 09:41:51 crc kubenswrapper[4985]: E0127 09:41:51.926672 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08971f8d-a467-4917-b6e6-835dec8eaac6" containerName="extract-content" Jan 27 09:41:51 crc kubenswrapper[4985]: I0127 09:41:51.926686 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="08971f8d-a467-4917-b6e6-835dec8eaac6" containerName="extract-content" Jan 27 09:41:51 crc kubenswrapper[4985]: E0127 09:41:51.926712 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08971f8d-a467-4917-b6e6-835dec8eaac6" containerName="registry-server" Jan 27 09:41:51 crc kubenswrapper[4985]: I0127 09:41:51.926725 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="08971f8d-a467-4917-b6e6-835dec8eaac6" containerName="registry-server" Jan 27 09:41:51 crc kubenswrapper[4985]: I0127 09:41:51.927018 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="08971f8d-a467-4917-b6e6-835dec8eaac6" containerName="registry-server" Jan 27 09:41:51 crc kubenswrapper[4985]: I0127 09:41:51.930250 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5fxzf/must-gather-krks8" Jan 27 09:41:51 crc kubenswrapper[4985]: I0127 09:41:51.941838 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-5fxzf"/"openshift-service-ca.crt" Jan 27 09:41:51 crc kubenswrapper[4985]: I0127 09:41:51.951304 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-5fxzf"/"kube-root-ca.crt" Jan 27 09:41:51 crc kubenswrapper[4985]: I0127 09:41:51.953446 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-5fxzf/must-gather-krks8"] Jan 27 09:41:51 crc kubenswrapper[4985]: I0127 09:41:51.954483 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d344afb8-eaf7-4dad-aea2-894652e26583-must-gather-output\") pod \"must-gather-krks8\" (UID: \"d344afb8-eaf7-4dad-aea2-894652e26583\") " pod="openshift-must-gather-5fxzf/must-gather-krks8" Jan 27 09:41:51 crc kubenswrapper[4985]: I0127 09:41:51.954599 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vl44b\" (UniqueName: \"kubernetes.io/projected/d344afb8-eaf7-4dad-aea2-894652e26583-kube-api-access-vl44b\") pod \"must-gather-krks8\" (UID: \"d344afb8-eaf7-4dad-aea2-894652e26583\") " pod="openshift-must-gather-5fxzf/must-gather-krks8" Jan 27 09:41:52 crc kubenswrapper[4985]: I0127 09:41:52.056810 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vl44b\" (UniqueName: \"kubernetes.io/projected/d344afb8-eaf7-4dad-aea2-894652e26583-kube-api-access-vl44b\") pod \"must-gather-krks8\" (UID: \"d344afb8-eaf7-4dad-aea2-894652e26583\") " pod="openshift-must-gather-5fxzf/must-gather-krks8" Jan 27 09:41:52 crc kubenswrapper[4985]: I0127 09:41:52.056943 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d344afb8-eaf7-4dad-aea2-894652e26583-must-gather-output\") pod \"must-gather-krks8\" (UID: \"d344afb8-eaf7-4dad-aea2-894652e26583\") " pod="openshift-must-gather-5fxzf/must-gather-krks8" Jan 27 09:41:52 crc kubenswrapper[4985]: I0127 09:41:52.057371 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d344afb8-eaf7-4dad-aea2-894652e26583-must-gather-output\") pod \"must-gather-krks8\" (UID: \"d344afb8-eaf7-4dad-aea2-894652e26583\") " pod="openshift-must-gather-5fxzf/must-gather-krks8" Jan 27 09:41:52 crc kubenswrapper[4985]: I0127 09:41:52.079744 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vl44b\" (UniqueName: \"kubernetes.io/projected/d344afb8-eaf7-4dad-aea2-894652e26583-kube-api-access-vl44b\") pod \"must-gather-krks8\" (UID: \"d344afb8-eaf7-4dad-aea2-894652e26583\") " pod="openshift-must-gather-5fxzf/must-gather-krks8" Jan 27 09:41:52 crc kubenswrapper[4985]: I0127 09:41:52.256169 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5fxzf/must-gather-krks8" Jan 27 09:41:52 crc kubenswrapper[4985]: I0127 09:41:52.881911 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-5fxzf/must-gather-krks8"] Jan 27 09:41:53 crc kubenswrapper[4985]: I0127 09:41:53.216824 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5fxzf/must-gather-krks8" event={"ID":"d344afb8-eaf7-4dad-aea2-894652e26583","Type":"ContainerStarted","Data":"1104672a19f901076abe6fc89cbd231847f28ce7e47f121700e3b5c5061f409d"} Jan 27 09:42:01 crc kubenswrapper[4985]: I0127 09:42:01.281845 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5fxzf/must-gather-krks8" event={"ID":"d344afb8-eaf7-4dad-aea2-894652e26583","Type":"ContainerStarted","Data":"2a3ed8f57b4627af28dcdf4e9a839d28c20446d890e6b6f6443251aad84000fd"} Jan 27 09:42:01 crc kubenswrapper[4985]: I0127 09:42:01.282448 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5fxzf/must-gather-krks8" event={"ID":"d344afb8-eaf7-4dad-aea2-894652e26583","Type":"ContainerStarted","Data":"46beda096095583d4a84a16657ecade49ca1c29cf866b903cc5085c0812da1c5"} Jan 27 09:42:01 crc kubenswrapper[4985]: I0127 09:42:01.305236 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-5fxzf/must-gather-krks8" podStartSLOduration=2.491864894 podStartE2EDuration="10.305213483s" podCreationTimestamp="2026-01-27 09:41:51 +0000 UTC" firstStartedPulling="2026-01-27 09:41:52.887755642 +0000 UTC m=+2897.178850483" lastFinishedPulling="2026-01-27 09:42:00.701104231 +0000 UTC m=+2904.992199072" observedRunningTime="2026-01-27 09:42:01.295570811 +0000 UTC m=+2905.586665672" watchObservedRunningTime="2026-01-27 09:42:01.305213483 +0000 UTC m=+2905.596308324" Jan 27 09:42:04 crc kubenswrapper[4985]: I0127 09:42:04.671727 4985 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-must-gather-5fxzf/crc-debug-phvff"] Jan 27 09:42:04 crc kubenswrapper[4985]: I0127 09:42:04.675290 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5fxzf/crc-debug-phvff" Jan 27 09:42:04 crc kubenswrapper[4985]: I0127 09:42:04.677026 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-5fxzf"/"default-dockercfg-2xbc8" Jan 27 09:42:04 crc kubenswrapper[4985]: I0127 09:42:04.823769 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b0ba28cf-5e62-448e-b97b-88a6ac9638a2-host\") pod \"crc-debug-phvff\" (UID: \"b0ba28cf-5e62-448e-b97b-88a6ac9638a2\") " pod="openshift-must-gather-5fxzf/crc-debug-phvff" Jan 27 09:42:04 crc kubenswrapper[4985]: I0127 09:42:04.824439 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcvbc\" (UniqueName: \"kubernetes.io/projected/b0ba28cf-5e62-448e-b97b-88a6ac9638a2-kube-api-access-bcvbc\") pod \"crc-debug-phvff\" (UID: \"b0ba28cf-5e62-448e-b97b-88a6ac9638a2\") " pod="openshift-must-gather-5fxzf/crc-debug-phvff" Jan 27 09:42:04 crc kubenswrapper[4985]: I0127 09:42:04.925714 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b0ba28cf-5e62-448e-b97b-88a6ac9638a2-host\") pod \"crc-debug-phvff\" (UID: \"b0ba28cf-5e62-448e-b97b-88a6ac9638a2\") " pod="openshift-must-gather-5fxzf/crc-debug-phvff" Jan 27 09:42:04 crc kubenswrapper[4985]: I0127 09:42:04.925781 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcvbc\" (UniqueName: \"kubernetes.io/projected/b0ba28cf-5e62-448e-b97b-88a6ac9638a2-kube-api-access-bcvbc\") pod \"crc-debug-phvff\" (UID: \"b0ba28cf-5e62-448e-b97b-88a6ac9638a2\") " pod="openshift-must-gather-5fxzf/crc-debug-phvff" Jan 27 09:42:04 crc 
kubenswrapper[4985]: I0127 09:42:04.925903 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b0ba28cf-5e62-448e-b97b-88a6ac9638a2-host\") pod \"crc-debug-phvff\" (UID: \"b0ba28cf-5e62-448e-b97b-88a6ac9638a2\") " pod="openshift-must-gather-5fxzf/crc-debug-phvff" Jan 27 09:42:04 crc kubenswrapper[4985]: I0127 09:42:04.949791 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcvbc\" (UniqueName: \"kubernetes.io/projected/b0ba28cf-5e62-448e-b97b-88a6ac9638a2-kube-api-access-bcvbc\") pod \"crc-debug-phvff\" (UID: \"b0ba28cf-5e62-448e-b97b-88a6ac9638a2\") " pod="openshift-must-gather-5fxzf/crc-debug-phvff" Jan 27 09:42:04 crc kubenswrapper[4985]: I0127 09:42:04.999081 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5fxzf/crc-debug-phvff" Jan 27 09:42:05 crc kubenswrapper[4985]: I0127 09:42:05.318815 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5fxzf/crc-debug-phvff" event={"ID":"b0ba28cf-5e62-448e-b97b-88a6ac9638a2","Type":"ContainerStarted","Data":"1e543920d54a98fb88a62d554858b74cf0610ad368f9c147e1d887bc15ff6cbc"} Jan 27 09:42:11 crc kubenswrapper[4985]: I0127 09:42:11.828255 4985 patch_prober.go:28] interesting pod/machine-config-daemon-lp9n5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 09:42:11 crc kubenswrapper[4985]: I0127 09:42:11.828849 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 09:42:11 crc 
kubenswrapper[4985]: I0127 09:42:11.828895 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" Jan 27 09:42:11 crc kubenswrapper[4985]: I0127 09:42:11.829690 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5801812ba7b1ddbd191c16674ba87e9d6a4ebe89965a9b3900525a37925380ca"} pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 09:42:11 crc kubenswrapper[4985]: I0127 09:42:11.829743 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" containerName="machine-config-daemon" containerID="cri-o://5801812ba7b1ddbd191c16674ba87e9d6a4ebe89965a9b3900525a37925380ca" gracePeriod=600 Jan 27 09:42:12 crc kubenswrapper[4985]: I0127 09:42:12.394284 4985 generic.go:334] "Generic (PLEG): container finished" podID="c066dd2f-48d4-4f4f-935d-0e772678e610" containerID="5801812ba7b1ddbd191c16674ba87e9d6a4ebe89965a9b3900525a37925380ca" exitCode=0 Jan 27 09:42:12 crc kubenswrapper[4985]: I0127 09:42:12.394451 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" event={"ID":"c066dd2f-48d4-4f4f-935d-0e772678e610","Type":"ContainerDied","Data":"5801812ba7b1ddbd191c16674ba87e9d6a4ebe89965a9b3900525a37925380ca"} Jan 27 09:42:12 crc kubenswrapper[4985]: I0127 09:42:12.394736 4985 scope.go:117] "RemoveContainer" containerID="4c19ffbb95917b101eb869fb60d90d110bbc1e093534505aa6a36ea876935d33" Jan 27 09:42:17 crc kubenswrapper[4985]: I0127 09:42:17.449205 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5fxzf/crc-debug-phvff" 
event={"ID":"b0ba28cf-5e62-448e-b97b-88a6ac9638a2","Type":"ContainerStarted","Data":"e34d07e81bff961a5d3264c7bf0d22b778e106a1f5c4049070a5c522f60e8bef"} Jan 27 09:42:17 crc kubenswrapper[4985]: I0127 09:42:17.451567 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" event={"ID":"c066dd2f-48d4-4f4f-935d-0e772678e610","Type":"ContainerStarted","Data":"c1f1e255695e7c290fa12c0855087584f49fd2bbb4c37a4f7179d53498c81468"} Jan 27 09:42:17 crc kubenswrapper[4985]: I0127 09:42:17.476980 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-5fxzf/crc-debug-phvff" podStartSLOduration=1.512535587 podStartE2EDuration="13.476957345s" podCreationTimestamp="2026-01-27 09:42:04 +0000 UTC" firstStartedPulling="2026-01-27 09:42:05.040586758 +0000 UTC m=+2909.331681599" lastFinishedPulling="2026-01-27 09:42:17.005008516 +0000 UTC m=+2921.296103357" observedRunningTime="2026-01-27 09:42:17.467458816 +0000 UTC m=+2921.758553657" watchObservedRunningTime="2026-01-27 09:42:17.476957345 +0000 UTC m=+2921.768052186" Jan 27 09:42:36 crc kubenswrapper[4985]: I0127 09:42:36.607941 4985 generic.go:334] "Generic (PLEG): container finished" podID="b0ba28cf-5e62-448e-b97b-88a6ac9638a2" containerID="e34d07e81bff961a5d3264c7bf0d22b778e106a1f5c4049070a5c522f60e8bef" exitCode=0 Jan 27 09:42:36 crc kubenswrapper[4985]: I0127 09:42:36.608001 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5fxzf/crc-debug-phvff" event={"ID":"b0ba28cf-5e62-448e-b97b-88a6ac9638a2","Type":"ContainerDied","Data":"e34d07e81bff961a5d3264c7bf0d22b778e106a1f5c4049070a5c522f60e8bef"} Jan 27 09:42:37 crc kubenswrapper[4985]: I0127 09:42:37.735827 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5fxzf/crc-debug-phvff" Jan 27 09:42:37 crc kubenswrapper[4985]: I0127 09:42:37.779725 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-5fxzf/crc-debug-phvff"] Jan 27 09:42:37 crc kubenswrapper[4985]: I0127 09:42:37.790243 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-5fxzf/crc-debug-phvff"] Jan 27 09:42:37 crc kubenswrapper[4985]: I0127 09:42:37.830883 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b0ba28cf-5e62-448e-b97b-88a6ac9638a2-host\") pod \"b0ba28cf-5e62-448e-b97b-88a6ac9638a2\" (UID: \"b0ba28cf-5e62-448e-b97b-88a6ac9638a2\") " Jan 27 09:42:37 crc kubenswrapper[4985]: I0127 09:42:37.830987 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bcvbc\" (UniqueName: \"kubernetes.io/projected/b0ba28cf-5e62-448e-b97b-88a6ac9638a2-kube-api-access-bcvbc\") pod \"b0ba28cf-5e62-448e-b97b-88a6ac9638a2\" (UID: \"b0ba28cf-5e62-448e-b97b-88a6ac9638a2\") " Jan 27 09:42:37 crc kubenswrapper[4985]: I0127 09:42:37.831931 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0ba28cf-5e62-448e-b97b-88a6ac9638a2-host" (OuterVolumeSpecName: "host") pod "b0ba28cf-5e62-448e-b97b-88a6ac9638a2" (UID: "b0ba28cf-5e62-448e-b97b-88a6ac9638a2"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 09:42:37 crc kubenswrapper[4985]: I0127 09:42:37.837866 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0ba28cf-5e62-448e-b97b-88a6ac9638a2-kube-api-access-bcvbc" (OuterVolumeSpecName: "kube-api-access-bcvbc") pod "b0ba28cf-5e62-448e-b97b-88a6ac9638a2" (UID: "b0ba28cf-5e62-448e-b97b-88a6ac9638a2"). InnerVolumeSpecName "kube-api-access-bcvbc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:42:37 crc kubenswrapper[4985]: I0127 09:42:37.933636 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bcvbc\" (UniqueName: \"kubernetes.io/projected/b0ba28cf-5e62-448e-b97b-88a6ac9638a2-kube-api-access-bcvbc\") on node \"crc\" DevicePath \"\"" Jan 27 09:42:37 crc kubenswrapper[4985]: I0127 09:42:37.933665 4985 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b0ba28cf-5e62-448e-b97b-88a6ac9638a2-host\") on node \"crc\" DevicePath \"\"" Jan 27 09:42:38 crc kubenswrapper[4985]: I0127 09:42:38.474990 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0ba28cf-5e62-448e-b97b-88a6ac9638a2" path="/var/lib/kubelet/pods/b0ba28cf-5e62-448e-b97b-88a6ac9638a2/volumes" Jan 27 09:42:38 crc kubenswrapper[4985]: I0127 09:42:38.631657 4985 scope.go:117] "RemoveContainer" containerID="e34d07e81bff961a5d3264c7bf0d22b778e106a1f5c4049070a5c522f60e8bef" Jan 27 09:42:38 crc kubenswrapper[4985]: I0127 09:42:38.632205 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5fxzf/crc-debug-phvff" Jan 27 09:42:39 crc kubenswrapper[4985]: I0127 09:42:39.037795 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-5fxzf/crc-debug-xm9px"] Jan 27 09:42:39 crc kubenswrapper[4985]: E0127 09:42:39.038215 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0ba28cf-5e62-448e-b97b-88a6ac9638a2" containerName="container-00" Jan 27 09:42:39 crc kubenswrapper[4985]: I0127 09:42:39.038229 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0ba28cf-5e62-448e-b97b-88a6ac9638a2" containerName="container-00" Jan 27 09:42:39 crc kubenswrapper[4985]: I0127 09:42:39.038411 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0ba28cf-5e62-448e-b97b-88a6ac9638a2" containerName="container-00" Jan 27 09:42:39 crc kubenswrapper[4985]: I0127 09:42:39.038963 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5fxzf/crc-debug-xm9px" Jan 27 09:42:39 crc kubenswrapper[4985]: I0127 09:42:39.041495 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-5fxzf"/"default-dockercfg-2xbc8" Jan 27 09:42:39 crc kubenswrapper[4985]: I0127 09:42:39.157259 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0e409031-f539-4822-89f5-9f4246671be9-host\") pod \"crc-debug-xm9px\" (UID: \"0e409031-f539-4822-89f5-9f4246671be9\") " pod="openshift-must-gather-5fxzf/crc-debug-xm9px" Jan 27 09:42:39 crc kubenswrapper[4985]: I0127 09:42:39.157737 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6g6b\" (UniqueName: \"kubernetes.io/projected/0e409031-f539-4822-89f5-9f4246671be9-kube-api-access-b6g6b\") pod \"crc-debug-xm9px\" (UID: \"0e409031-f539-4822-89f5-9f4246671be9\") " 
pod="openshift-must-gather-5fxzf/crc-debug-xm9px" Jan 27 09:42:39 crc kubenswrapper[4985]: I0127 09:42:39.259280 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0e409031-f539-4822-89f5-9f4246671be9-host\") pod \"crc-debug-xm9px\" (UID: \"0e409031-f539-4822-89f5-9f4246671be9\") " pod="openshift-must-gather-5fxzf/crc-debug-xm9px" Jan 27 09:42:39 crc kubenswrapper[4985]: I0127 09:42:39.259407 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6g6b\" (UniqueName: \"kubernetes.io/projected/0e409031-f539-4822-89f5-9f4246671be9-kube-api-access-b6g6b\") pod \"crc-debug-xm9px\" (UID: \"0e409031-f539-4822-89f5-9f4246671be9\") " pod="openshift-must-gather-5fxzf/crc-debug-xm9px" Jan 27 09:42:39 crc kubenswrapper[4985]: I0127 09:42:39.259445 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0e409031-f539-4822-89f5-9f4246671be9-host\") pod \"crc-debug-xm9px\" (UID: \"0e409031-f539-4822-89f5-9f4246671be9\") " pod="openshift-must-gather-5fxzf/crc-debug-xm9px" Jan 27 09:42:39 crc kubenswrapper[4985]: I0127 09:42:39.292787 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6g6b\" (UniqueName: \"kubernetes.io/projected/0e409031-f539-4822-89f5-9f4246671be9-kube-api-access-b6g6b\") pod \"crc-debug-xm9px\" (UID: \"0e409031-f539-4822-89f5-9f4246671be9\") " pod="openshift-must-gather-5fxzf/crc-debug-xm9px" Jan 27 09:42:39 crc kubenswrapper[4985]: I0127 09:42:39.369581 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5fxzf/crc-debug-xm9px" Jan 27 09:42:39 crc kubenswrapper[4985]: I0127 09:42:39.640788 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5fxzf/crc-debug-xm9px" event={"ID":"0e409031-f539-4822-89f5-9f4246671be9","Type":"ContainerStarted","Data":"dd74b693195bf69d3d15398c6044f92612e6383aa5a970bc7aa1b75c72482554"} Jan 27 09:42:40 crc kubenswrapper[4985]: I0127 09:42:40.650637 4985 generic.go:334] "Generic (PLEG): container finished" podID="0e409031-f539-4822-89f5-9f4246671be9" containerID="ed419d69678c972a3c9db0f83b1688a2dcd0e543a1ceb1c904728b31a9bc6019" exitCode=1 Jan 27 09:42:40 crc kubenswrapper[4985]: I0127 09:42:40.650680 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5fxzf/crc-debug-xm9px" event={"ID":"0e409031-f539-4822-89f5-9f4246671be9","Type":"ContainerDied","Data":"ed419d69678c972a3c9db0f83b1688a2dcd0e543a1ceb1c904728b31a9bc6019"} Jan 27 09:42:40 crc kubenswrapper[4985]: I0127 09:42:40.684500 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-5fxzf/crc-debug-xm9px"] Jan 27 09:42:40 crc kubenswrapper[4985]: I0127 09:42:40.696216 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-5fxzf/crc-debug-xm9px"] Jan 27 09:42:41 crc kubenswrapper[4985]: I0127 09:42:41.755990 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5fxzf/crc-debug-xm9px" Jan 27 09:42:41 crc kubenswrapper[4985]: I0127 09:42:41.915824 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6g6b\" (UniqueName: \"kubernetes.io/projected/0e409031-f539-4822-89f5-9f4246671be9-kube-api-access-b6g6b\") pod \"0e409031-f539-4822-89f5-9f4246671be9\" (UID: \"0e409031-f539-4822-89f5-9f4246671be9\") " Jan 27 09:42:41 crc kubenswrapper[4985]: I0127 09:42:41.915897 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0e409031-f539-4822-89f5-9f4246671be9-host\") pod \"0e409031-f539-4822-89f5-9f4246671be9\" (UID: \"0e409031-f539-4822-89f5-9f4246671be9\") " Jan 27 09:42:41 crc kubenswrapper[4985]: I0127 09:42:41.916056 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e409031-f539-4822-89f5-9f4246671be9-host" (OuterVolumeSpecName: "host") pod "0e409031-f539-4822-89f5-9f4246671be9" (UID: "0e409031-f539-4822-89f5-9f4246671be9"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 09:42:41 crc kubenswrapper[4985]: I0127 09:42:41.916838 4985 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0e409031-f539-4822-89f5-9f4246671be9-host\") on node \"crc\" DevicePath \"\"" Jan 27 09:42:41 crc kubenswrapper[4985]: I0127 09:42:41.922102 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e409031-f539-4822-89f5-9f4246671be9-kube-api-access-b6g6b" (OuterVolumeSpecName: "kube-api-access-b6g6b") pod "0e409031-f539-4822-89f5-9f4246671be9" (UID: "0e409031-f539-4822-89f5-9f4246671be9"). InnerVolumeSpecName "kube-api-access-b6g6b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:42:42 crc kubenswrapper[4985]: I0127 09:42:42.019089 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6g6b\" (UniqueName: \"kubernetes.io/projected/0e409031-f539-4822-89f5-9f4246671be9-kube-api-access-b6g6b\") on node \"crc\" DevicePath \"\"" Jan 27 09:42:42 crc kubenswrapper[4985]: I0127 09:42:42.466282 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e409031-f539-4822-89f5-9f4246671be9" path="/var/lib/kubelet/pods/0e409031-f539-4822-89f5-9f4246671be9/volumes" Jan 27 09:42:42 crc kubenswrapper[4985]: I0127 09:42:42.668429 4985 scope.go:117] "RemoveContainer" containerID="ed419d69678c972a3c9db0f83b1688a2dcd0e543a1ceb1c904728b31a9bc6019" Jan 27 09:42:42 crc kubenswrapper[4985]: I0127 09:42:42.668448 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5fxzf/crc-debug-xm9px" Jan 27 09:43:21 crc kubenswrapper[4985]: I0127 09:43:21.498600 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-f4789ffd8-wmhds_d5892977-6d3e-49f7-91fe-fcdff05221b0/barbican-api/0.log" Jan 27 09:43:21 crc kubenswrapper[4985]: I0127 09:43:21.680173 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-f4789ffd8-wmhds_d5892977-6d3e-49f7-91fe-fcdff05221b0/barbican-api-log/0.log" Jan 27 09:43:21 crc kubenswrapper[4985]: I0127 09:43:21.771159 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7bbcb69c84-cb6bz_75790805-0e26-4dc9-9970-dd8b6a332ce7/barbican-keystone-listener/0.log" Jan 27 09:43:21 crc kubenswrapper[4985]: I0127 09:43:21.845270 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7bbcb69c84-cb6bz_75790805-0e26-4dc9-9970-dd8b6a332ce7/barbican-keystone-listener-log/0.log" Jan 27 09:43:21 crc kubenswrapper[4985]: I0127 09:43:21.973125 4985 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7957b79d47-2xvr4_73388136-0e40-4439-95bb-52ef16391821/barbican-worker/0.log" Jan 27 09:43:21 crc kubenswrapper[4985]: I0127 09:43:21.991253 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7957b79d47-2xvr4_73388136-0e40-4439-95bb-52ef16391821/barbican-worker-log/0.log" Jan 27 09:43:22 crc kubenswrapper[4985]: I0127 09:43:22.330080 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_ebebddf7-8341-4e17-a156-e251351db2fa/ceilometer-central-agent/0.log" Jan 27 09:43:22 crc kubenswrapper[4985]: I0127 09:43:22.359823 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-xswt5_ca890687-26ad-46f0-9ca5-8c245c6f4b22/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 09:43:22 crc kubenswrapper[4985]: I0127 09:43:22.489367 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_ebebddf7-8341-4e17-a156-e251351db2fa/ceilometer-notification-agent/0.log" Jan 27 09:43:22 crc kubenswrapper[4985]: I0127 09:43:22.499487 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_ebebddf7-8341-4e17-a156-e251351db2fa/proxy-httpd/0.log" Jan 27 09:43:22 crc kubenswrapper[4985]: I0127 09:43:22.535551 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_ebebddf7-8341-4e17-a156-e251351db2fa/sg-core/0.log" Jan 27 09:43:22 crc kubenswrapper[4985]: I0127 09:43:22.676587 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_b8bda802-a296-4c69-b1ee-07a238912c81/cinder-api/0.log" Jan 27 09:43:22 crc kubenswrapper[4985]: I0127 09:43:22.776009 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_b8bda802-a296-4c69-b1ee-07a238912c81/cinder-api-log/0.log" Jan 27 09:43:22 crc kubenswrapper[4985]: I0127 09:43:22.889432 4985 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_1c1ff98e-211c-421d-9fcc-3357afdf8639/cinder-scheduler/0.log" Jan 27 09:43:22 crc kubenswrapper[4985]: I0127 09:43:22.949388 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_1c1ff98e-211c-421d-9fcc-3357afdf8639/probe/0.log" Jan 27 09:43:23 crc kubenswrapper[4985]: I0127 09:43:23.013365 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-pcjj9_47b0f7e9-81ac-4e4d-bfe5-d0b663cbf763/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 09:43:23 crc kubenswrapper[4985]: I0127 09:43:23.208287 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-j5xq8_d810fad1-a264-46e4-9094-a86b77cec3c3/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 09:43:23 crc kubenswrapper[4985]: I0127 09:43:23.316628 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-d7b79b84c-wkvsf_643859d4-9ed0-4fc7-9a10-01d0b0e19410/init/0.log" Jan 27 09:43:23 crc kubenswrapper[4985]: I0127 09:43:23.484295 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-d7b79b84c-wkvsf_643859d4-9ed0-4fc7-9a10-01d0b0e19410/init/0.log" Jan 27 09:43:23 crc kubenswrapper[4985]: I0127 09:43:23.532952 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-d7b79b84c-wkvsf_643859d4-9ed0-4fc7-9a10-01d0b0e19410/dnsmasq-dns/0.log" Jan 27 09:43:23 crc kubenswrapper[4985]: I0127 09:43:23.575832 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-w8bd6_a4781598-eb66-49c1-80c2-cf509881f0dd/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 09:43:23 crc kubenswrapper[4985]: I0127 09:43:23.735097 4985 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-external-api-0_8a0474b6-bb48-4c95-8735-07917545a256/glance-httpd/0.log" Jan 27 09:43:23 crc kubenswrapper[4985]: I0127 09:43:23.782699 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_8a0474b6-bb48-4c95-8735-07917545a256/glance-log/0.log" Jan 27 09:43:23 crc kubenswrapper[4985]: I0127 09:43:23.938642 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_6682197b-86c6-4c58-9fa3-4b08340e9464/glance-httpd/0.log" Jan 27 09:43:23 crc kubenswrapper[4985]: I0127 09:43:23.987450 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_6682197b-86c6-4c58-9fa3-4b08340e9464/glance-log/0.log" Jan 27 09:43:24 crc kubenswrapper[4985]: I0127 09:43:24.171344 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-69b99cb974-fzls4_24f5c0ab-206b-4a03-9e4b-c94feff53f9e/horizon/0.log" Jan 27 09:43:24 crc kubenswrapper[4985]: I0127 09:43:24.358154 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-69b99cb974-fzls4_24f5c0ab-206b-4a03-9e4b-c94feff53f9e/horizon-log/0.log" Jan 27 09:43:24 crc kubenswrapper[4985]: I0127 09:43:24.366227 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-ct7ds_bc2fc556-0d48-4993-a66c-a48eac7a023c/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 09:43:24 crc kubenswrapper[4985]: I0127 09:43:24.539657 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-s824q_184363e4-db1e-463c-bb4e-aea7cd0c849d/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 09:43:24 crc kubenswrapper[4985]: I0127 09:43:24.722432 4985 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_keystone-786bc44b8-jnlsn_230e3cc0-e86e-4443-bdd7-04b53908937e/keystone-api/0.log" Jan 27 09:43:24 crc kubenswrapper[4985]: I0127 09:43:24.776108 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_14d915c5-e7d5-4925-9f52-faf1b7f03716/kube-state-metrics/0.log" Jan 27 09:43:25 crc kubenswrapper[4985]: I0127 09:43:25.217343 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-lx8b7_b45b73b0-334f-456f-9a7e-be4337f5a0d1/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 09:43:25 crc kubenswrapper[4985]: I0127 09:43:25.483494 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-8569774db7-5qrp6_60dc03ee-3efa-410a-8b38-f8b2eab0807a/neutron-api/0.log" Jan 27 09:43:25 crc kubenswrapper[4985]: I0127 09:43:25.580779 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-8569774db7-5qrp6_60dc03ee-3efa-410a-8b38-f8b2eab0807a/neutron-httpd/0.log" Jan 27 09:43:25 crc kubenswrapper[4985]: I0127 09:43:25.698233 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-nbxdt_0cb09792-906c-423f-8567-cc13f8f3d403/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 09:43:26 crc kubenswrapper[4985]: I0127 09:43:26.066052 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_8d66059e-8e8d-44aa-bda5-abf143f9416d/nova-api-log/0.log" Jan 27 09:43:26 crc kubenswrapper[4985]: I0127 09:43:26.124013 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_8d66059e-8e8d-44aa-bda5-abf143f9416d/nova-api-api/0.log" Jan 27 09:43:26 crc kubenswrapper[4985]: I0127 09:43:26.139355 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_9b7800ec-036a-4b19-98de-4404f3c7fbcc/nova-cell0-conductor-conductor/0.log" Jan 27 09:43:26 
crc kubenswrapper[4985]: I0127 09:43:26.489230 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_d7109e8e-aed9-4dbf-9746-46e772bb7979/nova-cell1-novncproxy-novncproxy/0.log" Jan 27 09:43:26 crc kubenswrapper[4985]: I0127 09:43:26.492912 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_e09de4ac-f34f-45d7-93cf-be4958284be0/nova-cell1-conductor-conductor/0.log" Jan 27 09:43:26 crc kubenswrapper[4985]: I0127 09:43:26.807014 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-svz7g_882132ec-1950-4476-bbad-f8f2acf0e117/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 09:43:26 crc kubenswrapper[4985]: I0127 09:43:26.867832 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_fdfbd855-3465-4821-91d3-b49545447e36/nova-metadata-log/0.log" Jan 27 09:43:27 crc kubenswrapper[4985]: I0127 09:43:27.265302 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_04de3704-b3c6-4693-baf3-c8e68335e2ed/nova-scheduler-scheduler/0.log" Jan 27 09:43:27 crc kubenswrapper[4985]: I0127 09:43:27.355901 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_a0bcbfae-acfe-4ef3-8b04-18f21c728fd6/mysql-bootstrap/0.log" Jan 27 09:43:27 crc kubenswrapper[4985]: I0127 09:43:27.592339 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_a0bcbfae-acfe-4ef3-8b04-18f21c728fd6/mysql-bootstrap/0.log" Jan 27 09:43:27 crc kubenswrapper[4985]: I0127 09:43:27.656894 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_a0bcbfae-acfe-4ef3-8b04-18f21c728fd6/galera/0.log" Jan 27 09:43:27 crc kubenswrapper[4985]: I0127 09:43:27.757837 4985 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-metadata-0_fdfbd855-3465-4821-91d3-b49545447e36/nova-metadata-metadata/0.log" Jan 27 09:43:27 crc kubenswrapper[4985]: I0127 09:43:27.853058 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_ba28b990-460d-4a2c-b9b5-73f24d9b3f9e/mysql-bootstrap/0.log" Jan 27 09:43:28 crc kubenswrapper[4985]: I0127 09:43:28.349103 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_1a110a4f-4669-42cb-9a7a-acb80ad9c3e2/openstackclient/0.log" Jan 27 09:43:28 crc kubenswrapper[4985]: I0127 09:43:28.397929 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_ba28b990-460d-4a2c-b9b5-73f24d9b3f9e/galera/0.log" Jan 27 09:43:28 crc kubenswrapper[4985]: I0127 09:43:28.400203 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_ba28b990-460d-4a2c-b9b5-73f24d9b3f9e/mysql-bootstrap/0.log" Jan 27 09:43:28 crc kubenswrapper[4985]: I0127 09:43:28.569780 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-2zjxh_d2f52eee-3926-4ed2-9058-4e159f11a6cf/ovn-controller/0.log" Jan 27 09:43:28 crc kubenswrapper[4985]: I0127 09:43:28.651607 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-68j75_6e9e3bc1-c1a4-417b-bec2-d4c8fc9c09fd/openstack-network-exporter/0.log" Jan 27 09:43:28 crc kubenswrapper[4985]: I0127 09:43:28.828621 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-m82tc_48fb2403-f98b-4166-9470-f95e5002e52d/ovsdb-server-init/0.log" Jan 27 09:43:29 crc kubenswrapper[4985]: I0127 09:43:29.196271 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-m82tc_48fb2403-f98b-4166-9470-f95e5002e52d/ovsdb-server/0.log" Jan 27 09:43:29 crc kubenswrapper[4985]: I0127 09:43:29.209795 4985 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-ovs-m82tc_48fb2403-f98b-4166-9470-f95e5002e52d/ovsdb-server-init/0.log" Jan 27 09:43:29 crc kubenswrapper[4985]: I0127 09:43:29.221260 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-m82tc_48fb2403-f98b-4166-9470-f95e5002e52d/ovs-vswitchd/0.log" Jan 27 09:43:29 crc kubenswrapper[4985]: I0127 09:43:29.480338 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-rmw8d_db95adac-f4d3-476c-9273-82e0991a7dd2/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 09:43:29 crc kubenswrapper[4985]: I0127 09:43:29.527003 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_49843c3e-2aeb-43e9-8041-6f212a7fcc7c/openstack-network-exporter/0.log" Jan 27 09:43:29 crc kubenswrapper[4985]: I0127 09:43:29.527749 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_49843c3e-2aeb-43e9-8041-6f212a7fcc7c/ovn-northd/0.log" Jan 27 09:43:29 crc kubenswrapper[4985]: I0127 09:43:29.713207 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_17e67cba-f1d0-4144-ace7-49373081babb/openstack-network-exporter/0.log" Jan 27 09:43:29 crc kubenswrapper[4985]: I0127 09:43:29.763169 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_17e67cba-f1d0-4144-ace7-49373081babb/ovsdbserver-nb/0.log" Jan 27 09:43:30 crc kubenswrapper[4985]: I0127 09:43:30.006539 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_42f570db-f357-4a94-8895-25d887fc8d3c/openstack-network-exporter/0.log" Jan 27 09:43:30 crc kubenswrapper[4985]: I0127 09:43:30.038492 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_42f570db-f357-4a94-8895-25d887fc8d3c/ovsdbserver-sb/0.log" Jan 27 09:43:30 crc kubenswrapper[4985]: I0127 09:43:30.279467 4985 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_placement-84f67698b-shkcs_09da2957-d13e-44db-b153-3fcbbbfeaad8/placement-api/0.log" Jan 27 09:43:30 crc kubenswrapper[4985]: I0127 09:43:30.316864 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-84f67698b-shkcs_09da2957-d13e-44db-b153-3fcbbbfeaad8/placement-log/0.log" Jan 27 09:43:30 crc kubenswrapper[4985]: I0127 09:43:30.415559 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_f08b3701-2ee6-4de9-8d6b-8191a8ff95d3/setup-container/0.log" Jan 27 09:43:30 crc kubenswrapper[4985]: I0127 09:43:30.568687 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_f08b3701-2ee6-4de9-8d6b-8191a8ff95d3/setup-container/0.log" Jan 27 09:43:30 crc kubenswrapper[4985]: I0127 09:43:30.612180 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_f08b3701-2ee6-4de9-8d6b-8191a8ff95d3/rabbitmq/0.log" Jan 27 09:43:30 crc kubenswrapper[4985]: I0127 09:43:30.672368 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_6bf353db-18e8-4814-835d-228e9d0aaec6/setup-container/0.log" Jan 27 09:43:30 crc kubenswrapper[4985]: I0127 09:43:30.945081 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_6bf353db-18e8-4814-835d-228e9d0aaec6/rabbitmq/0.log" Jan 27 09:43:30 crc kubenswrapper[4985]: I0127 09:43:30.978483 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_6bf353db-18e8-4814-835d-228e9d0aaec6/setup-container/0.log" Jan 27 09:43:31 crc kubenswrapper[4985]: I0127 09:43:31.006054 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-lpm2n_a4e0044e-7ce9-4e14-ad79-112dba0165a5/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 09:43:31 crc kubenswrapper[4985]: I0127 09:43:31.452582 4985 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-zpmfd_d3c88468-ade0-4b44-8824-f0564b217b93/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 09:43:31 crc kubenswrapper[4985]: I0127 09:43:31.503025 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-wxbg5_44e0b0a4-8eba-49d5-9408-2a6400e0cedf/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 09:43:31 crc kubenswrapper[4985]: I0127 09:43:31.682979 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-9dqdw_09b18f4c-b94f-4dff-b191-639e7734adb4/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 09:43:31 crc kubenswrapper[4985]: I0127 09:43:31.843488 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-hp4rl_c987829b-664b-4613-a0d2-04bc23b2c0bb/ssh-known-hosts-edpm-deployment/0.log" Jan 27 09:43:32 crc kubenswrapper[4985]: I0127 09:43:32.048303 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-68b69f7bc7-n2dvz_5d8c24c8-a66b-4fbd-a9fa-96863c29880e/proxy-server/0.log" Jan 27 09:43:32 crc kubenswrapper[4985]: I0127 09:43:32.098341 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-68b69f7bc7-n2dvz_5d8c24c8-a66b-4fbd-a9fa-96863c29880e/proxy-httpd/0.log" Jan 27 09:43:32 crc kubenswrapper[4985]: I0127 09:43:32.164951 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-nzqqd_0c0c0d06-870e-469e-bacf-dc5aa8af9d3b/swift-ring-rebalance/0.log" Jan 27 09:43:32 crc kubenswrapper[4985]: I0127 09:43:32.358935 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_50364737-e2dc-4bd7-ba5a-97f39e232236/account-reaper/0.log" Jan 27 09:43:32 crc kubenswrapper[4985]: I0127 09:43:32.374782 4985 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_swift-storage-0_50364737-e2dc-4bd7-ba5a-97f39e232236/account-auditor/0.log" Jan 27 09:43:32 crc kubenswrapper[4985]: I0127 09:43:32.419491 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_50364737-e2dc-4bd7-ba5a-97f39e232236/account-replicator/0.log" Jan 27 09:43:32 crc kubenswrapper[4985]: I0127 09:43:32.550798 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_50364737-e2dc-4bd7-ba5a-97f39e232236/container-auditor/0.log" Jan 27 09:43:32 crc kubenswrapper[4985]: I0127 09:43:32.565022 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_50364737-e2dc-4bd7-ba5a-97f39e232236/account-server/0.log" Jan 27 09:43:32 crc kubenswrapper[4985]: I0127 09:43:32.658027 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_50364737-e2dc-4bd7-ba5a-97f39e232236/container-replicator/0.log" Jan 27 09:43:32 crc kubenswrapper[4985]: I0127 09:43:32.766396 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_50364737-e2dc-4bd7-ba5a-97f39e232236/container-updater/0.log" Jan 27 09:43:32 crc kubenswrapper[4985]: I0127 09:43:32.782504 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_50364737-e2dc-4bd7-ba5a-97f39e232236/container-server/0.log" Jan 27 09:43:32 crc kubenswrapper[4985]: I0127 09:43:32.811359 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_50364737-e2dc-4bd7-ba5a-97f39e232236/object-auditor/0.log" Jan 27 09:43:32 crc kubenswrapper[4985]: I0127 09:43:32.992330 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_50364737-e2dc-4bd7-ba5a-97f39e232236/object-expirer/0.log" Jan 27 09:43:32 crc kubenswrapper[4985]: I0127 09:43:32.998711 4985 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_50364737-e2dc-4bd7-ba5a-97f39e232236/object-replicator/0.log" Jan 27 09:43:33 crc kubenswrapper[4985]: I0127 09:43:33.029287 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_50364737-e2dc-4bd7-ba5a-97f39e232236/object-server/0.log" Jan 27 09:43:33 crc kubenswrapper[4985]: I0127 09:43:33.033946 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_50364737-e2dc-4bd7-ba5a-97f39e232236/object-updater/0.log" Jan 27 09:43:33 crc kubenswrapper[4985]: I0127 09:43:33.224299 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_50364737-e2dc-4bd7-ba5a-97f39e232236/rsync/0.log" Jan 27 09:43:33 crc kubenswrapper[4985]: I0127 09:43:33.238600 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_50364737-e2dc-4bd7-ba5a-97f39e232236/swift-recon-cron/0.log" Jan 27 09:43:33 crc kubenswrapper[4985]: I0127 09:43:33.396792 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-cncpv_e1648956-4ef8-425b-afd7-573f09da0342/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 09:43:33 crc kubenswrapper[4985]: I0127 09:43:33.516196 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_b1f74192-c081-4846-b517-32d8d4c8245f/tempest-tests-tempest-tests-runner/0.log" Jan 27 09:43:33 crc kubenswrapper[4985]: I0127 09:43:33.657110 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_a91bdac9-513f-4cb9-b009-1b2bb6253902/test-operator-logs-container/0.log" Jan 27 09:43:33 crc kubenswrapper[4985]: I0127 09:43:33.759458 4985 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-dc4cm_aec3b5ab-cd2a-4a78-a971-3b7624c42450/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 09:43:36 crc kubenswrapper[4985]: I0127 09:43:36.782138 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_eb24812f-9480-484a-9f96-22d35c1a63d2/memcached/0.log" Jan 27 09:44:02 crc kubenswrapper[4985]: I0127 09:44:02.166496 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cfg66r6_125d6a2f-ae94-4748-8c5d-b3788983b9c7/util/0.log" Jan 27 09:44:02 crc kubenswrapper[4985]: I0127 09:44:02.355656 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cfg66r6_125d6a2f-ae94-4748-8c5d-b3788983b9c7/util/0.log" Jan 27 09:44:02 crc kubenswrapper[4985]: I0127 09:44:02.364173 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cfg66r6_125d6a2f-ae94-4748-8c5d-b3788983b9c7/pull/0.log" Jan 27 09:44:02 crc kubenswrapper[4985]: I0127 09:44:02.404505 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cfg66r6_125d6a2f-ae94-4748-8c5d-b3788983b9c7/pull/0.log" Jan 27 09:44:02 crc kubenswrapper[4985]: I0127 09:44:02.568554 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cfg66r6_125d6a2f-ae94-4748-8c5d-b3788983b9c7/util/0.log" Jan 27 09:44:02 crc kubenswrapper[4985]: I0127 09:44:02.577912 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cfg66r6_125d6a2f-ae94-4748-8c5d-b3788983b9c7/extract/0.log" Jan 27 09:44:02 crc kubenswrapper[4985]: I0127 
09:44:02.598568 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cfg66r6_125d6a2f-ae94-4748-8c5d-b3788983b9c7/pull/0.log" Jan 27 09:44:02 crc kubenswrapper[4985]: I0127 09:44:02.923724 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-5fdc687f5-gbl76_62b33436-aec3-4e07-a880-cefb55ec47be/manager/0.log" Jan 27 09:44:03 crc kubenswrapper[4985]: I0127 09:44:03.071716 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-76d4d5b8f9-7r9cn_9aed123f-6fb0-4c65-ac80-e926677d5ecc/manager/0.log" Jan 27 09:44:03 crc kubenswrapper[4985]: I0127 09:44:03.384229 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-84d5bb46b-tgqr5_f3a7eea8-cdc7-40d1-a558-2ba1606c646a/manager/0.log" Jan 27 09:44:03 crc kubenswrapper[4985]: I0127 09:44:03.451411 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-658dd65b86-lgpgn_c9b11814-36a8-4736-b144-358d8f2c7268/manager/0.log" Jan 27 09:44:03 crc kubenswrapper[4985]: I0127 09:44:03.637553 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-7f5ddd8d7b-v7nj2_d51fc084-83b4-4f09-baa5-59842d67853e/manager/0.log" Jan 27 09:44:04 crc kubenswrapper[4985]: I0127 09:44:04.016546 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-58865f87b4-v6n6c_150467a4-4f48-49a7-9356-05b11babc187/manager/0.log" Jan 27 09:44:04 crc kubenswrapper[4985]: I0127 09:44:04.371577 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-78f8b7b89c-g9wnq_d0f751b3-5f7d-4756-b959-960ebca3eeaf/manager/0.log" Jan 27 09:44:04 crc 
kubenswrapper[4985]: I0127 09:44:04.448712 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-54ccf4f85d-54sgw_457e511d-a1e8-453d-adfb-68177508f318/manager/0.log" Jan 27 09:44:04 crc kubenswrapper[4985]: I0127 09:44:04.571505 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78b8f8fd84-jjq84_d79a41eb-b6b8-47c6-a14c-e2de4a932377/manager/0.log" Jan 27 09:44:04 crc kubenswrapper[4985]: I0127 09:44:04.727318 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-7b88bfc995-khxgm_71fcba69-40b1-4d11-912d-4c52b1a044fe/manager/0.log" Jan 27 09:44:04 crc kubenswrapper[4985]: I0127 09:44:04.957958 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-569695f6c5-4bggc_5ecbd421-8017-44e8-bcf4-4416cb6cd7ad/manager/0.log" Jan 27 09:44:05 crc kubenswrapper[4985]: I0127 09:44:05.166681 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-74ffd97575-wvnwv_989c908a-4026-4ebd-9b57-0f9e2701b91a/manager/0.log" Jan 27 09:44:05 crc kubenswrapper[4985]: I0127 09:44:05.475437 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7bd95ffd6dj2sgf_c1774a8d-aed6-4be4-80c3-1182fb0456d3/manager/0.log" Jan 27 09:44:05 crc kubenswrapper[4985]: I0127 09:44:05.995155 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-6bfcf7b875-b87hw_8a307c4b-92d8-478d-8376-6db40e90a2ae/operator/0.log" Jan 27 09:44:06 crc kubenswrapper[4985]: I0127 09:44:06.279064 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-6jwj6_d457eb3e-f84d-4308-b57f-82ac43a05335/registry-server/0.log" Jan 27 
09:44:06 crc kubenswrapper[4985]: I0127 09:44:06.580636 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-bf6d4f946-6vbp6_86412e8a-4c97-42a8-a3f8-dca6204f426a/manager/0.log" Jan 27 09:44:06 crc kubenswrapper[4985]: I0127 09:44:06.898094 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-7748d79f84-fc5hb_c8880a39-7486-454b-aa9f-0cd2b1148d60/manager/0.log" Jan 27 09:44:07 crc kubenswrapper[4985]: I0127 09:44:07.135742 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-f8d82_ddc2f9b8-c7e5-4836-805f-e3cb7ef1ca2a/operator/0.log" Jan 27 09:44:07 crc kubenswrapper[4985]: I0127 09:44:07.194451 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7bf4858b78-wcmtg_52d05040-2965-4f90-abc9-558c17a0e37d/manager/0.log" Jan 27 09:44:07 crc kubenswrapper[4985]: I0127 09:44:07.351995 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-65596dbf77-hwbtq_e47ed97b-bb77-4d2a-899e-87c657f316d7/manager/0.log" Jan 27 09:44:07 crc kubenswrapper[4985]: I0127 09:44:07.521069 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-7db57dc8bf-bv8sl_12271ff4-e21a-43f4-8995-9e9257e11067/manager/0.log" Jan 27 09:44:07 crc kubenswrapper[4985]: I0127 09:44:07.627241 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-6c866cfdcb-phnxc_e5c46366-d781-46fe-a7c1-d43fd82c4259/manager/0.log" Jan 27 09:44:07 crc kubenswrapper[4985]: I0127 09:44:07.882312 4985 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-6476466c7c-tc8kq_c59bd9e5-d547-4c13-ad44-36984b2c7b7e/manager/0.log" Jan 27 09:44:08 crc kubenswrapper[4985]: I0127 09:44:08.327954 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-76958f4d87-lxlgc_73122c6c-2af8-4661-b823-4525cb1e675e/manager/0.log" Jan 27 09:44:10 crc kubenswrapper[4985]: I0127 09:44:10.865427 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-75b8f798ff-xhnxv_451307f1-5d15-45d9-86c3-d45dc628d159/manager/0.log" Jan 27 09:44:30 crc kubenswrapper[4985]: I0127 09:44:30.899823 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-lcx4s_d6088e48-728e-4a96-b305-c7f86d9fe9f4/control-plane-machine-set-operator/0.log" Jan 27 09:44:31 crc kubenswrapper[4985]: I0127 09:44:31.165147 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-bskcz_a9f39981-0c5b-4358-a7f7-41165d56405b/kube-rbac-proxy/0.log" Jan 27 09:44:31 crc kubenswrapper[4985]: I0127 09:44:31.220673 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-bskcz_a9f39981-0c5b-4358-a7f7-41165d56405b/machine-api-operator/0.log" Jan 27 09:44:41 crc kubenswrapper[4985]: I0127 09:44:41.828338 4985 patch_prober.go:28] interesting pod/machine-config-daemon-lp9n5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 09:44:41 crc kubenswrapper[4985]: I0127 09:44:41.829320 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" 
podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 09:44:45 crc kubenswrapper[4985]: I0127 09:44:45.865137 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-gzn5m_1b8dc334-24d3-4a99-8130-e07eb6d70ea5/cert-manager-controller/0.log" Jan 27 09:44:46 crc kubenswrapper[4985]: I0127 09:44:46.141869 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-h9fn8_382caf51-f3a3-47a2-adf8-e8d2387d245a/cert-manager-cainjector/0.log" Jan 27 09:44:46 crc kubenswrapper[4985]: I0127 09:44:46.305266 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-t2dhh_481458b3-b470-4d73-b0bf-9053f8605b8a/cert-manager-webhook/0.log" Jan 27 09:45:00 crc kubenswrapper[4985]: I0127 09:45:00.172690 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491785-lhj2t"] Jan 27 09:45:00 crc kubenswrapper[4985]: E0127 09:45:00.173786 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e409031-f539-4822-89f5-9f4246671be9" containerName="container-00" Jan 27 09:45:00 crc kubenswrapper[4985]: I0127 09:45:00.173802 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e409031-f539-4822-89f5-9f4246671be9" containerName="container-00" Jan 27 09:45:00 crc kubenswrapper[4985]: I0127 09:45:00.174080 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e409031-f539-4822-89f5-9f4246671be9" containerName="container-00" Jan 27 09:45:00 crc kubenswrapper[4985]: I0127 09:45:00.177860 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491785-lhj2t" Jan 27 09:45:00 crc kubenswrapper[4985]: I0127 09:45:00.188874 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 09:45:00 crc kubenswrapper[4985]: I0127 09:45:00.189430 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 09:45:00 crc kubenswrapper[4985]: I0127 09:45:00.193210 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491785-lhj2t"] Jan 27 09:45:00 crc kubenswrapper[4985]: I0127 09:45:00.295486 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e450f7e6-a48d-41cb-9a0f-6e037f9815e3-secret-volume\") pod \"collect-profiles-29491785-lhj2t\" (UID: \"e450f7e6-a48d-41cb-9a0f-6e037f9815e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491785-lhj2t" Jan 27 09:45:00 crc kubenswrapper[4985]: I0127 09:45:00.295779 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q99tj\" (UniqueName: \"kubernetes.io/projected/e450f7e6-a48d-41cb-9a0f-6e037f9815e3-kube-api-access-q99tj\") pod \"collect-profiles-29491785-lhj2t\" (UID: \"e450f7e6-a48d-41cb-9a0f-6e037f9815e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491785-lhj2t" Jan 27 09:45:00 crc kubenswrapper[4985]: I0127 09:45:00.296490 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e450f7e6-a48d-41cb-9a0f-6e037f9815e3-config-volume\") pod \"collect-profiles-29491785-lhj2t\" (UID: \"e450f7e6-a48d-41cb-9a0f-6e037f9815e3\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29491785-lhj2t" Jan 27 09:45:00 crc kubenswrapper[4985]: I0127 09:45:00.399258 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q99tj\" (UniqueName: \"kubernetes.io/projected/e450f7e6-a48d-41cb-9a0f-6e037f9815e3-kube-api-access-q99tj\") pod \"collect-profiles-29491785-lhj2t\" (UID: \"e450f7e6-a48d-41cb-9a0f-6e037f9815e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491785-lhj2t" Jan 27 09:45:00 crc kubenswrapper[4985]: I0127 09:45:00.399432 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e450f7e6-a48d-41cb-9a0f-6e037f9815e3-config-volume\") pod \"collect-profiles-29491785-lhj2t\" (UID: \"e450f7e6-a48d-41cb-9a0f-6e037f9815e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491785-lhj2t" Jan 27 09:45:00 crc kubenswrapper[4985]: I0127 09:45:00.399461 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e450f7e6-a48d-41cb-9a0f-6e037f9815e3-secret-volume\") pod \"collect-profiles-29491785-lhj2t\" (UID: \"e450f7e6-a48d-41cb-9a0f-6e037f9815e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491785-lhj2t" Jan 27 09:45:00 crc kubenswrapper[4985]: I0127 09:45:00.401944 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e450f7e6-a48d-41cb-9a0f-6e037f9815e3-config-volume\") pod \"collect-profiles-29491785-lhj2t\" (UID: \"e450f7e6-a48d-41cb-9a0f-6e037f9815e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491785-lhj2t" Jan 27 09:45:00 crc kubenswrapper[4985]: I0127 09:45:00.411700 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/e450f7e6-a48d-41cb-9a0f-6e037f9815e3-secret-volume\") pod \"collect-profiles-29491785-lhj2t\" (UID: \"e450f7e6-a48d-41cb-9a0f-6e037f9815e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491785-lhj2t" Jan 27 09:45:00 crc kubenswrapper[4985]: I0127 09:45:00.431302 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q99tj\" (UniqueName: \"kubernetes.io/projected/e450f7e6-a48d-41cb-9a0f-6e037f9815e3-kube-api-access-q99tj\") pod \"collect-profiles-29491785-lhj2t\" (UID: \"e450f7e6-a48d-41cb-9a0f-6e037f9815e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491785-lhj2t" Jan 27 09:45:00 crc kubenswrapper[4985]: I0127 09:45:00.503273 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491785-lhj2t" Jan 27 09:45:01 crc kubenswrapper[4985]: I0127 09:45:01.013412 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491785-lhj2t"] Jan 27 09:45:01 crc kubenswrapper[4985]: I0127 09:45:01.109255 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491785-lhj2t" event={"ID":"e450f7e6-a48d-41cb-9a0f-6e037f9815e3","Type":"ContainerStarted","Data":"1c4298f0da0fcf1f4e284837147b5b26ec2058416046766a7430bb8ecd99100b"} Jan 27 09:45:02 crc kubenswrapper[4985]: I0127 09:45:02.128279 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491785-lhj2t" event={"ID":"e450f7e6-a48d-41cb-9a0f-6e037f9815e3","Type":"ContainerDied","Data":"67140ff417e2aee48c23457566072b4a1dc4a170c69896d39526af6eab2b9560"} Jan 27 09:45:02 crc kubenswrapper[4985]: I0127 09:45:02.128698 4985 generic.go:334] "Generic (PLEG): container finished" podID="e450f7e6-a48d-41cb-9a0f-6e037f9815e3" 
containerID="67140ff417e2aee48c23457566072b4a1dc4a170c69896d39526af6eab2b9560" exitCode=0 Jan 27 09:45:02 crc kubenswrapper[4985]: I0127 09:45:02.925672 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-fgkxr_effff8aa-ef99-4976-b4fc-da8a1d1ab03f/nmstate-console-plugin/0.log" Jan 27 09:45:03 crc kubenswrapper[4985]: I0127 09:45:03.209795 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-8thh5_c154b891-72be-4911-93e9-3abe7346c05a/nmstate-handler/0.log" Jan 27 09:45:03 crc kubenswrapper[4985]: I0127 09:45:03.268164 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-kzgx2_b65e2660-2543-4137-b394-e0a2b19a17c8/kube-rbac-proxy/0.log" Jan 27 09:45:03 crc kubenswrapper[4985]: I0127 09:45:03.331107 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-kzgx2_b65e2660-2543-4137-b394-e0a2b19a17c8/nmstate-metrics/0.log" Jan 27 09:45:03 crc kubenswrapper[4985]: I0127 09:45:03.495157 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-cxv4p_d85df060-af54-4914-9ae7-dd1d6e0a66f1/nmstate-operator/0.log" Jan 27 09:45:03 crc kubenswrapper[4985]: I0127 09:45:03.572382 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491785-lhj2t" Jan 27 09:45:03 crc kubenswrapper[4985]: I0127 09:45:03.586744 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-rs7zj_19081401-9a2f-458b-a902-d15f4f915e1c/nmstate-webhook/0.log" Jan 27 09:45:03 crc kubenswrapper[4985]: I0127 09:45:03.680666 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q99tj\" (UniqueName: \"kubernetes.io/projected/e450f7e6-a48d-41cb-9a0f-6e037f9815e3-kube-api-access-q99tj\") pod \"e450f7e6-a48d-41cb-9a0f-6e037f9815e3\" (UID: \"e450f7e6-a48d-41cb-9a0f-6e037f9815e3\") " Jan 27 09:45:03 crc kubenswrapper[4985]: I0127 09:45:03.680821 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e450f7e6-a48d-41cb-9a0f-6e037f9815e3-config-volume\") pod \"e450f7e6-a48d-41cb-9a0f-6e037f9815e3\" (UID: \"e450f7e6-a48d-41cb-9a0f-6e037f9815e3\") " Jan 27 09:45:03 crc kubenswrapper[4985]: I0127 09:45:03.681004 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e450f7e6-a48d-41cb-9a0f-6e037f9815e3-secret-volume\") pod \"e450f7e6-a48d-41cb-9a0f-6e037f9815e3\" (UID: \"e450f7e6-a48d-41cb-9a0f-6e037f9815e3\") " Jan 27 09:45:03 crc kubenswrapper[4985]: I0127 09:45:03.681637 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e450f7e6-a48d-41cb-9a0f-6e037f9815e3-config-volume" (OuterVolumeSpecName: "config-volume") pod "e450f7e6-a48d-41cb-9a0f-6e037f9815e3" (UID: "e450f7e6-a48d-41cb-9a0f-6e037f9815e3"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:45:03 crc kubenswrapper[4985]: I0127 09:45:03.688547 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e450f7e6-a48d-41cb-9a0f-6e037f9815e3-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e450f7e6-a48d-41cb-9a0f-6e037f9815e3" (UID: "e450f7e6-a48d-41cb-9a0f-6e037f9815e3"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:45:03 crc kubenswrapper[4985]: I0127 09:45:03.689770 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e450f7e6-a48d-41cb-9a0f-6e037f9815e3-kube-api-access-q99tj" (OuterVolumeSpecName: "kube-api-access-q99tj") pod "e450f7e6-a48d-41cb-9a0f-6e037f9815e3" (UID: "e450f7e6-a48d-41cb-9a0f-6e037f9815e3"). InnerVolumeSpecName "kube-api-access-q99tj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:45:03 crc kubenswrapper[4985]: I0127 09:45:03.783716 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q99tj\" (UniqueName: \"kubernetes.io/projected/e450f7e6-a48d-41cb-9a0f-6e037f9815e3-kube-api-access-q99tj\") on node \"crc\" DevicePath \"\"" Jan 27 09:45:03 crc kubenswrapper[4985]: I0127 09:45:03.783913 4985 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e450f7e6-a48d-41cb-9a0f-6e037f9815e3-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 09:45:03 crc kubenswrapper[4985]: I0127 09:45:03.783971 4985 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e450f7e6-a48d-41cb-9a0f-6e037f9815e3-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 09:45:04 crc kubenswrapper[4985]: I0127 09:45:04.146143 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491785-lhj2t" 
event={"ID":"e450f7e6-a48d-41cb-9a0f-6e037f9815e3","Type":"ContainerDied","Data":"1c4298f0da0fcf1f4e284837147b5b26ec2058416046766a7430bb8ecd99100b"} Jan 27 09:45:04 crc kubenswrapper[4985]: I0127 09:45:04.146447 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c4298f0da0fcf1f4e284837147b5b26ec2058416046766a7430bb8ecd99100b" Jan 27 09:45:04 crc kubenswrapper[4985]: I0127 09:45:04.146280 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491785-lhj2t" Jan 27 09:45:04 crc kubenswrapper[4985]: I0127 09:45:04.661534 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491740-bmqp9"] Jan 27 09:45:04 crc kubenswrapper[4985]: I0127 09:45:04.679578 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491740-bmqp9"] Jan 27 09:45:06 crc kubenswrapper[4985]: I0127 09:45:06.494285 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="658aa687-e743-496d-8f4e-ea241c303e72" path="/var/lib/kubelet/pods/658aa687-e743-496d-8f4e-ea241c303e72/volumes" Jan 27 09:45:11 crc kubenswrapper[4985]: I0127 09:45:11.828030 4985 patch_prober.go:28] interesting pod/machine-config-daemon-lp9n5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 09:45:11 crc kubenswrapper[4985]: I0127 09:45:11.828919 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 09:45:37 crc 
kubenswrapper[4985]: I0127 09:45:37.250667 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-k8hcc_2cf49ce6-0799-4fb6-bf66-ff00cdf92c44/kube-rbac-proxy/0.log" Jan 27 09:45:37 crc kubenswrapper[4985]: I0127 09:45:37.300781 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-k8hcc_2cf49ce6-0799-4fb6-bf66-ff00cdf92c44/controller/0.log" Jan 27 09:45:37 crc kubenswrapper[4985]: I0127 09:45:37.463147 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qhg7r_d546a725-d293-47b7-a9c6-92988ba0060d/cp-frr-files/0.log" Jan 27 09:45:37 crc kubenswrapper[4985]: I0127 09:45:37.964765 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qhg7r_d546a725-d293-47b7-a9c6-92988ba0060d/cp-metrics/0.log" Jan 27 09:45:38 crc kubenswrapper[4985]: I0127 09:45:38.014095 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qhg7r_d546a725-d293-47b7-a9c6-92988ba0060d/cp-reloader/0.log" Jan 27 09:45:38 crc kubenswrapper[4985]: I0127 09:45:38.041323 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qhg7r_d546a725-d293-47b7-a9c6-92988ba0060d/cp-reloader/0.log" Jan 27 09:45:38 crc kubenswrapper[4985]: I0127 09:45:38.056489 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qhg7r_d546a725-d293-47b7-a9c6-92988ba0060d/cp-frr-files/0.log" Jan 27 09:45:38 crc kubenswrapper[4985]: I0127 09:45:38.167463 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qhg7r_d546a725-d293-47b7-a9c6-92988ba0060d/cp-frr-files/0.log" Jan 27 09:45:38 crc kubenswrapper[4985]: I0127 09:45:38.226622 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qhg7r_d546a725-d293-47b7-a9c6-92988ba0060d/cp-reloader/0.log" Jan 27 09:45:38 crc kubenswrapper[4985]: I0127 09:45:38.303595 4985 log.go:25] 
"Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qhg7r_d546a725-d293-47b7-a9c6-92988ba0060d/cp-metrics/0.log" Jan 27 09:45:38 crc kubenswrapper[4985]: I0127 09:45:38.304567 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qhg7r_d546a725-d293-47b7-a9c6-92988ba0060d/cp-metrics/0.log" Jan 27 09:45:38 crc kubenswrapper[4985]: I0127 09:45:38.494499 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qhg7r_d546a725-d293-47b7-a9c6-92988ba0060d/cp-frr-files/0.log" Jan 27 09:45:38 crc kubenswrapper[4985]: I0127 09:45:38.529795 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qhg7r_d546a725-d293-47b7-a9c6-92988ba0060d/cp-reloader/0.log" Jan 27 09:45:38 crc kubenswrapper[4985]: I0127 09:45:38.572683 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qhg7r_d546a725-d293-47b7-a9c6-92988ba0060d/controller/0.log" Jan 27 09:45:38 crc kubenswrapper[4985]: I0127 09:45:38.580182 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qhg7r_d546a725-d293-47b7-a9c6-92988ba0060d/cp-metrics/0.log" Jan 27 09:45:38 crc kubenswrapper[4985]: I0127 09:45:38.721329 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qhg7r_d546a725-d293-47b7-a9c6-92988ba0060d/frr-metrics/0.log" Jan 27 09:45:38 crc kubenswrapper[4985]: I0127 09:45:38.792200 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qhg7r_d546a725-d293-47b7-a9c6-92988ba0060d/kube-rbac-proxy-frr/0.log" Jan 27 09:45:39 crc kubenswrapper[4985]: I0127 09:45:39.359812 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qhg7r_d546a725-d293-47b7-a9c6-92988ba0060d/kube-rbac-proxy/0.log" Jan 27 09:45:39 crc kubenswrapper[4985]: I0127 09:45:39.372748 4985 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-qhg7r_d546a725-d293-47b7-a9c6-92988ba0060d/reloader/0.log" Jan 27 09:45:39 crc kubenswrapper[4985]: I0127 09:45:39.657335 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-qnpz5_f9a30d10-2c17-412f-bb84-dd9bc6bf4487/frr-k8s-webhook-server/0.log" Jan 27 09:45:39 crc kubenswrapper[4985]: I0127 09:45:39.831702 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-85f9fcd7-r846b_d031825c-2ec3-42ab-825a-25a071b0c80b/manager/0.log" Jan 27 09:45:39 crc kubenswrapper[4985]: I0127 09:45:39.980441 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-867db48886-7gj8n_c45cc1f7-8ab2-44c6-82de-bb59d24163b7/webhook-server/0.log" Jan 27 09:45:40 crc kubenswrapper[4985]: I0127 09:45:40.192244 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-nh6c4_4bc4a510-3768-4111-8646-d3d4a0d6a70e/kube-rbac-proxy/0.log" Jan 27 09:45:40 crc kubenswrapper[4985]: I0127 09:45:40.779427 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-nh6c4_4bc4a510-3768-4111-8646-d3d4a0d6a70e/speaker/0.log" Jan 27 09:45:40 crc kubenswrapper[4985]: I0127 09:45:40.948488 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qhg7r_d546a725-d293-47b7-a9c6-92988ba0060d/frr/0.log" Jan 27 09:45:41 crc kubenswrapper[4985]: I0127 09:45:41.828391 4985 patch_prober.go:28] interesting pod/machine-config-daemon-lp9n5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 09:45:41 crc kubenswrapper[4985]: I0127 09:45:41.828468 4985 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 09:45:41 crc kubenswrapper[4985]: I0127 09:45:41.828550 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" Jan 27 09:45:41 crc kubenswrapper[4985]: I0127 09:45:41.829491 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c1f1e255695e7c290fa12c0855087584f49fd2bbb4c37a4f7179d53498c81468"} pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 09:45:41 crc kubenswrapper[4985]: I0127 09:45:41.829592 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" containerName="machine-config-daemon" containerID="cri-o://c1f1e255695e7c290fa12c0855087584f49fd2bbb4c37a4f7179d53498c81468" gracePeriod=600 Jan 27 09:45:41 crc kubenswrapper[4985]: E0127 09:45:41.965383 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp9n5_openshift-machine-config-operator(c066dd2f-48d4-4f4f-935d-0e772678e610)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" Jan 27 09:45:42 crc kubenswrapper[4985]: I0127 09:45:42.541390 4985 generic.go:334] "Generic (PLEG): container finished" podID="c066dd2f-48d4-4f4f-935d-0e772678e610" 
containerID="c1f1e255695e7c290fa12c0855087584f49fd2bbb4c37a4f7179d53498c81468" exitCode=0 Jan 27 09:45:42 crc kubenswrapper[4985]: I0127 09:45:42.541465 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" event={"ID":"c066dd2f-48d4-4f4f-935d-0e772678e610","Type":"ContainerDied","Data":"c1f1e255695e7c290fa12c0855087584f49fd2bbb4c37a4f7179d53498c81468"} Jan 27 09:45:42 crc kubenswrapper[4985]: I0127 09:45:42.541564 4985 scope.go:117] "RemoveContainer" containerID="5801812ba7b1ddbd191c16674ba87e9d6a4ebe89965a9b3900525a37925380ca" Jan 27 09:45:42 crc kubenswrapper[4985]: I0127 09:45:42.542532 4985 scope.go:117] "RemoveContainer" containerID="c1f1e255695e7c290fa12c0855087584f49fd2bbb4c37a4f7179d53498c81468" Jan 27 09:45:42 crc kubenswrapper[4985]: E0127 09:45:42.542883 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp9n5_openshift-machine-config-operator(c066dd2f-48d4-4f4f-935d-0e772678e610)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" Jan 27 09:45:45 crc kubenswrapper[4985]: I0127 09:45:45.495665 4985 scope.go:117] "RemoveContainer" containerID="afb3aadd57d9e1949ca538848d23977b32a10fc783dcc295f34050f038ff7b2e" Jan 27 09:45:56 crc kubenswrapper[4985]: I0127 09:45:56.655818 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnng2r_81071259-96da-4e05-a63b-b0e5544489ec/util/0.log" Jan 27 09:45:56 crc kubenswrapper[4985]: I0127 09:45:56.893597 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnng2r_81071259-96da-4e05-a63b-b0e5544489ec/util/0.log" Jan 27 09:45:56 crc 
kubenswrapper[4985]: I0127 09:45:56.897491 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnng2r_81071259-96da-4e05-a63b-b0e5544489ec/pull/0.log" Jan 27 09:45:56 crc kubenswrapper[4985]: I0127 09:45:56.922439 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnng2r_81071259-96da-4e05-a63b-b0e5544489ec/pull/0.log" Jan 27 09:45:57 crc kubenswrapper[4985]: I0127 09:45:57.149620 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnng2r_81071259-96da-4e05-a63b-b0e5544489ec/pull/0.log" Jan 27 09:45:57 crc kubenswrapper[4985]: I0127 09:45:57.156092 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnng2r_81071259-96da-4e05-a63b-b0e5544489ec/util/0.log" Jan 27 09:45:57 crc kubenswrapper[4985]: I0127 09:45:57.209969 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnng2r_81071259-96da-4e05-a63b-b0e5544489ec/extract/0.log" Jan 27 09:45:57 crc kubenswrapper[4985]: I0127 09:45:57.343691 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pzqfw_efbd0472-2548-4ceb-8f40-a8586eb223e2/util/0.log" Jan 27 09:45:57 crc kubenswrapper[4985]: I0127 09:45:57.453115 4985 scope.go:117] "RemoveContainer" containerID="c1f1e255695e7c290fa12c0855087584f49fd2bbb4c37a4f7179d53498c81468" Jan 27 09:45:57 crc kubenswrapper[4985]: E0127 09:45:57.453550 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-lp9n5_openshift-machine-config-operator(c066dd2f-48d4-4f4f-935d-0e772678e610)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" Jan 27 09:45:57 crc kubenswrapper[4985]: I0127 09:45:57.524184 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pzqfw_efbd0472-2548-4ceb-8f40-a8586eb223e2/util/0.log" Jan 27 09:45:57 crc kubenswrapper[4985]: I0127 09:45:57.540083 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pzqfw_efbd0472-2548-4ceb-8f40-a8586eb223e2/pull/0.log" Jan 27 09:45:57 crc kubenswrapper[4985]: I0127 09:45:57.558967 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pzqfw_efbd0472-2548-4ceb-8f40-a8586eb223e2/pull/0.log" Jan 27 09:45:57 crc kubenswrapper[4985]: I0127 09:45:57.811791 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pzqfw_efbd0472-2548-4ceb-8f40-a8586eb223e2/pull/0.log" Jan 27 09:45:57 crc kubenswrapper[4985]: I0127 09:45:57.812093 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pzqfw_efbd0472-2548-4ceb-8f40-a8586eb223e2/util/0.log" Jan 27 09:45:57 crc kubenswrapper[4985]: I0127 09:45:57.822553 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pzqfw_efbd0472-2548-4ceb-8f40-a8586eb223e2/extract/0.log" Jan 27 09:45:58 crc kubenswrapper[4985]: I0127 09:45:58.032914 4985 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-8tbrf_f87933b8-24d4-4124-9902-29626502bb84/extract-utilities/0.log" Jan 27 09:45:58 crc kubenswrapper[4985]: I0127 09:45:58.232452 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8tbrf_f87933b8-24d4-4124-9902-29626502bb84/extract-content/0.log" Jan 27 09:45:58 crc kubenswrapper[4985]: I0127 09:45:58.255967 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8tbrf_f87933b8-24d4-4124-9902-29626502bb84/extract-utilities/0.log" Jan 27 09:45:58 crc kubenswrapper[4985]: I0127 09:45:58.258678 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8tbrf_f87933b8-24d4-4124-9902-29626502bb84/extract-content/0.log" Jan 27 09:45:58 crc kubenswrapper[4985]: I0127 09:45:58.428664 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8tbrf_f87933b8-24d4-4124-9902-29626502bb84/extract-utilities/0.log" Jan 27 09:45:58 crc kubenswrapper[4985]: I0127 09:45:58.435406 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8tbrf_f87933b8-24d4-4124-9902-29626502bb84/extract-content/0.log" Jan 27 09:45:58 crc kubenswrapper[4985]: I0127 09:45:58.681152 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-sqmzg_3d5830bf-84c1-46df-88b1-72400d395500/extract-utilities/0.log" Jan 27 09:45:58 crc kubenswrapper[4985]: I0127 09:45:58.930570 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-sqmzg_3d5830bf-84c1-46df-88b1-72400d395500/extract-utilities/0.log" Jan 27 09:45:58 crc kubenswrapper[4985]: I0127 09:45:58.943593 4985 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-sqmzg_3d5830bf-84c1-46df-88b1-72400d395500/extract-content/0.log" Jan 27 09:45:58 crc kubenswrapper[4985]: I0127 09:45:58.962424 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8tbrf_f87933b8-24d4-4124-9902-29626502bb84/registry-server/0.log" Jan 27 09:45:59 crc kubenswrapper[4985]: I0127 09:45:59.020844 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-sqmzg_3d5830bf-84c1-46df-88b1-72400d395500/extract-content/0.log" Jan 27 09:45:59 crc kubenswrapper[4985]: I0127 09:45:59.151068 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-sqmzg_3d5830bf-84c1-46df-88b1-72400d395500/extract-utilities/0.log" Jan 27 09:45:59 crc kubenswrapper[4985]: I0127 09:45:59.163602 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-sqmzg_3d5830bf-84c1-46df-88b1-72400d395500/extract-content/0.log" Jan 27 09:45:59 crc kubenswrapper[4985]: I0127 09:45:59.418826 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-zm8w2_795be290-3151-45f3-bdba-4a054aec68d9/marketplace-operator/0.log" Jan 27 09:45:59 crc kubenswrapper[4985]: I0127 09:45:59.576142 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qcvb7_56c8c3f8-8727-4ae8-9e43-34fc282cbf9d/extract-utilities/0.log" Jan 27 09:45:59 crc kubenswrapper[4985]: I0127 09:45:59.706020 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-sqmzg_3d5830bf-84c1-46df-88b1-72400d395500/registry-server/0.log" Jan 27 09:46:00 crc kubenswrapper[4985]: I0127 09:46:00.013873 4985 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-qcvb7_56c8c3f8-8727-4ae8-9e43-34fc282cbf9d/extract-content/0.log" Jan 27 09:46:00 crc kubenswrapper[4985]: I0127 09:46:00.065440 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qcvb7_56c8c3f8-8727-4ae8-9e43-34fc282cbf9d/extract-utilities/0.log" Jan 27 09:46:00 crc kubenswrapper[4985]: I0127 09:46:00.074037 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qcvb7_56c8c3f8-8727-4ae8-9e43-34fc282cbf9d/extract-content/0.log" Jan 27 09:46:00 crc kubenswrapper[4985]: I0127 09:46:00.326187 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qcvb7_56c8c3f8-8727-4ae8-9e43-34fc282cbf9d/extract-utilities/0.log" Jan 27 09:46:00 crc kubenswrapper[4985]: I0127 09:46:00.328698 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qcvb7_56c8c3f8-8727-4ae8-9e43-34fc282cbf9d/extract-content/0.log" Jan 27 09:46:00 crc kubenswrapper[4985]: I0127 09:46:00.448240 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qcvb7_56c8c3f8-8727-4ae8-9e43-34fc282cbf9d/registry-server/0.log" Jan 27 09:46:00 crc kubenswrapper[4985]: I0127 09:46:00.547879 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kxpvj_4335a2c0-14aa-4423-8527-22a5fe08f48d/extract-utilities/0.log" Jan 27 09:46:00 crc kubenswrapper[4985]: I0127 09:46:00.745896 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kxpvj_4335a2c0-14aa-4423-8527-22a5fe08f48d/extract-content/0.log" Jan 27 09:46:00 crc kubenswrapper[4985]: I0127 09:46:00.752700 4985 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-kxpvj_4335a2c0-14aa-4423-8527-22a5fe08f48d/extract-utilities/0.log" Jan 27 09:46:00 crc kubenswrapper[4985]: I0127 09:46:00.763139 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kxpvj_4335a2c0-14aa-4423-8527-22a5fe08f48d/extract-content/0.log" Jan 27 09:46:00 crc kubenswrapper[4985]: I0127 09:46:00.998372 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kxpvj_4335a2c0-14aa-4423-8527-22a5fe08f48d/extract-content/0.log" Jan 27 09:46:01 crc kubenswrapper[4985]: I0127 09:46:01.022059 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kxpvj_4335a2c0-14aa-4423-8527-22a5fe08f48d/extract-utilities/0.log" Jan 27 09:46:01 crc kubenswrapper[4985]: I0127 09:46:01.289548 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kxpvj_4335a2c0-14aa-4423-8527-22a5fe08f48d/registry-server/0.log" Jan 27 09:46:11 crc kubenswrapper[4985]: I0127 09:46:11.452718 4985 scope.go:117] "RemoveContainer" containerID="c1f1e255695e7c290fa12c0855087584f49fd2bbb4c37a4f7179d53498c81468" Jan 27 09:46:11 crc kubenswrapper[4985]: E0127 09:46:11.454289 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp9n5_openshift-machine-config-operator(c066dd2f-48d4-4f4f-935d-0e772678e610)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" Jan 27 09:46:22 crc kubenswrapper[4985]: I0127 09:46:22.453251 4985 scope.go:117] "RemoveContainer" containerID="c1f1e255695e7c290fa12c0855087584f49fd2bbb4c37a4f7179d53498c81468" Jan 27 09:46:22 crc kubenswrapper[4985]: E0127 09:46:22.454250 4985 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp9n5_openshift-machine-config-operator(c066dd2f-48d4-4f4f-935d-0e772678e610)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" Jan 27 09:46:33 crc kubenswrapper[4985]: I0127 09:46:33.452683 4985 scope.go:117] "RemoveContainer" containerID="c1f1e255695e7c290fa12c0855087584f49fd2bbb4c37a4f7179d53498c81468" Jan 27 09:46:33 crc kubenswrapper[4985]: E0127 09:46:33.454571 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp9n5_openshift-machine-config-operator(c066dd2f-48d4-4f4f-935d-0e772678e610)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" Jan 27 09:46:48 crc kubenswrapper[4985]: I0127 09:46:48.452590 4985 scope.go:117] "RemoveContainer" containerID="c1f1e255695e7c290fa12c0855087584f49fd2bbb4c37a4f7179d53498c81468" Jan 27 09:46:48 crc kubenswrapper[4985]: E0127 09:46:48.453623 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp9n5_openshift-machine-config-operator(c066dd2f-48d4-4f4f-935d-0e772678e610)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" Jan 27 09:46:50 crc kubenswrapper[4985]: I0127 09:46:50.424396 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-txr4w"] Jan 27 09:46:50 crc kubenswrapper[4985]: E0127 09:46:50.425408 4985 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e450f7e6-a48d-41cb-9a0f-6e037f9815e3" containerName="collect-profiles" Jan 27 09:46:50 crc kubenswrapper[4985]: I0127 09:46:50.425422 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="e450f7e6-a48d-41cb-9a0f-6e037f9815e3" containerName="collect-profiles" Jan 27 09:46:50 crc kubenswrapper[4985]: I0127 09:46:50.425632 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="e450f7e6-a48d-41cb-9a0f-6e037f9815e3" containerName="collect-profiles" Jan 27 09:46:50 crc kubenswrapper[4985]: I0127 09:46:50.426986 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-txr4w" Jan 27 09:46:50 crc kubenswrapper[4985]: I0127 09:46:50.496382 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-txr4w"] Jan 27 09:46:50 crc kubenswrapper[4985]: I0127 09:46:50.552614 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f3374fc-a3a2-4514-a837-7c7496fca0ed-utilities\") pod \"certified-operators-txr4w\" (UID: \"6f3374fc-a3a2-4514-a837-7c7496fca0ed\") " pod="openshift-marketplace/certified-operators-txr4w" Jan 27 09:46:50 crc kubenswrapper[4985]: I0127 09:46:50.552851 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f3374fc-a3a2-4514-a837-7c7496fca0ed-catalog-content\") pod \"certified-operators-txr4w\" (UID: \"6f3374fc-a3a2-4514-a837-7c7496fca0ed\") " pod="openshift-marketplace/certified-operators-txr4w" Jan 27 09:46:50 crc kubenswrapper[4985]: I0127 09:46:50.553461 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67gtm\" (UniqueName: 
\"kubernetes.io/projected/6f3374fc-a3a2-4514-a837-7c7496fca0ed-kube-api-access-67gtm\") pod \"certified-operators-txr4w\" (UID: \"6f3374fc-a3a2-4514-a837-7c7496fca0ed\") " pod="openshift-marketplace/certified-operators-txr4w" Jan 27 09:46:50 crc kubenswrapper[4985]: I0127 09:46:50.655573 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67gtm\" (UniqueName: \"kubernetes.io/projected/6f3374fc-a3a2-4514-a837-7c7496fca0ed-kube-api-access-67gtm\") pod \"certified-operators-txr4w\" (UID: \"6f3374fc-a3a2-4514-a837-7c7496fca0ed\") " pod="openshift-marketplace/certified-operators-txr4w" Jan 27 09:46:50 crc kubenswrapper[4985]: I0127 09:46:50.655704 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f3374fc-a3a2-4514-a837-7c7496fca0ed-utilities\") pod \"certified-operators-txr4w\" (UID: \"6f3374fc-a3a2-4514-a837-7c7496fca0ed\") " pod="openshift-marketplace/certified-operators-txr4w" Jan 27 09:46:50 crc kubenswrapper[4985]: I0127 09:46:50.655778 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f3374fc-a3a2-4514-a837-7c7496fca0ed-catalog-content\") pod \"certified-operators-txr4w\" (UID: \"6f3374fc-a3a2-4514-a837-7c7496fca0ed\") " pod="openshift-marketplace/certified-operators-txr4w" Jan 27 09:46:50 crc kubenswrapper[4985]: I0127 09:46:50.656361 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f3374fc-a3a2-4514-a837-7c7496fca0ed-utilities\") pod \"certified-operators-txr4w\" (UID: \"6f3374fc-a3a2-4514-a837-7c7496fca0ed\") " pod="openshift-marketplace/certified-operators-txr4w" Jan 27 09:46:50 crc kubenswrapper[4985]: I0127 09:46:50.656443 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/6f3374fc-a3a2-4514-a837-7c7496fca0ed-catalog-content\") pod \"certified-operators-txr4w\" (UID: \"6f3374fc-a3a2-4514-a837-7c7496fca0ed\") " pod="openshift-marketplace/certified-operators-txr4w" Jan 27 09:46:50 crc kubenswrapper[4985]: I0127 09:46:50.684818 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67gtm\" (UniqueName: \"kubernetes.io/projected/6f3374fc-a3a2-4514-a837-7c7496fca0ed-kube-api-access-67gtm\") pod \"certified-operators-txr4w\" (UID: \"6f3374fc-a3a2-4514-a837-7c7496fca0ed\") " pod="openshift-marketplace/certified-operators-txr4w" Jan 27 09:46:50 crc kubenswrapper[4985]: I0127 09:46:50.760119 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-txr4w" Jan 27 09:46:51 crc kubenswrapper[4985]: I0127 09:46:51.336441 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-txr4w"] Jan 27 09:46:51 crc kubenswrapper[4985]: I0127 09:46:51.684107 4985 generic.go:334] "Generic (PLEG): container finished" podID="6f3374fc-a3a2-4514-a837-7c7496fca0ed" containerID="f539489163609fbb327a61b747a1becbfd627482acdd70280e778e137ef05857" exitCode=0 Jan 27 09:46:51 crc kubenswrapper[4985]: I0127 09:46:51.684479 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-txr4w" event={"ID":"6f3374fc-a3a2-4514-a837-7c7496fca0ed","Type":"ContainerDied","Data":"f539489163609fbb327a61b747a1becbfd627482acdd70280e778e137ef05857"} Jan 27 09:46:51 crc kubenswrapper[4985]: I0127 09:46:51.684534 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-txr4w" event={"ID":"6f3374fc-a3a2-4514-a837-7c7496fca0ed","Type":"ContainerStarted","Data":"605861f65842afe3da9f358aa0fc673622ab78f100da3d7697e9c09f3bfedaac"} Jan 27 09:46:51 crc kubenswrapper[4985]: I0127 09:46:51.686626 4985 provider.go:102] Refreshing cache for 
provider: *credentialprovider.defaultDockerConfigProvider Jan 27 09:46:52 crc kubenswrapper[4985]: I0127 09:46:52.712230 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-txr4w" event={"ID":"6f3374fc-a3a2-4514-a837-7c7496fca0ed","Type":"ContainerStarted","Data":"52241f067ad5f110075b02d46cdec091fccefd0f5f4aad6dfea9ff46f8e157d0"} Jan 27 09:46:54 crc kubenswrapper[4985]: I0127 09:46:54.750899 4985 generic.go:334] "Generic (PLEG): container finished" podID="6f3374fc-a3a2-4514-a837-7c7496fca0ed" containerID="52241f067ad5f110075b02d46cdec091fccefd0f5f4aad6dfea9ff46f8e157d0" exitCode=0 Jan 27 09:46:54 crc kubenswrapper[4985]: I0127 09:46:54.751476 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-txr4w" event={"ID":"6f3374fc-a3a2-4514-a837-7c7496fca0ed","Type":"ContainerDied","Data":"52241f067ad5f110075b02d46cdec091fccefd0f5f4aad6dfea9ff46f8e157d0"} Jan 27 09:46:55 crc kubenswrapper[4985]: I0127 09:46:55.764851 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-txr4w" event={"ID":"6f3374fc-a3a2-4514-a837-7c7496fca0ed","Type":"ContainerStarted","Data":"910082ea29ff0b49fa52aac0f1fe7e0f5e5eaa6b317d21ea2b78b9fc4e6ac69b"} Jan 27 09:46:55 crc kubenswrapper[4985]: I0127 09:46:55.799632 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-txr4w" podStartSLOduration=2.21828951 podStartE2EDuration="5.799597278s" podCreationTimestamp="2026-01-27 09:46:50 +0000 UTC" firstStartedPulling="2026-01-27 09:46:51.686404464 +0000 UTC m=+3195.977499305" lastFinishedPulling="2026-01-27 09:46:55.267712192 +0000 UTC m=+3199.558807073" observedRunningTime="2026-01-27 09:46:55.79893928 +0000 UTC m=+3200.090034121" watchObservedRunningTime="2026-01-27 09:46:55.799597278 +0000 UTC m=+3200.090692129" Jan 27 09:47:00 crc kubenswrapper[4985]: I0127 09:47:00.454050 4985 scope.go:117] 
"RemoveContainer" containerID="c1f1e255695e7c290fa12c0855087584f49fd2bbb4c37a4f7179d53498c81468" Jan 27 09:47:00 crc kubenswrapper[4985]: E0127 09:47:00.456573 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp9n5_openshift-machine-config-operator(c066dd2f-48d4-4f4f-935d-0e772678e610)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" Jan 27 09:47:00 crc kubenswrapper[4985]: I0127 09:47:00.760632 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-txr4w" Jan 27 09:47:00 crc kubenswrapper[4985]: I0127 09:47:00.760733 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-txr4w" Jan 27 09:47:00 crc kubenswrapper[4985]: I0127 09:47:00.831469 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-txr4w" Jan 27 09:47:00 crc kubenswrapper[4985]: I0127 09:47:00.911701 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-txr4w" Jan 27 09:47:01 crc kubenswrapper[4985]: I0127 09:47:01.081488 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-txr4w"] Jan 27 09:47:02 crc kubenswrapper[4985]: I0127 09:47:02.838070 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-txr4w" podUID="6f3374fc-a3a2-4514-a837-7c7496fca0ed" containerName="registry-server" containerID="cri-o://910082ea29ff0b49fa52aac0f1fe7e0f5e5eaa6b317d21ea2b78b9fc4e6ac69b" gracePeriod=2 Jan 27 09:47:03 crc kubenswrapper[4985]: I0127 09:47:03.327440 4985 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-txr4w" Jan 27 09:47:03 crc kubenswrapper[4985]: I0127 09:47:03.498705 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67gtm\" (UniqueName: \"kubernetes.io/projected/6f3374fc-a3a2-4514-a837-7c7496fca0ed-kube-api-access-67gtm\") pod \"6f3374fc-a3a2-4514-a837-7c7496fca0ed\" (UID: \"6f3374fc-a3a2-4514-a837-7c7496fca0ed\") " Jan 27 09:47:03 crc kubenswrapper[4985]: I0127 09:47:03.498971 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f3374fc-a3a2-4514-a837-7c7496fca0ed-catalog-content\") pod \"6f3374fc-a3a2-4514-a837-7c7496fca0ed\" (UID: \"6f3374fc-a3a2-4514-a837-7c7496fca0ed\") " Jan 27 09:47:03 crc kubenswrapper[4985]: I0127 09:47:03.499056 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f3374fc-a3a2-4514-a837-7c7496fca0ed-utilities\") pod \"6f3374fc-a3a2-4514-a837-7c7496fca0ed\" (UID: \"6f3374fc-a3a2-4514-a837-7c7496fca0ed\") " Jan 27 09:47:03 crc kubenswrapper[4985]: I0127 09:47:03.500623 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f3374fc-a3a2-4514-a837-7c7496fca0ed-utilities" (OuterVolumeSpecName: "utilities") pod "6f3374fc-a3a2-4514-a837-7c7496fca0ed" (UID: "6f3374fc-a3a2-4514-a837-7c7496fca0ed"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:47:03 crc kubenswrapper[4985]: I0127 09:47:03.522225 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f3374fc-a3a2-4514-a837-7c7496fca0ed-kube-api-access-67gtm" (OuterVolumeSpecName: "kube-api-access-67gtm") pod "6f3374fc-a3a2-4514-a837-7c7496fca0ed" (UID: "6f3374fc-a3a2-4514-a837-7c7496fca0ed"). InnerVolumeSpecName "kube-api-access-67gtm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:47:03 crc kubenswrapper[4985]: I0127 09:47:03.566796 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f3374fc-a3a2-4514-a837-7c7496fca0ed-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6f3374fc-a3a2-4514-a837-7c7496fca0ed" (UID: "6f3374fc-a3a2-4514-a837-7c7496fca0ed"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:47:03 crc kubenswrapper[4985]: I0127 09:47:03.601559 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f3374fc-a3a2-4514-a837-7c7496fca0ed-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 09:47:03 crc kubenswrapper[4985]: I0127 09:47:03.601758 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67gtm\" (UniqueName: \"kubernetes.io/projected/6f3374fc-a3a2-4514-a837-7c7496fca0ed-kube-api-access-67gtm\") on node \"crc\" DevicePath \"\"" Jan 27 09:47:03 crc kubenswrapper[4985]: I0127 09:47:03.601819 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f3374fc-a3a2-4514-a837-7c7496fca0ed-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 09:47:03 crc kubenswrapper[4985]: I0127 09:47:03.852751 4985 generic.go:334] "Generic (PLEG): container finished" podID="6f3374fc-a3a2-4514-a837-7c7496fca0ed" containerID="910082ea29ff0b49fa52aac0f1fe7e0f5e5eaa6b317d21ea2b78b9fc4e6ac69b" exitCode=0 Jan 27 09:47:03 crc kubenswrapper[4985]: I0127 09:47:03.852837 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-txr4w" event={"ID":"6f3374fc-a3a2-4514-a837-7c7496fca0ed","Type":"ContainerDied","Data":"910082ea29ff0b49fa52aac0f1fe7e0f5e5eaa6b317d21ea2b78b9fc4e6ac69b"} Jan 27 09:47:03 crc kubenswrapper[4985]: I0127 09:47:03.855292 4985 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-txr4w" event={"ID":"6f3374fc-a3a2-4514-a837-7c7496fca0ed","Type":"ContainerDied","Data":"605861f65842afe3da9f358aa0fc673622ab78f100da3d7697e9c09f3bfedaac"} Jan 27 09:47:03 crc kubenswrapper[4985]: I0127 09:47:03.855456 4985 scope.go:117] "RemoveContainer" containerID="910082ea29ff0b49fa52aac0f1fe7e0f5e5eaa6b317d21ea2b78b9fc4e6ac69b" Jan 27 09:47:03 crc kubenswrapper[4985]: I0127 09:47:03.852860 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-txr4w" Jan 27 09:47:03 crc kubenswrapper[4985]: I0127 09:47:03.882212 4985 scope.go:117] "RemoveContainer" containerID="52241f067ad5f110075b02d46cdec091fccefd0f5f4aad6dfea9ff46f8e157d0" Jan 27 09:47:03 crc kubenswrapper[4985]: I0127 09:47:03.926874 4985 scope.go:117] "RemoveContainer" containerID="f539489163609fbb327a61b747a1becbfd627482acdd70280e778e137ef05857" Jan 27 09:47:03 crc kubenswrapper[4985]: I0127 09:47:03.927158 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-txr4w"] Jan 27 09:47:03 crc kubenswrapper[4985]: I0127 09:47:03.938468 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-txr4w"] Jan 27 09:47:03 crc kubenswrapper[4985]: I0127 09:47:03.991982 4985 scope.go:117] "RemoveContainer" containerID="910082ea29ff0b49fa52aac0f1fe7e0f5e5eaa6b317d21ea2b78b9fc4e6ac69b" Jan 27 09:47:03 crc kubenswrapper[4985]: E0127 09:47:03.992494 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"910082ea29ff0b49fa52aac0f1fe7e0f5e5eaa6b317d21ea2b78b9fc4e6ac69b\": container with ID starting with 910082ea29ff0b49fa52aac0f1fe7e0f5e5eaa6b317d21ea2b78b9fc4e6ac69b not found: ID does not exist" containerID="910082ea29ff0b49fa52aac0f1fe7e0f5e5eaa6b317d21ea2b78b9fc4e6ac69b" Jan 27 09:47:03 crc kubenswrapper[4985]: I0127 
09:47:03.992563 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"910082ea29ff0b49fa52aac0f1fe7e0f5e5eaa6b317d21ea2b78b9fc4e6ac69b"} err="failed to get container status \"910082ea29ff0b49fa52aac0f1fe7e0f5e5eaa6b317d21ea2b78b9fc4e6ac69b\": rpc error: code = NotFound desc = could not find container \"910082ea29ff0b49fa52aac0f1fe7e0f5e5eaa6b317d21ea2b78b9fc4e6ac69b\": container with ID starting with 910082ea29ff0b49fa52aac0f1fe7e0f5e5eaa6b317d21ea2b78b9fc4e6ac69b not found: ID does not exist" Jan 27 09:47:03 crc kubenswrapper[4985]: I0127 09:47:03.992596 4985 scope.go:117] "RemoveContainer" containerID="52241f067ad5f110075b02d46cdec091fccefd0f5f4aad6dfea9ff46f8e157d0" Jan 27 09:47:03 crc kubenswrapper[4985]: E0127 09:47:03.993268 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52241f067ad5f110075b02d46cdec091fccefd0f5f4aad6dfea9ff46f8e157d0\": container with ID starting with 52241f067ad5f110075b02d46cdec091fccefd0f5f4aad6dfea9ff46f8e157d0 not found: ID does not exist" containerID="52241f067ad5f110075b02d46cdec091fccefd0f5f4aad6dfea9ff46f8e157d0" Jan 27 09:47:03 crc kubenswrapper[4985]: I0127 09:47:03.993307 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52241f067ad5f110075b02d46cdec091fccefd0f5f4aad6dfea9ff46f8e157d0"} err="failed to get container status \"52241f067ad5f110075b02d46cdec091fccefd0f5f4aad6dfea9ff46f8e157d0\": rpc error: code = NotFound desc = could not find container \"52241f067ad5f110075b02d46cdec091fccefd0f5f4aad6dfea9ff46f8e157d0\": container with ID starting with 52241f067ad5f110075b02d46cdec091fccefd0f5f4aad6dfea9ff46f8e157d0 not found: ID does not exist" Jan 27 09:47:03 crc kubenswrapper[4985]: I0127 09:47:03.993342 4985 scope.go:117] "RemoveContainer" containerID="f539489163609fbb327a61b747a1becbfd627482acdd70280e778e137ef05857" Jan 27 09:47:03 crc 
kubenswrapper[4985]: E0127 09:47:03.993855 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f539489163609fbb327a61b747a1becbfd627482acdd70280e778e137ef05857\": container with ID starting with f539489163609fbb327a61b747a1becbfd627482acdd70280e778e137ef05857 not found: ID does not exist" containerID="f539489163609fbb327a61b747a1becbfd627482acdd70280e778e137ef05857" Jan 27 09:47:03 crc kubenswrapper[4985]: I0127 09:47:03.993886 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f539489163609fbb327a61b747a1becbfd627482acdd70280e778e137ef05857"} err="failed to get container status \"f539489163609fbb327a61b747a1becbfd627482acdd70280e778e137ef05857\": rpc error: code = NotFound desc = could not find container \"f539489163609fbb327a61b747a1becbfd627482acdd70280e778e137ef05857\": container with ID starting with f539489163609fbb327a61b747a1becbfd627482acdd70280e778e137ef05857 not found: ID does not exist" Jan 27 09:47:04 crc kubenswrapper[4985]: I0127 09:47:04.472729 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f3374fc-a3a2-4514-a837-7c7496fca0ed" path="/var/lib/kubelet/pods/6f3374fc-a3a2-4514-a837-7c7496fca0ed/volumes" Jan 27 09:47:12 crc kubenswrapper[4985]: I0127 09:47:12.453821 4985 scope.go:117] "RemoveContainer" containerID="c1f1e255695e7c290fa12c0855087584f49fd2bbb4c37a4f7179d53498c81468" Jan 27 09:47:12 crc kubenswrapper[4985]: E0127 09:47:12.455093 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp9n5_openshift-machine-config-operator(c066dd2f-48d4-4f4f-935d-0e772678e610)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" Jan 27 09:47:25 crc 
kubenswrapper[4985]: I0127 09:47:25.879923 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-l8zjq"] Jan 27 09:47:25 crc kubenswrapper[4985]: E0127 09:47:25.881330 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f3374fc-a3a2-4514-a837-7c7496fca0ed" containerName="extract-utilities" Jan 27 09:47:25 crc kubenswrapper[4985]: I0127 09:47:25.881355 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f3374fc-a3a2-4514-a837-7c7496fca0ed" containerName="extract-utilities" Jan 27 09:47:25 crc kubenswrapper[4985]: E0127 09:47:25.881372 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f3374fc-a3a2-4514-a837-7c7496fca0ed" containerName="registry-server" Jan 27 09:47:25 crc kubenswrapper[4985]: I0127 09:47:25.881410 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f3374fc-a3a2-4514-a837-7c7496fca0ed" containerName="registry-server" Jan 27 09:47:25 crc kubenswrapper[4985]: E0127 09:47:25.881459 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f3374fc-a3a2-4514-a837-7c7496fca0ed" containerName="extract-content" Jan 27 09:47:25 crc kubenswrapper[4985]: I0127 09:47:25.881472 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f3374fc-a3a2-4514-a837-7c7496fca0ed" containerName="extract-content" Jan 27 09:47:25 crc kubenswrapper[4985]: I0127 09:47:25.881804 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f3374fc-a3a2-4514-a837-7c7496fca0ed" containerName="registry-server" Jan 27 09:47:25 crc kubenswrapper[4985]: I0127 09:47:25.889505 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-l8zjq" Jan 27 09:47:25 crc kubenswrapper[4985]: I0127 09:47:25.961260 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d23aded-c2ea-44d7-8b0d-2b0572a637b4-catalog-content\") pod \"redhat-operators-l8zjq\" (UID: \"2d23aded-c2ea-44d7-8b0d-2b0572a637b4\") " pod="openshift-marketplace/redhat-operators-l8zjq" Jan 27 09:47:25 crc kubenswrapper[4985]: I0127 09:47:25.961832 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxps2\" (UniqueName: \"kubernetes.io/projected/2d23aded-c2ea-44d7-8b0d-2b0572a637b4-kube-api-access-wxps2\") pod \"redhat-operators-l8zjq\" (UID: \"2d23aded-c2ea-44d7-8b0d-2b0572a637b4\") " pod="openshift-marketplace/redhat-operators-l8zjq" Jan 27 09:47:25 crc kubenswrapper[4985]: I0127 09:47:25.966183 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d23aded-c2ea-44d7-8b0d-2b0572a637b4-utilities\") pod \"redhat-operators-l8zjq\" (UID: \"2d23aded-c2ea-44d7-8b0d-2b0572a637b4\") " pod="openshift-marketplace/redhat-operators-l8zjq" Jan 27 09:47:25 crc kubenswrapper[4985]: I0127 09:47:25.974045 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l8zjq"] Jan 27 09:47:26 crc kubenswrapper[4985]: I0127 09:47:26.068822 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d23aded-c2ea-44d7-8b0d-2b0572a637b4-utilities\") pod \"redhat-operators-l8zjq\" (UID: \"2d23aded-c2ea-44d7-8b0d-2b0572a637b4\") " pod="openshift-marketplace/redhat-operators-l8zjq" Jan 27 09:47:26 crc kubenswrapper[4985]: I0127 09:47:26.068993 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d23aded-c2ea-44d7-8b0d-2b0572a637b4-catalog-content\") pod \"redhat-operators-l8zjq\" (UID: \"2d23aded-c2ea-44d7-8b0d-2b0572a637b4\") " pod="openshift-marketplace/redhat-operators-l8zjq" Jan 27 09:47:26 crc kubenswrapper[4985]: I0127 09:47:26.069099 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxps2\" (UniqueName: \"kubernetes.io/projected/2d23aded-c2ea-44d7-8b0d-2b0572a637b4-kube-api-access-wxps2\") pod \"redhat-operators-l8zjq\" (UID: \"2d23aded-c2ea-44d7-8b0d-2b0572a637b4\") " pod="openshift-marketplace/redhat-operators-l8zjq" Jan 27 09:47:26 crc kubenswrapper[4985]: I0127 09:47:26.069570 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d23aded-c2ea-44d7-8b0d-2b0572a637b4-utilities\") pod \"redhat-operators-l8zjq\" (UID: \"2d23aded-c2ea-44d7-8b0d-2b0572a637b4\") " pod="openshift-marketplace/redhat-operators-l8zjq" Jan 27 09:47:26 crc kubenswrapper[4985]: I0127 09:47:26.069903 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d23aded-c2ea-44d7-8b0d-2b0572a637b4-catalog-content\") pod \"redhat-operators-l8zjq\" (UID: \"2d23aded-c2ea-44d7-8b0d-2b0572a637b4\") " pod="openshift-marketplace/redhat-operators-l8zjq" Jan 27 09:47:26 crc kubenswrapper[4985]: I0127 09:47:26.095192 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxps2\" (UniqueName: \"kubernetes.io/projected/2d23aded-c2ea-44d7-8b0d-2b0572a637b4-kube-api-access-wxps2\") pod \"redhat-operators-l8zjq\" (UID: \"2d23aded-c2ea-44d7-8b0d-2b0572a637b4\") " pod="openshift-marketplace/redhat-operators-l8zjq" Jan 27 09:47:26 crc kubenswrapper[4985]: I0127 09:47:26.267778 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-l8zjq" Jan 27 09:47:26 crc kubenswrapper[4985]: I0127 09:47:26.791005 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l8zjq"] Jan 27 09:47:27 crc kubenswrapper[4985]: I0127 09:47:27.144637 4985 generic.go:334] "Generic (PLEG): container finished" podID="2d23aded-c2ea-44d7-8b0d-2b0572a637b4" containerID="15cec7dc849cbfd027ca8d126a7bf994e53da206d2d8f56f2de4a4569677013a" exitCode=0 Jan 27 09:47:27 crc kubenswrapper[4985]: I0127 09:47:27.144750 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l8zjq" event={"ID":"2d23aded-c2ea-44d7-8b0d-2b0572a637b4","Type":"ContainerDied","Data":"15cec7dc849cbfd027ca8d126a7bf994e53da206d2d8f56f2de4a4569677013a"} Jan 27 09:47:27 crc kubenswrapper[4985]: I0127 09:47:27.145956 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l8zjq" event={"ID":"2d23aded-c2ea-44d7-8b0d-2b0572a637b4","Type":"ContainerStarted","Data":"42908084d9eddd74eb15e32439bc2f2dbf7e155b1e29026fd6cf8bd3a3e4a86a"} Jan 27 09:47:27 crc kubenswrapper[4985]: I0127 09:47:27.452372 4985 scope.go:117] "RemoveContainer" containerID="c1f1e255695e7c290fa12c0855087584f49fd2bbb4c37a4f7179d53498c81468" Jan 27 09:47:27 crc kubenswrapper[4985]: E0127 09:47:27.452671 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp9n5_openshift-machine-config-operator(c066dd2f-48d4-4f4f-935d-0e772678e610)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" Jan 27 09:47:28 crc kubenswrapper[4985]: I0127 09:47:28.160479 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l8zjq" 
event={"ID":"2d23aded-c2ea-44d7-8b0d-2b0572a637b4","Type":"ContainerStarted","Data":"0f37cc1d040aecd719ad2a7b3a664c46f695f096f438cc90dcce7cb7f90e541b"} Jan 27 09:47:33 crc kubenswrapper[4985]: I0127 09:47:33.234757 4985 generic.go:334] "Generic (PLEG): container finished" podID="2d23aded-c2ea-44d7-8b0d-2b0572a637b4" containerID="0f37cc1d040aecd719ad2a7b3a664c46f695f096f438cc90dcce7cb7f90e541b" exitCode=0 Jan 27 09:47:33 crc kubenswrapper[4985]: I0127 09:47:33.234858 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l8zjq" event={"ID":"2d23aded-c2ea-44d7-8b0d-2b0572a637b4","Type":"ContainerDied","Data":"0f37cc1d040aecd719ad2a7b3a664c46f695f096f438cc90dcce7cb7f90e541b"} Jan 27 09:47:34 crc kubenswrapper[4985]: I0127 09:47:34.272649 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l8zjq" event={"ID":"2d23aded-c2ea-44d7-8b0d-2b0572a637b4","Type":"ContainerStarted","Data":"32819554bf7316be700d7ecc04032081cf498bc9124d88733f4bde215347b1ab"} Jan 27 09:47:34 crc kubenswrapper[4985]: I0127 09:47:34.303734 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-l8zjq" podStartSLOduration=2.707744135 podStartE2EDuration="9.303706579s" podCreationTimestamp="2026-01-27 09:47:25 +0000 UTC" firstStartedPulling="2026-01-27 09:47:27.148278838 +0000 UTC m=+3231.439373679" lastFinishedPulling="2026-01-27 09:47:33.744241272 +0000 UTC m=+3238.035336123" observedRunningTime="2026-01-27 09:47:34.297301734 +0000 UTC m=+3238.588396615" watchObservedRunningTime="2026-01-27 09:47:34.303706579 +0000 UTC m=+3238.594801430" Jan 27 09:47:36 crc kubenswrapper[4985]: I0127 09:47:36.270099 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-l8zjq" Jan 27 09:47:36 crc kubenswrapper[4985]: I0127 09:47:36.270677 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/redhat-operators-l8zjq" Jan 27 09:47:37 crc kubenswrapper[4985]: I0127 09:47:37.328615 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-l8zjq" podUID="2d23aded-c2ea-44d7-8b0d-2b0572a637b4" containerName="registry-server" probeResult="failure" output=< Jan 27 09:47:37 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 27 09:47:37 crc kubenswrapper[4985]: > Jan 27 09:47:38 crc kubenswrapper[4985]: I0127 09:47:38.452171 4985 scope.go:117] "RemoveContainer" containerID="c1f1e255695e7c290fa12c0855087584f49fd2bbb4c37a4f7179d53498c81468" Jan 27 09:47:38 crc kubenswrapper[4985]: E0127 09:47:38.452670 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp9n5_openshift-machine-config-operator(c066dd2f-48d4-4f4f-935d-0e772678e610)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" Jan 27 09:47:46 crc kubenswrapper[4985]: I0127 09:47:46.336685 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-l8zjq" Jan 27 09:47:46 crc kubenswrapper[4985]: I0127 09:47:46.503489 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-l8zjq" Jan 27 09:47:46 crc kubenswrapper[4985]: I0127 09:47:46.590454 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l8zjq"] Jan 27 09:47:47 crc kubenswrapper[4985]: I0127 09:47:47.472268 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-l8zjq" podUID="2d23aded-c2ea-44d7-8b0d-2b0572a637b4" containerName="registry-server" 
containerID="cri-o://32819554bf7316be700d7ecc04032081cf498bc9124d88733f4bde215347b1ab" gracePeriod=2 Jan 27 09:47:47 crc kubenswrapper[4985]: E0127 09:47:47.722733 4985 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d23aded_c2ea_44d7_8b0d_2b0572a637b4.slice/crio-conmon-32819554bf7316be700d7ecc04032081cf498bc9124d88733f4bde215347b1ab.scope\": RecentStats: unable to find data in memory cache]" Jan 27 09:47:48 crc kubenswrapper[4985]: I0127 09:47:48.049805 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l8zjq" Jan 27 09:47:48 crc kubenswrapper[4985]: I0127 09:47:48.227867 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d23aded-c2ea-44d7-8b0d-2b0572a637b4-catalog-content\") pod \"2d23aded-c2ea-44d7-8b0d-2b0572a637b4\" (UID: \"2d23aded-c2ea-44d7-8b0d-2b0572a637b4\") " Jan 27 09:47:48 crc kubenswrapper[4985]: I0127 09:47:48.228029 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxps2\" (UniqueName: \"kubernetes.io/projected/2d23aded-c2ea-44d7-8b0d-2b0572a637b4-kube-api-access-wxps2\") pod \"2d23aded-c2ea-44d7-8b0d-2b0572a637b4\" (UID: \"2d23aded-c2ea-44d7-8b0d-2b0572a637b4\") " Jan 27 09:47:48 crc kubenswrapper[4985]: I0127 09:47:48.228088 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d23aded-c2ea-44d7-8b0d-2b0572a637b4-utilities\") pod \"2d23aded-c2ea-44d7-8b0d-2b0572a637b4\" (UID: \"2d23aded-c2ea-44d7-8b0d-2b0572a637b4\") " Jan 27 09:47:48 crc kubenswrapper[4985]: I0127 09:47:48.229816 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d23aded-c2ea-44d7-8b0d-2b0572a637b4-utilities" 
(OuterVolumeSpecName: "utilities") pod "2d23aded-c2ea-44d7-8b0d-2b0572a637b4" (UID: "2d23aded-c2ea-44d7-8b0d-2b0572a637b4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:47:48 crc kubenswrapper[4985]: I0127 09:47:48.238790 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d23aded-c2ea-44d7-8b0d-2b0572a637b4-kube-api-access-wxps2" (OuterVolumeSpecName: "kube-api-access-wxps2") pod "2d23aded-c2ea-44d7-8b0d-2b0572a637b4" (UID: "2d23aded-c2ea-44d7-8b0d-2b0572a637b4"). InnerVolumeSpecName "kube-api-access-wxps2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:47:48 crc kubenswrapper[4985]: I0127 09:47:48.330801 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxps2\" (UniqueName: \"kubernetes.io/projected/2d23aded-c2ea-44d7-8b0d-2b0572a637b4-kube-api-access-wxps2\") on node \"crc\" DevicePath \"\"" Jan 27 09:47:48 crc kubenswrapper[4985]: I0127 09:47:48.330840 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d23aded-c2ea-44d7-8b0d-2b0572a637b4-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 09:47:48 crc kubenswrapper[4985]: I0127 09:47:48.358280 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d23aded-c2ea-44d7-8b0d-2b0572a637b4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2d23aded-c2ea-44d7-8b0d-2b0572a637b4" (UID: "2d23aded-c2ea-44d7-8b0d-2b0572a637b4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:47:48 crc kubenswrapper[4985]: I0127 09:47:48.435146 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d23aded-c2ea-44d7-8b0d-2b0572a637b4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 09:47:48 crc kubenswrapper[4985]: I0127 09:47:48.498045 4985 generic.go:334] "Generic (PLEG): container finished" podID="2d23aded-c2ea-44d7-8b0d-2b0572a637b4" containerID="32819554bf7316be700d7ecc04032081cf498bc9124d88733f4bde215347b1ab" exitCode=0 Jan 27 09:47:48 crc kubenswrapper[4985]: I0127 09:47:48.498110 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l8zjq" event={"ID":"2d23aded-c2ea-44d7-8b0d-2b0572a637b4","Type":"ContainerDied","Data":"32819554bf7316be700d7ecc04032081cf498bc9124d88733f4bde215347b1ab"} Jan 27 09:47:48 crc kubenswrapper[4985]: I0127 09:47:48.498182 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l8zjq" event={"ID":"2d23aded-c2ea-44d7-8b0d-2b0572a637b4","Type":"ContainerDied","Data":"42908084d9eddd74eb15e32439bc2f2dbf7e155b1e29026fd6cf8bd3a3e4a86a"} Jan 27 09:47:48 crc kubenswrapper[4985]: I0127 09:47:48.498225 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-l8zjq" Jan 27 09:47:48 crc kubenswrapper[4985]: I0127 09:47:48.498243 4985 scope.go:117] "RemoveContainer" containerID="32819554bf7316be700d7ecc04032081cf498bc9124d88733f4bde215347b1ab" Jan 27 09:47:48 crc kubenswrapper[4985]: I0127 09:47:48.545652 4985 scope.go:117] "RemoveContainer" containerID="0f37cc1d040aecd719ad2a7b3a664c46f695f096f438cc90dcce7cb7f90e541b" Jan 27 09:47:48 crc kubenswrapper[4985]: I0127 09:47:48.546887 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l8zjq"] Jan 27 09:47:48 crc kubenswrapper[4985]: I0127 09:47:48.562530 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-l8zjq"] Jan 27 09:47:48 crc kubenswrapper[4985]: I0127 09:47:48.585777 4985 scope.go:117] "RemoveContainer" containerID="15cec7dc849cbfd027ca8d126a7bf994e53da206d2d8f56f2de4a4569677013a" Jan 27 09:47:48 crc kubenswrapper[4985]: I0127 09:47:48.623120 4985 scope.go:117] "RemoveContainer" containerID="32819554bf7316be700d7ecc04032081cf498bc9124d88733f4bde215347b1ab" Jan 27 09:47:48 crc kubenswrapper[4985]: E0127 09:47:48.624668 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32819554bf7316be700d7ecc04032081cf498bc9124d88733f4bde215347b1ab\": container with ID starting with 32819554bf7316be700d7ecc04032081cf498bc9124d88733f4bde215347b1ab not found: ID does not exist" containerID="32819554bf7316be700d7ecc04032081cf498bc9124d88733f4bde215347b1ab" Jan 27 09:47:48 crc kubenswrapper[4985]: I0127 09:47:48.624794 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32819554bf7316be700d7ecc04032081cf498bc9124d88733f4bde215347b1ab"} err="failed to get container status \"32819554bf7316be700d7ecc04032081cf498bc9124d88733f4bde215347b1ab\": rpc error: code = NotFound desc = could not find container 
\"32819554bf7316be700d7ecc04032081cf498bc9124d88733f4bde215347b1ab\": container with ID starting with 32819554bf7316be700d7ecc04032081cf498bc9124d88733f4bde215347b1ab not found: ID does not exist" Jan 27 09:47:48 crc kubenswrapper[4985]: I0127 09:47:48.624834 4985 scope.go:117] "RemoveContainer" containerID="0f37cc1d040aecd719ad2a7b3a664c46f695f096f438cc90dcce7cb7f90e541b" Jan 27 09:47:48 crc kubenswrapper[4985]: E0127 09:47:48.625366 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f37cc1d040aecd719ad2a7b3a664c46f695f096f438cc90dcce7cb7f90e541b\": container with ID starting with 0f37cc1d040aecd719ad2a7b3a664c46f695f096f438cc90dcce7cb7f90e541b not found: ID does not exist" containerID="0f37cc1d040aecd719ad2a7b3a664c46f695f096f438cc90dcce7cb7f90e541b" Jan 27 09:47:48 crc kubenswrapper[4985]: I0127 09:47:48.625401 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f37cc1d040aecd719ad2a7b3a664c46f695f096f438cc90dcce7cb7f90e541b"} err="failed to get container status \"0f37cc1d040aecd719ad2a7b3a664c46f695f096f438cc90dcce7cb7f90e541b\": rpc error: code = NotFound desc = could not find container \"0f37cc1d040aecd719ad2a7b3a664c46f695f096f438cc90dcce7cb7f90e541b\": container with ID starting with 0f37cc1d040aecd719ad2a7b3a664c46f695f096f438cc90dcce7cb7f90e541b not found: ID does not exist" Jan 27 09:47:48 crc kubenswrapper[4985]: I0127 09:47:48.625415 4985 scope.go:117] "RemoveContainer" containerID="15cec7dc849cbfd027ca8d126a7bf994e53da206d2d8f56f2de4a4569677013a" Jan 27 09:47:48 crc kubenswrapper[4985]: E0127 09:47:48.625862 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15cec7dc849cbfd027ca8d126a7bf994e53da206d2d8f56f2de4a4569677013a\": container with ID starting with 15cec7dc849cbfd027ca8d126a7bf994e53da206d2d8f56f2de4a4569677013a not found: ID does not exist" 
containerID="15cec7dc849cbfd027ca8d126a7bf994e53da206d2d8f56f2de4a4569677013a" Jan 27 09:47:48 crc kubenswrapper[4985]: I0127 09:47:48.625910 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15cec7dc849cbfd027ca8d126a7bf994e53da206d2d8f56f2de4a4569677013a"} err="failed to get container status \"15cec7dc849cbfd027ca8d126a7bf994e53da206d2d8f56f2de4a4569677013a\": rpc error: code = NotFound desc = could not find container \"15cec7dc849cbfd027ca8d126a7bf994e53da206d2d8f56f2de4a4569677013a\": container with ID starting with 15cec7dc849cbfd027ca8d126a7bf994e53da206d2d8f56f2de4a4569677013a not found: ID does not exist" Jan 27 09:47:50 crc kubenswrapper[4985]: I0127 09:47:50.473673 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d23aded-c2ea-44d7-8b0d-2b0572a637b4" path="/var/lib/kubelet/pods/2d23aded-c2ea-44d7-8b0d-2b0572a637b4/volumes" Jan 27 09:47:52 crc kubenswrapper[4985]: I0127 09:47:52.454484 4985 scope.go:117] "RemoveContainer" containerID="c1f1e255695e7c290fa12c0855087584f49fd2bbb4c37a4f7179d53498c81468" Jan 27 09:47:52 crc kubenswrapper[4985]: E0127 09:47:52.454857 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp9n5_openshift-machine-config-operator(c066dd2f-48d4-4f4f-935d-0e772678e610)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" Jan 27 09:47:52 crc kubenswrapper[4985]: I0127 09:47:52.559284 4985 generic.go:334] "Generic (PLEG): container finished" podID="d344afb8-eaf7-4dad-aea2-894652e26583" containerID="46beda096095583d4a84a16657ecade49ca1c29cf866b903cc5085c0812da1c5" exitCode=0 Jan 27 09:47:52 crc kubenswrapper[4985]: I0127 09:47:52.559368 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-must-gather-5fxzf/must-gather-krks8" event={"ID":"d344afb8-eaf7-4dad-aea2-894652e26583","Type":"ContainerDied","Data":"46beda096095583d4a84a16657ecade49ca1c29cf866b903cc5085c0812da1c5"} Jan 27 09:47:52 crc kubenswrapper[4985]: I0127 09:47:52.560450 4985 scope.go:117] "RemoveContainer" containerID="46beda096095583d4a84a16657ecade49ca1c29cf866b903cc5085c0812da1c5" Jan 27 09:47:52 crc kubenswrapper[4985]: I0127 09:47:52.688000 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-5fxzf_must-gather-krks8_d344afb8-eaf7-4dad-aea2-894652e26583/gather/0.log" Jan 27 09:48:01 crc kubenswrapper[4985]: I0127 09:48:01.744423 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-5fxzf/must-gather-krks8"] Jan 27 09:48:01 crc kubenswrapper[4985]: I0127 09:48:01.746062 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-5fxzf/must-gather-krks8" podUID="d344afb8-eaf7-4dad-aea2-894652e26583" containerName="copy" containerID="cri-o://2a3ed8f57b4627af28dcdf4e9a839d28c20446d890e6b6f6443251aad84000fd" gracePeriod=2 Jan 27 09:48:01 crc kubenswrapper[4985]: I0127 09:48:01.773723 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-5fxzf/must-gather-krks8"] Jan 27 09:48:02 crc kubenswrapper[4985]: I0127 09:48:02.452411 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-5fxzf_must-gather-krks8_d344afb8-eaf7-4dad-aea2-894652e26583/copy/0.log" Jan 27 09:48:02 crc kubenswrapper[4985]: I0127 09:48:02.453317 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5fxzf/must-gather-krks8" Jan 27 09:48:02 crc kubenswrapper[4985]: I0127 09:48:02.610139 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d344afb8-eaf7-4dad-aea2-894652e26583-must-gather-output\") pod \"d344afb8-eaf7-4dad-aea2-894652e26583\" (UID: \"d344afb8-eaf7-4dad-aea2-894652e26583\") " Jan 27 09:48:02 crc kubenswrapper[4985]: I0127 09:48:02.610281 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vl44b\" (UniqueName: \"kubernetes.io/projected/d344afb8-eaf7-4dad-aea2-894652e26583-kube-api-access-vl44b\") pod \"d344afb8-eaf7-4dad-aea2-894652e26583\" (UID: \"d344afb8-eaf7-4dad-aea2-894652e26583\") " Jan 27 09:48:02 crc kubenswrapper[4985]: I0127 09:48:02.621213 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d344afb8-eaf7-4dad-aea2-894652e26583-kube-api-access-vl44b" (OuterVolumeSpecName: "kube-api-access-vl44b") pod "d344afb8-eaf7-4dad-aea2-894652e26583" (UID: "d344afb8-eaf7-4dad-aea2-894652e26583"). InnerVolumeSpecName "kube-api-access-vl44b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:48:02 crc kubenswrapper[4985]: I0127 09:48:02.696449 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-5fxzf_must-gather-krks8_d344afb8-eaf7-4dad-aea2-894652e26583/copy/0.log" Jan 27 09:48:02 crc kubenswrapper[4985]: I0127 09:48:02.696803 4985 generic.go:334] "Generic (PLEG): container finished" podID="d344afb8-eaf7-4dad-aea2-894652e26583" containerID="2a3ed8f57b4627af28dcdf4e9a839d28c20446d890e6b6f6443251aad84000fd" exitCode=143 Jan 27 09:48:02 crc kubenswrapper[4985]: I0127 09:48:02.696897 4985 scope.go:117] "RemoveContainer" containerID="2a3ed8f57b4627af28dcdf4e9a839d28c20446d890e6b6f6443251aad84000fd" Jan 27 09:48:02 crc kubenswrapper[4985]: I0127 09:48:02.696925 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5fxzf/must-gather-krks8" Jan 27 09:48:02 crc kubenswrapper[4985]: I0127 09:48:02.721502 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vl44b\" (UniqueName: \"kubernetes.io/projected/d344afb8-eaf7-4dad-aea2-894652e26583-kube-api-access-vl44b\") on node \"crc\" DevicePath \"\"" Jan 27 09:48:02 crc kubenswrapper[4985]: I0127 09:48:02.738716 4985 scope.go:117] "RemoveContainer" containerID="46beda096095583d4a84a16657ecade49ca1c29cf866b903cc5085c0812da1c5" Jan 27 09:48:02 crc kubenswrapper[4985]: I0127 09:48:02.860214 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d344afb8-eaf7-4dad-aea2-894652e26583-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "d344afb8-eaf7-4dad-aea2-894652e26583" (UID: "d344afb8-eaf7-4dad-aea2-894652e26583"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:48:02 crc kubenswrapper[4985]: I0127 09:48:02.890742 4985 scope.go:117] "RemoveContainer" containerID="2a3ed8f57b4627af28dcdf4e9a839d28c20446d890e6b6f6443251aad84000fd" Jan 27 09:48:02 crc kubenswrapper[4985]: E0127 09:48:02.891471 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a3ed8f57b4627af28dcdf4e9a839d28c20446d890e6b6f6443251aad84000fd\": container with ID starting with 2a3ed8f57b4627af28dcdf4e9a839d28c20446d890e6b6f6443251aad84000fd not found: ID does not exist" containerID="2a3ed8f57b4627af28dcdf4e9a839d28c20446d890e6b6f6443251aad84000fd" Jan 27 09:48:02 crc kubenswrapper[4985]: I0127 09:48:02.891547 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a3ed8f57b4627af28dcdf4e9a839d28c20446d890e6b6f6443251aad84000fd"} err="failed to get container status \"2a3ed8f57b4627af28dcdf4e9a839d28c20446d890e6b6f6443251aad84000fd\": rpc error: code = NotFound desc = could not find container \"2a3ed8f57b4627af28dcdf4e9a839d28c20446d890e6b6f6443251aad84000fd\": container with ID starting with 2a3ed8f57b4627af28dcdf4e9a839d28c20446d890e6b6f6443251aad84000fd not found: ID does not exist" Jan 27 09:48:02 crc kubenswrapper[4985]: I0127 09:48:02.891598 4985 scope.go:117] "RemoveContainer" containerID="46beda096095583d4a84a16657ecade49ca1c29cf866b903cc5085c0812da1c5" Jan 27 09:48:02 crc kubenswrapper[4985]: E0127 09:48:02.892144 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46beda096095583d4a84a16657ecade49ca1c29cf866b903cc5085c0812da1c5\": container with ID starting with 46beda096095583d4a84a16657ecade49ca1c29cf866b903cc5085c0812da1c5 not found: ID does not exist" containerID="46beda096095583d4a84a16657ecade49ca1c29cf866b903cc5085c0812da1c5" Jan 27 09:48:02 crc kubenswrapper[4985]: I0127 09:48:02.892191 
4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46beda096095583d4a84a16657ecade49ca1c29cf866b903cc5085c0812da1c5"} err="failed to get container status \"46beda096095583d4a84a16657ecade49ca1c29cf866b903cc5085c0812da1c5\": rpc error: code = NotFound desc = could not find container \"46beda096095583d4a84a16657ecade49ca1c29cf866b903cc5085c0812da1c5\": container with ID starting with 46beda096095583d4a84a16657ecade49ca1c29cf866b903cc5085c0812da1c5 not found: ID does not exist" Jan 27 09:48:02 crc kubenswrapper[4985]: I0127 09:48:02.925820 4985 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d344afb8-eaf7-4dad-aea2-894652e26583-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 27 09:48:04 crc kubenswrapper[4985]: I0127 09:48:04.472233 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d344afb8-eaf7-4dad-aea2-894652e26583" path="/var/lib/kubelet/pods/d344afb8-eaf7-4dad-aea2-894652e26583/volumes" Jan 27 09:48:06 crc kubenswrapper[4985]: I0127 09:48:06.458833 4985 scope.go:117] "RemoveContainer" containerID="c1f1e255695e7c290fa12c0855087584f49fd2bbb4c37a4f7179d53498c81468" Jan 27 09:48:06 crc kubenswrapper[4985]: E0127 09:48:06.460909 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp9n5_openshift-machine-config-operator(c066dd2f-48d4-4f4f-935d-0e772678e610)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" Jan 27 09:48:21 crc kubenswrapper[4985]: I0127 09:48:21.453004 4985 scope.go:117] "RemoveContainer" containerID="c1f1e255695e7c290fa12c0855087584f49fd2bbb4c37a4f7179d53498c81468" Jan 27 09:48:21 crc kubenswrapper[4985]: E0127 09:48:21.453887 4985 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp9n5_openshift-machine-config-operator(c066dd2f-48d4-4f4f-935d-0e772678e610)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" Jan 27 09:48:32 crc kubenswrapper[4985]: I0127 09:48:32.452553 4985 scope.go:117] "RemoveContainer" containerID="c1f1e255695e7c290fa12c0855087584f49fd2bbb4c37a4f7179d53498c81468" Jan 27 09:48:32 crc kubenswrapper[4985]: E0127 09:48:32.454164 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp9n5_openshift-machine-config-operator(c066dd2f-48d4-4f4f-935d-0e772678e610)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" Jan 27 09:48:43 crc kubenswrapper[4985]: I0127 09:48:43.452816 4985 scope.go:117] "RemoveContainer" containerID="c1f1e255695e7c290fa12c0855087584f49fd2bbb4c37a4f7179d53498c81468" Jan 27 09:48:43 crc kubenswrapper[4985]: E0127 09:48:43.454374 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp9n5_openshift-machine-config-operator(c066dd2f-48d4-4f4f-935d-0e772678e610)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" Jan 27 09:48:55 crc kubenswrapper[4985]: I0127 09:48:55.452309 4985 scope.go:117] "RemoveContainer" containerID="c1f1e255695e7c290fa12c0855087584f49fd2bbb4c37a4f7179d53498c81468" Jan 27 09:48:55 crc kubenswrapper[4985]: E0127 09:48:55.453919 4985 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp9n5_openshift-machine-config-operator(c066dd2f-48d4-4f4f-935d-0e772678e610)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" Jan 27 09:49:10 crc kubenswrapper[4985]: I0127 09:49:10.452311 4985 scope.go:117] "RemoveContainer" containerID="c1f1e255695e7c290fa12c0855087584f49fd2bbb4c37a4f7179d53498c81468" Jan 27 09:49:10 crc kubenswrapper[4985]: E0127 09:49:10.453381 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp9n5_openshift-machine-config-operator(c066dd2f-48d4-4f4f-935d-0e772678e610)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" Jan 27 09:49:24 crc kubenswrapper[4985]: I0127 09:49:24.453583 4985 scope.go:117] "RemoveContainer" containerID="c1f1e255695e7c290fa12c0855087584f49fd2bbb4c37a4f7179d53498c81468" Jan 27 09:49:24 crc kubenswrapper[4985]: E0127 09:49:24.455174 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp9n5_openshift-machine-config-operator(c066dd2f-48d4-4f4f-935d-0e772678e610)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" Jan 27 09:49:35 crc kubenswrapper[4985]: I0127 09:49:35.453064 4985 scope.go:117] "RemoveContainer" containerID="c1f1e255695e7c290fa12c0855087584f49fd2bbb4c37a4f7179d53498c81468" Jan 27 09:49:35 crc kubenswrapper[4985]: E0127 
09:49:35.454364 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp9n5_openshift-machine-config-operator(c066dd2f-48d4-4f4f-935d-0e772678e610)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610" Jan 27 09:49:40 crc kubenswrapper[4985]: I0127 09:49:40.164034 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4stvd"] Jan 27 09:49:40 crc kubenswrapper[4985]: E0127 09:49:40.166208 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d23aded-c2ea-44d7-8b0d-2b0572a637b4" containerName="extract-utilities" Jan 27 09:49:40 crc kubenswrapper[4985]: I0127 09:49:40.166248 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d23aded-c2ea-44d7-8b0d-2b0572a637b4" containerName="extract-utilities" Jan 27 09:49:40 crc kubenswrapper[4985]: E0127 09:49:40.166307 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d344afb8-eaf7-4dad-aea2-894652e26583" containerName="copy" Jan 27 09:49:40 crc kubenswrapper[4985]: I0127 09:49:40.166328 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="d344afb8-eaf7-4dad-aea2-894652e26583" containerName="copy" Jan 27 09:49:40 crc kubenswrapper[4985]: E0127 09:49:40.166399 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d344afb8-eaf7-4dad-aea2-894652e26583" containerName="gather" Jan 27 09:49:40 crc kubenswrapper[4985]: I0127 09:49:40.166418 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="d344afb8-eaf7-4dad-aea2-894652e26583" containerName="gather" Jan 27 09:49:40 crc kubenswrapper[4985]: E0127 09:49:40.166443 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d23aded-c2ea-44d7-8b0d-2b0572a637b4" containerName="extract-content" Jan 27 09:49:40 crc 
kubenswrapper[4985]: I0127 09:49:40.166463 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d23aded-c2ea-44d7-8b0d-2b0572a637b4" containerName="extract-content" Jan 27 09:49:40 crc kubenswrapper[4985]: E0127 09:49:40.166497 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d23aded-c2ea-44d7-8b0d-2b0572a637b4" containerName="registry-server" Jan 27 09:49:40 crc kubenswrapper[4985]: I0127 09:49:40.166548 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d23aded-c2ea-44d7-8b0d-2b0572a637b4" containerName="registry-server" Jan 27 09:49:40 crc kubenswrapper[4985]: I0127 09:49:40.166940 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="d344afb8-eaf7-4dad-aea2-894652e26583" containerName="gather" Jan 27 09:49:40 crc kubenswrapper[4985]: I0127 09:49:40.166980 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="d344afb8-eaf7-4dad-aea2-894652e26583" containerName="copy" Jan 27 09:49:40 crc kubenswrapper[4985]: I0127 09:49:40.167015 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d23aded-c2ea-44d7-8b0d-2b0572a637b4" containerName="registry-server" Jan 27 09:49:40 crc kubenswrapper[4985]: I0127 09:49:40.170292 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4stvd"
Jan 27 09:49:40 crc kubenswrapper[4985]: I0127 09:49:40.174785 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4stvd"]
Jan 27 09:49:40 crc kubenswrapper[4985]: I0127 09:49:40.278344 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhptm\" (UniqueName: \"kubernetes.io/projected/19955a58-8a7b-4413-81b9-5b8f692a9930-kube-api-access-xhptm\") pod \"redhat-marketplace-4stvd\" (UID: \"19955a58-8a7b-4413-81b9-5b8f692a9930\") " pod="openshift-marketplace/redhat-marketplace-4stvd"
Jan 27 09:49:40 crc kubenswrapper[4985]: I0127 09:49:40.278531 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19955a58-8a7b-4413-81b9-5b8f692a9930-catalog-content\") pod \"redhat-marketplace-4stvd\" (UID: \"19955a58-8a7b-4413-81b9-5b8f692a9930\") " pod="openshift-marketplace/redhat-marketplace-4stvd"
Jan 27 09:49:40 crc kubenswrapper[4985]: I0127 09:49:40.278752 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19955a58-8a7b-4413-81b9-5b8f692a9930-utilities\") pod \"redhat-marketplace-4stvd\" (UID: \"19955a58-8a7b-4413-81b9-5b8f692a9930\") " pod="openshift-marketplace/redhat-marketplace-4stvd"
Jan 27 09:49:40 crc kubenswrapper[4985]: I0127 09:49:40.381544 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhptm\" (UniqueName: \"kubernetes.io/projected/19955a58-8a7b-4413-81b9-5b8f692a9930-kube-api-access-xhptm\") pod \"redhat-marketplace-4stvd\" (UID: \"19955a58-8a7b-4413-81b9-5b8f692a9930\") " pod="openshift-marketplace/redhat-marketplace-4stvd"
Jan 27 09:49:40 crc kubenswrapper[4985]: I0127 09:49:40.381667 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19955a58-8a7b-4413-81b9-5b8f692a9930-catalog-content\") pod \"redhat-marketplace-4stvd\" (UID: \"19955a58-8a7b-4413-81b9-5b8f692a9930\") " pod="openshift-marketplace/redhat-marketplace-4stvd"
Jan 27 09:49:40 crc kubenswrapper[4985]: I0127 09:49:40.381832 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19955a58-8a7b-4413-81b9-5b8f692a9930-utilities\") pod \"redhat-marketplace-4stvd\" (UID: \"19955a58-8a7b-4413-81b9-5b8f692a9930\") " pod="openshift-marketplace/redhat-marketplace-4stvd"
Jan 27 09:49:40 crc kubenswrapper[4985]: I0127 09:49:40.382203 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19955a58-8a7b-4413-81b9-5b8f692a9930-catalog-content\") pod \"redhat-marketplace-4stvd\" (UID: \"19955a58-8a7b-4413-81b9-5b8f692a9930\") " pod="openshift-marketplace/redhat-marketplace-4stvd"
Jan 27 09:49:40 crc kubenswrapper[4985]: I0127 09:49:40.382332 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19955a58-8a7b-4413-81b9-5b8f692a9930-utilities\") pod \"redhat-marketplace-4stvd\" (UID: \"19955a58-8a7b-4413-81b9-5b8f692a9930\") " pod="openshift-marketplace/redhat-marketplace-4stvd"
Jan 27 09:49:40 crc kubenswrapper[4985]: I0127 09:49:40.403257 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhptm\" (UniqueName: \"kubernetes.io/projected/19955a58-8a7b-4413-81b9-5b8f692a9930-kube-api-access-xhptm\") pod \"redhat-marketplace-4stvd\" (UID: \"19955a58-8a7b-4413-81b9-5b8f692a9930\") " pod="openshift-marketplace/redhat-marketplace-4stvd"
Jan 27 09:49:40 crc kubenswrapper[4985]: I0127 09:49:40.504657 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4stvd"
Jan 27 09:49:41 crc kubenswrapper[4985]: I0127 09:49:41.019947 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4stvd"]
Jan 27 09:49:41 crc kubenswrapper[4985]: I0127 09:49:41.984662 4985 generic.go:334] "Generic (PLEG): container finished" podID="19955a58-8a7b-4413-81b9-5b8f692a9930" containerID="ded0d72e6370186c402e599bc3ae296ed31425b75384358912afcde0428c56bc" exitCode=0
Jan 27 09:49:41 crc kubenswrapper[4985]: I0127 09:49:41.984877 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4stvd" event={"ID":"19955a58-8a7b-4413-81b9-5b8f692a9930","Type":"ContainerDied","Data":"ded0d72e6370186c402e599bc3ae296ed31425b75384358912afcde0428c56bc"}
Jan 27 09:49:41 crc kubenswrapper[4985]: I0127 09:49:41.985794 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4stvd" event={"ID":"19955a58-8a7b-4413-81b9-5b8f692a9930","Type":"ContainerStarted","Data":"7580a39ac6b7d3df63bb3f3ea4f517bf594f3ab369886dacc33730823c0c7f68"}
Jan 27 09:49:42 crc kubenswrapper[4985]: I0127 09:49:42.997984 4985 generic.go:334] "Generic (PLEG): container finished" podID="19955a58-8a7b-4413-81b9-5b8f692a9930" containerID="0e7f90e4039cb056921801498ffa2b1067e2a82beb0ee17445df37bcd97e681b" exitCode=0
Jan 27 09:49:42 crc kubenswrapper[4985]: I0127 09:49:42.998171 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4stvd" event={"ID":"19955a58-8a7b-4413-81b9-5b8f692a9930","Type":"ContainerDied","Data":"0e7f90e4039cb056921801498ffa2b1067e2a82beb0ee17445df37bcd97e681b"}
Jan 27 09:49:44 crc kubenswrapper[4985]: I0127 09:49:44.015967 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4stvd" event={"ID":"19955a58-8a7b-4413-81b9-5b8f692a9930","Type":"ContainerStarted","Data":"c9b7614d9c7a54bd1d7201f45299019a3dbcbe5fbf53d65e7d2394bbab30c46e"}
Jan 27 09:49:44 crc kubenswrapper[4985]: I0127 09:49:44.050935 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4stvd" podStartSLOduration=2.611198773 podStartE2EDuration="4.050906736s" podCreationTimestamp="2026-01-27 09:49:40 +0000 UTC" firstStartedPulling="2026-01-27 09:49:41.98719609 +0000 UTC m=+3366.278290941" lastFinishedPulling="2026-01-27 09:49:43.426904053 +0000 UTC m=+3367.717998904" observedRunningTime="2026-01-27 09:49:44.038822767 +0000 UTC m=+3368.329917628" watchObservedRunningTime="2026-01-27 09:49:44.050906736 +0000 UTC m=+3368.342001587"
Jan 27 09:49:50 crc kubenswrapper[4985]: I0127 09:49:50.452468 4985 scope.go:117] "RemoveContainer" containerID="c1f1e255695e7c290fa12c0855087584f49fd2bbb4c37a4f7179d53498c81468"
Jan 27 09:49:50 crc kubenswrapper[4985]: E0127 09:49:50.453485 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp9n5_openshift-machine-config-operator(c066dd2f-48d4-4f4f-935d-0e772678e610)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610"
Jan 27 09:49:50 crc kubenswrapper[4985]: I0127 09:49:50.505303 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4stvd"
Jan 27 09:49:50 crc kubenswrapper[4985]: I0127 09:49:50.505390 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4stvd"
Jan 27 09:49:50 crc kubenswrapper[4985]: I0127 09:49:50.588494 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4stvd"
Jan 27 09:49:51 crc kubenswrapper[4985]: I0127 09:49:51.181220 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4stvd"
Jan 27 09:49:51 crc kubenswrapper[4985]: I0127 09:49:51.267549 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4stvd"]
Jan 27 09:49:53 crc kubenswrapper[4985]: I0127 09:49:53.131662 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4stvd" podUID="19955a58-8a7b-4413-81b9-5b8f692a9930" containerName="registry-server" containerID="cri-o://c9b7614d9c7a54bd1d7201f45299019a3dbcbe5fbf53d65e7d2394bbab30c46e" gracePeriod=2
Jan 27 09:49:53 crc kubenswrapper[4985]: I0127 09:49:53.612890 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4stvd"
Jan 27 09:49:53 crc kubenswrapper[4985]: I0127 09:49:53.723122 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19955a58-8a7b-4413-81b9-5b8f692a9930-catalog-content\") pod \"19955a58-8a7b-4413-81b9-5b8f692a9930\" (UID: \"19955a58-8a7b-4413-81b9-5b8f692a9930\") "
Jan 27 09:49:53 crc kubenswrapper[4985]: I0127 09:49:53.723228 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhptm\" (UniqueName: \"kubernetes.io/projected/19955a58-8a7b-4413-81b9-5b8f692a9930-kube-api-access-xhptm\") pod \"19955a58-8a7b-4413-81b9-5b8f692a9930\" (UID: \"19955a58-8a7b-4413-81b9-5b8f692a9930\") "
Jan 27 09:49:53 crc kubenswrapper[4985]: I0127 09:49:53.723335 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19955a58-8a7b-4413-81b9-5b8f692a9930-utilities\") pod \"19955a58-8a7b-4413-81b9-5b8f692a9930\" (UID: \"19955a58-8a7b-4413-81b9-5b8f692a9930\") "
Jan 27 09:49:53 crc kubenswrapper[4985]: I0127 09:49:53.724348 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19955a58-8a7b-4413-81b9-5b8f692a9930-utilities" (OuterVolumeSpecName: "utilities") pod "19955a58-8a7b-4413-81b9-5b8f692a9930" (UID: "19955a58-8a7b-4413-81b9-5b8f692a9930"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 09:49:53 crc kubenswrapper[4985]: I0127 09:49:53.731863 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19955a58-8a7b-4413-81b9-5b8f692a9930-kube-api-access-xhptm" (OuterVolumeSpecName: "kube-api-access-xhptm") pod "19955a58-8a7b-4413-81b9-5b8f692a9930" (UID: "19955a58-8a7b-4413-81b9-5b8f692a9930"). InnerVolumeSpecName "kube-api-access-xhptm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 09:49:53 crc kubenswrapper[4985]: I0127 09:49:53.757547 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19955a58-8a7b-4413-81b9-5b8f692a9930-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "19955a58-8a7b-4413-81b9-5b8f692a9930" (UID: "19955a58-8a7b-4413-81b9-5b8f692a9930"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 09:49:53 crc kubenswrapper[4985]: I0127 09:49:53.826507 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19955a58-8a7b-4413-81b9-5b8f692a9930-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 09:49:53 crc kubenswrapper[4985]: I0127 09:49:53.826594 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xhptm\" (UniqueName: \"kubernetes.io/projected/19955a58-8a7b-4413-81b9-5b8f692a9930-kube-api-access-xhptm\") on node \"crc\" DevicePath \"\""
Jan 27 09:49:53 crc kubenswrapper[4985]: I0127 09:49:53.826618 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19955a58-8a7b-4413-81b9-5b8f692a9930-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 09:49:54 crc kubenswrapper[4985]: I0127 09:49:54.150098 4985 generic.go:334] "Generic (PLEG): container finished" podID="19955a58-8a7b-4413-81b9-5b8f692a9930" containerID="c9b7614d9c7a54bd1d7201f45299019a3dbcbe5fbf53d65e7d2394bbab30c46e" exitCode=0
Jan 27 09:49:54 crc kubenswrapper[4985]: I0127 09:49:54.150167 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4stvd" event={"ID":"19955a58-8a7b-4413-81b9-5b8f692a9930","Type":"ContainerDied","Data":"c9b7614d9c7a54bd1d7201f45299019a3dbcbe5fbf53d65e7d2394bbab30c46e"}
Jan 27 09:49:54 crc kubenswrapper[4985]: I0127 09:49:54.150214 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4stvd" event={"ID":"19955a58-8a7b-4413-81b9-5b8f692a9930","Type":"ContainerDied","Data":"7580a39ac6b7d3df63bb3f3ea4f517bf594f3ab369886dacc33730823c0c7f68"}
Jan 27 09:49:54 crc kubenswrapper[4985]: I0127 09:49:54.150244 4985 scope.go:117] "RemoveContainer" containerID="c9b7614d9c7a54bd1d7201f45299019a3dbcbe5fbf53d65e7d2394bbab30c46e"
Jan 27 09:49:54 crc kubenswrapper[4985]: I0127 09:49:54.150442 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4stvd"
Jan 27 09:49:54 crc kubenswrapper[4985]: I0127 09:49:54.196419 4985 scope.go:117] "RemoveContainer" containerID="0e7f90e4039cb056921801498ffa2b1067e2a82beb0ee17445df37bcd97e681b"
Jan 27 09:49:54 crc kubenswrapper[4985]: I0127 09:49:54.207281 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4stvd"]
Jan 27 09:49:54 crc kubenswrapper[4985]: I0127 09:49:54.232247 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4stvd"]
Jan 27 09:49:54 crc kubenswrapper[4985]: I0127 09:49:54.233432 4985 scope.go:117] "RemoveContainer" containerID="ded0d72e6370186c402e599bc3ae296ed31425b75384358912afcde0428c56bc"
Jan 27 09:49:54 crc kubenswrapper[4985]: I0127 09:49:54.295083 4985 scope.go:117] "RemoveContainer" containerID="c9b7614d9c7a54bd1d7201f45299019a3dbcbe5fbf53d65e7d2394bbab30c46e"
Jan 27 09:49:54 crc kubenswrapper[4985]: E0127 09:49:54.295669 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9b7614d9c7a54bd1d7201f45299019a3dbcbe5fbf53d65e7d2394bbab30c46e\": container with ID starting with c9b7614d9c7a54bd1d7201f45299019a3dbcbe5fbf53d65e7d2394bbab30c46e not found: ID does not exist" containerID="c9b7614d9c7a54bd1d7201f45299019a3dbcbe5fbf53d65e7d2394bbab30c46e"
Jan 27 09:49:54 crc kubenswrapper[4985]: I0127 09:49:54.295745 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9b7614d9c7a54bd1d7201f45299019a3dbcbe5fbf53d65e7d2394bbab30c46e"} err="failed to get container status \"c9b7614d9c7a54bd1d7201f45299019a3dbcbe5fbf53d65e7d2394bbab30c46e\": rpc error: code = NotFound desc = could not find container \"c9b7614d9c7a54bd1d7201f45299019a3dbcbe5fbf53d65e7d2394bbab30c46e\": container with ID starting with c9b7614d9c7a54bd1d7201f45299019a3dbcbe5fbf53d65e7d2394bbab30c46e not found: ID does not exist"
Jan 27 09:49:54 crc kubenswrapper[4985]: I0127 09:49:54.295788 4985 scope.go:117] "RemoveContainer" containerID="0e7f90e4039cb056921801498ffa2b1067e2a82beb0ee17445df37bcd97e681b"
Jan 27 09:49:54 crc kubenswrapper[4985]: E0127 09:49:54.296095 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e7f90e4039cb056921801498ffa2b1067e2a82beb0ee17445df37bcd97e681b\": container with ID starting with 0e7f90e4039cb056921801498ffa2b1067e2a82beb0ee17445df37bcd97e681b not found: ID does not exist" containerID="0e7f90e4039cb056921801498ffa2b1067e2a82beb0ee17445df37bcd97e681b"
Jan 27 09:49:54 crc kubenswrapper[4985]: I0127 09:49:54.296128 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e7f90e4039cb056921801498ffa2b1067e2a82beb0ee17445df37bcd97e681b"} err="failed to get container status \"0e7f90e4039cb056921801498ffa2b1067e2a82beb0ee17445df37bcd97e681b\": rpc error: code = NotFound desc = could not find container \"0e7f90e4039cb056921801498ffa2b1067e2a82beb0ee17445df37bcd97e681b\": container with ID starting with 0e7f90e4039cb056921801498ffa2b1067e2a82beb0ee17445df37bcd97e681b not found: ID does not exist"
Jan 27 09:49:54 crc kubenswrapper[4985]: I0127 09:49:54.296146 4985 scope.go:117] "RemoveContainer" containerID="ded0d72e6370186c402e599bc3ae296ed31425b75384358912afcde0428c56bc"
Jan 27 09:49:54 crc kubenswrapper[4985]: E0127 09:49:54.296462 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ded0d72e6370186c402e599bc3ae296ed31425b75384358912afcde0428c56bc\": container with ID starting with ded0d72e6370186c402e599bc3ae296ed31425b75384358912afcde0428c56bc not found: ID does not exist" containerID="ded0d72e6370186c402e599bc3ae296ed31425b75384358912afcde0428c56bc"
Jan 27 09:49:54 crc kubenswrapper[4985]: I0127 09:49:54.296532 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ded0d72e6370186c402e599bc3ae296ed31425b75384358912afcde0428c56bc"} err="failed to get container status \"ded0d72e6370186c402e599bc3ae296ed31425b75384358912afcde0428c56bc\": rpc error: code = NotFound desc = could not find container \"ded0d72e6370186c402e599bc3ae296ed31425b75384358912afcde0428c56bc\": container with ID starting with ded0d72e6370186c402e599bc3ae296ed31425b75384358912afcde0428c56bc not found: ID does not exist"
Jan 27 09:49:54 crc kubenswrapper[4985]: I0127 09:49:54.467492 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19955a58-8a7b-4413-81b9-5b8f692a9930" path="/var/lib/kubelet/pods/19955a58-8a7b-4413-81b9-5b8f692a9930/volumes"
Jan 27 09:50:03 crc kubenswrapper[4985]: I0127 09:50:03.453914 4985 scope.go:117] "RemoveContainer" containerID="c1f1e255695e7c290fa12c0855087584f49fd2bbb4c37a4f7179d53498c81468"
Jan 27 09:50:03 crc kubenswrapper[4985]: E0127 09:50:03.455032 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp9n5_openshift-machine-config-operator(c066dd2f-48d4-4f4f-935d-0e772678e610)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610"
Jan 27 09:50:15 crc kubenswrapper[4985]: I0127 09:50:15.452487 4985 scope.go:117] "RemoveContainer" containerID="c1f1e255695e7c290fa12c0855087584f49fd2bbb4c37a4f7179d53498c81468"
Jan 27 09:50:15 crc kubenswrapper[4985]: E0127 09:50:15.453711 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp9n5_openshift-machine-config-operator(c066dd2f-48d4-4f4f-935d-0e772678e610)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610"
Jan 27 09:50:30 crc kubenswrapper[4985]: I0127 09:50:30.453230 4985 scope.go:117] "RemoveContainer" containerID="c1f1e255695e7c290fa12c0855087584f49fd2bbb4c37a4f7179d53498c81468"
Jan 27 09:50:30 crc kubenswrapper[4985]: E0127 09:50:30.454222 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp9n5_openshift-machine-config-operator(c066dd2f-48d4-4f4f-935d-0e772678e610)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610"
Jan 27 09:50:41 crc kubenswrapper[4985]: I0127 09:50:41.453471 4985 scope.go:117] "RemoveContainer" containerID="c1f1e255695e7c290fa12c0855087584f49fd2bbb4c37a4f7179d53498c81468"
Jan 27 09:50:41 crc kubenswrapper[4985]: E0127 09:50:41.455029 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp9n5_openshift-machine-config-operator(c066dd2f-48d4-4f4f-935d-0e772678e610)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" podUID="c066dd2f-48d4-4f4f-935d-0e772678e610"
Jan 27 09:50:53 crc kubenswrapper[4985]: I0127 09:50:53.453769 4985 scope.go:117] "RemoveContainer" containerID="c1f1e255695e7c290fa12c0855087584f49fd2bbb4c37a4f7179d53498c81468"
Jan 27 09:50:53 crc kubenswrapper[4985]: I0127 09:50:53.952471 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp9n5" event={"ID":"c066dd2f-48d4-4f4f-935d-0e772678e610","Type":"ContainerStarted","Data":"e6afef543ddd1ae68cc419aadf63106cd02b69b5d320e1e9d58978817197e1ff"}